<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" >
    <channel>
        <title>PIXLS.US</title>
        <link>https://pixls.us</link>
        <description>The PIXLS.US feed. The F/OSS photography website.</description>

        <atom:link href="https://pixls.us/feed.xml" rel="self" type="application/rss+xml" />
        <ttl>60</ttl>
        <lastBuildDate>Thu, 12 Jan 2017 18:05:27 GMT</lastBuildDate>
        <category></category>
        <image>
            <url>https://pixls.us/images/logo/px-logo-url-250.png</url>
            <title>PIXLS.US</title>
            <link>https://pixls.us</link>
        </image>

        <item>
            <title><![CDATA[ New Year, New Raw Samples Website ]]></title>
            <link>https://pixls.us/blog/2017/01/new-year-new-raw-samples-website/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2017/01/new-year-new-raw-samples-website/</guid>
            <pubDate>Thu, 12 Jan 2017 17:10:38 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2017/01/new-year-new-raw-samples-website/lede_IMG_5355.jpg" /><br/>
                 <h1>New Year, New Raw Samples Website</h1>  
                 <h2>A replacement for rawsamples.ch</h2>   
                <p>Happy New Year, and I hope everyone has had a wonderful holiday!</p>
<p>We’ve been busy working on various things ourselves, including migrating <a href="http://rawpedia.rawtherapee.com">RawPedia</a> to a new server as well as building a replacement raw sample database/website to alleviate the problems that <a href="http://rawsamples.ch">rawsamples.ch</a> was having…</p>
<!-- more -->
<h2 id="rawsamples-ch-replacement"><a href="#rawsamples-ch-replacement" class="header-link-alt">rawsamples.ch Replacement</a></h2>
<p><a href="http://rawsamples.ch">Rawsamples.ch</a> is a website with the goal to:</p>
<blockquote>
<p> …provide RAW-Files of nearly all available Digitalcameras mainly to software-developers.  [sic]</p>
</blockquote>
<p>It was created by Jakob Rohrbach and had been running since March 2007, amassing over 360 raw files in that time from various manufacturers and cameras. Unfortunately, in 2016 the site was hit with an SQL injection attack that ended up corrupting the database for the <a href="https://www.joomla.org/">Joomla</a> install that hosted the site. To compound the pain, there were no database backups… :(</p>
<p>On the good side, the <a href="https://pixls.us">PIXLS.US</a> community has some dangerous folks with idle hands. Our friendly, neighborhood @andabata (<a href="https://www.flickr.com/photos/andabata" title="andabata&#39;s Flickr page">Kees Guequierre</a>) had some time off at the end of the year and a desire to build something. You may know @andabata as the fellow responsible for the super-useful <a href="https://dtstyle.net/">dtstyle</a> website, which is chock full of <a href="http://darktable.org">darktable</a> styles to peruse and download (if you haven’t heard of it before &ndash; you’re welcome!). He’s also my go-to for macro photography and is responsible for this awesome image used on a slide for the <a href="http://libregraphicsmeeting.org/2016/">Libre Graphics Meeting</a>:</p>
<figure>
<img src="https://pixls.us/blog/2017/01/new-year-new-raw-samples-website/pixls-11.jpg" alt='PIXLS.US LGM Slide'>
</figure>

<p>Luckily, he decided to build a site where contributors could upload sample raw files from their cameras for everyone to use &ndash; particularly developers. We downloaded the archive of the raw files kept at rawsamples.ch to include with files that we already had. The biggest difference between the files from rawsamples.ch and <a href="https://raw.pixls.us">raw.pixls.us</a> is the licensing.  The existing files, and the preference for any new contributions, are licensed as <a href="https://creativecommons.org/publicdomain/zero/1.0/" title="Creative Commons Zero - Public Domain">Creative Commons Zero - Public Domain</a> (as opposed to <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" title="Creative Commons Attribution-NonCommercial-ShareAlike">CC-BY-NC-SA</a>).</p>
<p>After some hacking, with input and guidance from <a href="http://darktable.org">darktable</a> developer <a href="https://github.com/LebedevRI">Roman Lebedev</a>, the site was finally ready.
The repository for it can be found on GitHub: <a href="https://github.com/pixlsus/raw">raw.pixls.us repo</a>.</p>
<h2 id="-raw-pixls-us-"><a href="#-raw-pixls-us-" class="header-link-alt">raw.pixls.us</a></h2>
<p>The site is now live at <a href="https://raw.pixls.us">https://raw.pixls.us</a>.</p>
<p>You can <a href="https://raw.pixls.us#repo">look at the submitted files</a> and search/sort through all of them (and download the ones you want).</p>
<p>In addition to browsing the archive, it would be fantastic if you were able to supplement the database by uploading sample images.  Many of the files from the rawsamples.ch archive are licensed <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/" title="Creative Commons Attribution-NonCommercial-ShareAlike">CC-BY-NC-SA</a>, but we’d rather have the files licensed <a href="https://creativecommons.org/publicdomain/zero/1.0/" title="Creative Commons Zero - Public Domain">Creative Commons Zero - Public Domain</a>.  CC0 is preferable because if the sample raw files are separated from the database, they can safely be redistributed without attribution. So if you have a camera that is already in the list with the more restrictive license, then please consider uploading a replacement for us!</p>
<p><strong>We are looking for shots that are:</strong></p>
<ul>
<li>Lens mounted on the camera</li>
<li>Lens cap off</li>
<li>In focus</li>
<li>Properly exposed (not over/under)</li>
<li>Landscape orientation</li>
<li>Licensed under the <a href="https://creativecommons.org/publicdomain/zero/1.0/" title="Creative Commons Zero - Public Domain">Creative Commons Zero</a></li>
</ul>
<p><strong>We are <em>not</em> looking for:</strong></p>
<ul>
<li>Series of images with different ISO, aperture, shutter speed, white balance, or lighting<br>(Even if it’s a shot of a color target)</li>
<li>DNG files created with Adobe DNG Converter</li>
</ul>
<p>Please take a moment and see if you can provide samples to help the developers!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Welcome Digital Painters ]]></title>
            <link>https://pixls.us/blog/2016/12/welcome-digital-painters/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/12/welcome-digital-painters/</guid>
            <pubDate>Mon, 05 Dec 2016 21:50:29 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/12/welcome-digital-painters/lede_Fisherman.jpg" /><br/>
                 <h1>Welcome Digital Painters</h1>  
                 <h2>You mean there's art outside photography?</h2>   
                <p>Yes, there really is art outside photography. :)</p>
<p>The history and evolution of painting has undergone a transformation similar to that of most things adapting to a digital age. As photographers, we adapted techniques and tools commonly used in the darkroom to software, and found new ways to extend what was possible to help us achieve a vision. Just as we adapted our skills to a new environment, so too did traditional artists, like painters.</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/patdavid-by-deveze.jpg" alt='Pat David Painting by Gustavo Deveze' width='400' height='470'>
<figcaption>
<a href="https://pixls.us/images/Pat-David-Headshot-Crop-2048-Q60.jpg" title="Pat David&#39;s Headshot">My headshot</a>, as painted by <a href="http://www.deveze.com.ar/" title="Gustavo Deveze&#39;s website">Gustavo Deveze</a>
</figcaption>
</figure>

<p>These artists adapted by not only emulating the results of various techniques, but by pushing forward the boundaries of what was possible through these new (<em>Free Software</em>) tools.</p>
<h2 id="impetus"><a href="#impetus" class="header-link-alt">Impetus</a></h2>
<p>Digital painting discussion around Free Software has lacked a good outlet for collaboration, one that opens the conversation for others to learn from and participate in. This is similar to the situation the Free Software + photography world was in that prompted the creation of <a href="https://pixls.us">pixls.us</a>.</p>
<p>Due to this, both <a href="http://americogobbo.com.br">Americo Gobbo</a> and <a href="http://ninedegreesbelow.com/">Elle Stone</a> reached out to us to see if we could create a new category in the community about Digital Painting with a focus on promoting serious discussion around techniques, processes, and associated tools.</p>
<p>Both of them have been working hard on advancing the capabilities and quality of various Free Software tools for years now.  Americo brings with him the interest of other painters who want to help accelerate the growth and adoption of Free Software projects for painting (and more) in a high-quality and professional capacity. A little background about them:</p>
<p><strong><a href="http://americogobbo.com.br">Americo Gobbo</a></strong> studied Fine Arts in Bologna, Italy. Today he lives and works in Brazil, where he continues to develop studies and create experimentation with painting and drawing mainly within the digital medium in which he tries to replicate the traditional effects and techniques from the real world to the virtual.</p>
<figure>
<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/Imaginary Landscape - Americo Gobbo.png" alt='Imaginary Landscape Painting by Americo Gobbo' width='610' height='377'>
<figcaption>
Imaginary Landscape - Wet sketches, experiments on GIMP 2.9.+ <br>
<a href="http://americogobbo.com.br">Americo Gobbo</a>, 2016. 
</figcaption>
</figure>

<p><strong><a href="http://ninedegreesbelow.com/">Elle Stone</a></strong> is an amateur photographer with a long-standing interest in the history of photography and print making, and in combining painting and photography. She’s been contributing to GIMP development since 2012, mostly in the areas of color management and proper color mixing and blending.</p>
<figure>
<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/Leaves in May - Elle Stone.jpg" alt='Leaves in May Image by Elle Stone' width='480' height='626'>
<figcaption>
Leaves in May, GIMP-2.9 (GIMP-CCE)<br> 
<a href="http://ninedegreesbelow.com/">Elle Stone</a>, 2016.
</figcaption>
</figure>

<h2 id="artists"><a href="#artists" class="header-link-alt">Artists</a></h2>
<p>With this introductory post to the new Digital Painting category we feature Gustavo Deveze, a visual artist using free software whose work is characterized by mixing different media and techniques. In future posts we want to continue featuring artists using free software.</p>
<h3 id="gustavo-deveze"><a href="#gustavo-deveze" class="header-link-alt">Gustavo Deveze</a></h3>
<p>Gustavo Deveze is a visual artist and lives in Buenos Aires. He trained as a draftsman at the National School of Fine Arts “Manuel Belgrano”, and as a filmmaker at <a href="http://idac.edu.ar/">IDAC - Instituto de Arte Cinematográfica</a> in Avellaneda, Argentina.</p>
<p>His works utilize different materials and supports, and have been released by several publishers, although in recent years he has worked mainly in digital format and with free software.
He has participated in national and international shows and exhibitions of graphics and cinema, earning many awards. His latest exposition can be seen on issuu.com:
<a href="https://issuu.com/gustavodeveze/docs/inadecuado2edicion">https://issuu.com/gustavodeveze/docs/inadecuado2edicion</a></p>
<p>Website: <a href="http://www.deveze.com.ar">http://www.deveze.com.ar</a></p>
<ul>
<li>Blog: <a href="http://jeneverito.blogspot.com.ar/">http://jeneverito.blogspot.com.ar/</a></li>
<li>Google+: <a href="https://plus.google.com/107589083968107443043">https://plus.google.com/107589083968107443043</a></li>
<li>Facebook: <a href="https://www.facebook.com/gustavo.deveze">https://www.facebook.com/gustavo.deveze</a></li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/The Emperors happiness.jpg" title="Cudgels and Bootlickers: The Emperor's happiness - Gustavo Deveze" alt="Cudgels and Bootlickers: The Emperor's happiness - Gustavo Deveze" width='640' height='640'>
<figcaption>Cudgels and Bootlickers: The Emperor’s happiness - <a href="http://www.deveze.com.ar/" title="Gustavo Deveze&#39;s website">Gustavo Deveze</a>.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/12/welcome-digital-painters/Lets be clear.jpg"  title="Let's be clear: the village's idiot is not tall... - Gustavo Deveze" alt="Let's be clear: the village's idiot is not tall... - Gustavo Deveze" width='640' height='640'>
<figcaption>Let’s be clear: the village’s idiot is not tall… - <a href="http://www.deveze.com.ar/" title="Gustavo Deveze&#39;s website">Gustavo Deveze</a>.
</figcaption>
</figure>


<h2 id="digital-painting-category"><a href="#digital-painting-category" class="header-link-alt">Digital Painting Category</a></h2>
<p>The new Digital Painting category is for discussing painting techniques, processes, and associated tools in a digital environment using Free/Libre software. Some relevant topics might include:</p>
<ul>
<li><p>Emulating non-digital art, drawing on diverse historical and cultural genres and styles of art.</p>
</li>
<li><p>Emulating traditional “wet darkroom” photography, drawing on the rich history of photographic and printmaking techniques.</p>
</li>
<li><p>Exploring ways of making images that were difficult or impossible before the advent of new algorithms and fast computers to run them on, including averaging over large collections of images.</p>
</li>
<li><p>Discussion of topics that transcend “just photography” or “just painting”, such as composition, creating a sense of volume or distance, depicting or emphasizing light and shadow, color mixing, color management, and so forth.</p>
</li>
<li><p>Combining painting and photography: long before digital image editing, artists used photographs as aids to, and as part of, making paintings and illustrations, and photographers incorporated painting techniques into their photographic processing and printmaking.</p>
</li>
<li><p>An important goal is also to encourage artists to submit tutorials and videos about Digital Painting with Free Software and to also submit high-quality finished works.</p>
</li>
</ul>
<h2 id="say-hello-"><a href="#say-hello-" class="header-link-alt">Say Hello!</a></h2>
<p>Please feel free to stop into the new <a href="https://discuss.pixls.us/c/digital-painting">Digital Painting category</a>, introduce yourself, and say hello! I look forward to seeing what our fellow artists are up to.</p>
<p><small>All images not otherwise specified are licensed <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">CC-BY-NC-SA</a></small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ A Masashi Wakui look with GIMP ]]></title>
            <link>https://pixls.us/articles/a-masashi-wakui-look-with-gimp/</link>
            <guid isPermaLink="true">https://pixls.us/articles/a-masashi-wakui-look-with-gimp/</guid>
            <pubDate>Mon, 28 Nov 2016 19:25:21 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/lede_Akihabara.jpg" /><br/>
                 <h1>A Masashi Wakui look with GIMP</h1>  
                 <h2>A color bloom fit for night urban landscapes</h2>   
<p>This tutorial explains how to achieve an effect based on the post processing by <a href="https://www.flickr.com/photos/megane_wakui/">photographer Masashi Wakui</a>.  His primary subjects are urban landscape views of Japan, where he uses some pretty aggressive color toning to complement his scenes along with a soft ‘bloom’ effect on the highlights. The results evoke a strong feeling of an almost cyberpunk or futuristic aesthetic (particularly for fans of <a href="http://www.imdb.com/title/tt0083658/">Blade Runner</a> or <a href="http://www.imdb.com/title/tt0094625">Akira</a>!).</p>
<figure>
<a href="https://www.flickr.com/photos/megane_wakui/24803565399/in/dateposted/" title="Untitled by Masashi Wakui"><img src="https://c8.staticflickr.com/2/1706/24803565399_6b41ea3a17_z.jpg" width="640" height="426" alt="Untitled"></a>

<a href="https://www.flickr.com/photos/megane_wakui/24405269789/in/dateposted/" title="Untitled by Masashi Wakui"><img src="https://c6.staticflickr.com/2/1464/24405269789_4a80f97545_z.jpg" width="640" height="427" alt="Untitled"></a>

<a href="https://www.flickr.com/photos/megane_wakui/22817821874/in/dateposted/" title="Untitled by Masashi Wakui"><img src="https://c3.staticflickr.com/1/742/22817821874_267a642ff9_z.jpg" width="640" height="427" alt="Untitled"></a>
</figure>

<p>This tutorial started its life in the <a href="https://discuss.pixls.us/t/technique-inspired-by-masashi-wakui-post/2618" title="Technique inspired by masashi wakui post">pixls.us forum</a>, inspired by <a href="https://discuss.pixls.us/t/achieve-the-masashi-wakui-look/634" title="Achieve the Masashi Wakui look">a forum post</a> seeking assistance on replicating the color grading and overall look/feel of Masashi’s photography.</p>
<h2 id="prerequisites">Prerequisites<a href="#prerequisites" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Following along requires a couple of plugins for GIMP.</p>
<p>The <a href="http://registry.gimp.org/node/28644">Luminosity Mask</a> filter will be used to target color grading to specific tones. You can find out more about <em>luminosity masks</em> in GIMP at <a href="http://blog.patdavid.net/2011/10/getting-around-in-gimp-luminosity-masks.html">Pat David’s blog post</a> and his <a href="http://blog.patdavid.net/2013/11/getting-around-in-gimp-luminosity-masks.html">follow-up blog post</a>.  If you need to install the script, directions can be found (along with the scripts) at the <a href="https://github.com/pixlsus/GIMP-Scripts#installing-gimp-scripts-scheme-scm">PIXLS.US GIMP scripts git repository</a>.</p>
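<p>For intuition, here is a tiny sketch of one common way such masks are built (the exact construction used by the GIMP script may differ; the channel names and the squaring step below are assumptions based on the usual Kuyper-style intersection masks):</p>

```python
# Sketch of darks-side luminosity masks, channels in [0, 1].
# The "DD" construction here (D intersected with itself, i.e. squared)
# is an assumption; the actual script's math may differ.

def luma(r, g, b):
    """Rec. 709 luma of an RGB pixel."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def masks(pixel):
    """Return (L, D, DD) mask values for one RGB pixel."""
    l = luma(*pixel)   # lights mask: bright pixels get high values
    d = 1.0 - l        # darks mask: inverse of the lights mask
    dd = d * d         # darker-darks: progressively restricts to shadows
    return l, d, dd
```

Painting such a mask into a layer mask restricts any subsequent curve or color move to the targeted tonal range.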
<p>You will also need the <a href="http://registry.gimp.org/node/11742">Wavelet decompose</a> plugin. The easiest way to get this plugin is to use the one available in <a href="https://gmic.eu">G’MIC</a>. As a bonus you’ll get access to many other incredible filters as well! Once you’ve installed <a href="https://gmic.eu">G’MIC</a> the filter can be found under<br><code>Details → Split details [wavelets]</code>.</p>
<p>We will do some basic toning and then apply the wavelet decompose filter to do some magic.
Two things will be used from the wavelet decompose results:</p>
<ul>
<li>the residual</li>
<li>the coarsest wavelet scale (number 8 in this case)</li>
</ul>
<p>The basic idea is to use the residual of the wavelet decompose filter to color the image. What this does is average and blur the colors. The trick strengthens the effect of the surroundings being colored by the lights. The number of wavelet scales to use depends on the pixel size of the picture; the relative size of the coarsest wavelet scale compared to the picture is the defining parameter. Wavelet scale 8 will then produce overemphasised local contrasts, which will accentuate the lights further. This works nicely in pictures with lights, as the brightest areas will be around the lights. Used on a daytime picture, this effect will also accentuate brighter areas, which leads to a kind of “glow” effect. I tried this as well, and it looks good on some pictures while on others it looks just wrong. Try it!</p>
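<p>The residual-coloring idea can be sketched outside GIMP. In this illustration the residual is approximated with a crude box blur (an assumption: the real filter derives it from wavelet scales), and the “color” blend mode is mimicked by rescaling each blurred pixel so its luma matches the original pixel:</p>

```python
# Sketch: luminosity from the original image, color from a heavily
# blurred copy (a crude stand-in for the wavelet residual).

def box_blur(img, radius):
    """Naive box blur of an RGB image stored as img[y][x] = (r, g, b)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            acc, n = [0.0, 0.0, 0.0], 0
            for yy in range(max(0, y - radius), min(h, y + radius + 1)):
                for xx in range(max(0, x - radius), min(w, x + radius + 1)):
                    for c in range(3):
                        acc[c] += img[yy][xx][c]
                    n += 1
            row.append(tuple(a / n for a in acc))
        out.append(row)
    return out

def luma(p):
    return 0.2126 * p[0] + 0.7152 * p[1] + 0.0722 * p[2]

def color_from_residual(img, radius=2):
    """Rescale each blurred pixel so its luma matches the original,
    approximating GIMP's 'color' blend mode in spirit (not exactly)."""
    residual = box_blur(img, radius)
    out = []
    for orig_row, res_row in zip(img, residual):
        row = []
        for orig, res in zip(orig_row, res_row):
            rl = luma(res)
            scale = luma(orig) / rl if rl > 0 else 0.0
            row.append(tuple(min(1.0, c * scale) for c in res))
        out.append(row)
    return out
```

The larger the blur radius relative to the image, the stronger the averaged “lights bleed into surroundings” look, which mirrors the role the coarsest wavelet scales play in the GIMP workflow.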
<p>We will be applying all the following steps to this picture, taken in Akihabara, Tokyo.</p>
<figure class="big-vid">
    <a href="Akihabara_original.jpg">
      <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_base.jpg" alt="The unaltered photograph" width="960" height="590">
    </a>
    <figcaption>
    The starting image (<a href='Akihabara_original.jpg' title='Download the full resolution version to follow along'>download full resolution</a>).
    </figcaption>
</figure>

<ol>
<li><p>Apply the <em>luminosity mask</em> filter to the base picture. We will use this later.</p>
<p><span class='Cmd'>Filters → Generic → Luminosity Masks</span></p>
</li>
<li><p>Duplicate the base picture (Ctrl+Shift+D).</p>
<p><span class='Cmd'>Layer → Duplicate Layer</span></p>
</li>
<li><p>Tone the shadows of the duplicated picture using the <em>tone curve</em> by lowering the reds in the shadows. If you want your shadows to be less green, slightly raise the blues in the shadows.</p>
<p><span class='Cmd'>Colors → Curves</span></p>
<figure>
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Curves_toning.png" alt="The toning curves" width="372" height="526">
</figure>

<figure class="big-vid">
  <a href="Akihabara_tonedshadows.jpg">
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_tonedshadows_sm.jpg" alt="The photograph with the toning curve applied" width="900" height="553">
  </a>
</figure>
</li>
<li><p>Apply a <em>layer mask</em> to the duplicated and toned picture. Choose the DD luminosity mask from a channel.</p>
<p><span class='Cmd'>Layer → Mask → Add Layer Mask</span></p>
<figure>
 <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Mask-DD.png" alt='Luminosity Mask Added' width='293' height='370'>
</figure>
</li>
<li><p>With both layers visible, create a new layer from what is visible. Call this layer the “blended” layer.</p>
<p><span class='Cmd'>Layer → New from Visible</span></p>
<figure class="big-vid">
  <a href="Akihabara_blended.jpg">
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_blended_sm.jpg" alt="The photograph after the blended layer" width='900' height='553'>
  </a>
</figure>
</li>
<li><p>Apply the <em>wavelet decompose</em> filter to the “blended” layer and choose 9 as number of detail scales.  Set the G’MIC <em>output</em> mode to “New layer(s)” (see below).</p>
<p><span class='Cmd'>Filters → G’MIC<br>
Details → Split Details [wavelets]</span></p>
<figure class='big-vid'>
  <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/gmic-wavelet.png" alt="G'MIC Split Details Wavelet Decompose dialog" width='900' height='457'>
<figcaption>
Remember to set G’MIC to output the results on <em>New Layer(s)</em>.
</figcaption>
</figure>
</li>
<li><p>Make the <strong>blended</strong> and <strong>blended [residual]</strong> layers visible. Then set the mode of the <strong>blended [residual]</strong> layer to <em>color</em>. This will give you a picture with averaged, blurred colors.</p>
<figure class="big-vid">
  <a href="Akihabara_color_100.jpg">
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_color_100_sm.jpg" alt="The fully colored photograph" width='899' height='553'>
  </a>
</figure>
</li>
<li><p>Turn the opacity of the <strong>blended [residual]</strong> down to 70%, or any other value to your taste, to bring back some color detail.</p>
<figure class="big-vid">
  <a href="Akihabara_color_70.jpg">
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_color_70_sm.jpg" alt="The partially colored photograph" width='899' height='553'>
  </a>
</figure>
</li>
<li><p>Turn on the <strong>blended [scale #8]</strong> layer, set the mode to <em>grain&nbsp;merge</em>, and see how the lights start shining. Adjust opacity to taste.</p>
<figure class="big-vid">
  <a href="Akihabara_scale_8.jpg">
    <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_scale_8_sm.jpg" alt="The augmented contrast layer" width='899' height='553'>
  </a>
</figure>
</li>
<li><p>Optional: Turn the wavelet scale 3 (or any other) on to sharpen the picture and blend to taste.</p>
</li>
<li><p>Make sure the following layers are visible:</p>
<ul>
<li>blended</li>
<li>residual</li>
<li>wavelet scale 8</li>
<li>Any other wavelet scale you want to use for sharpening</li>
</ul>
</li>
<li><p>Make a new layer from visible</p>
<p><span class='Cmd'>Layer → New from Visible</span></p>
</li>
<li><p>Raise and slightly crush the shadows using the tone curve.</p>
<figure>
   <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Curves_raiseshadows.png" alt='Raise the shadow curve' width='372' height='526'>
</figure>
</li>
<li><p>Optional: Adjust saturation to taste. If there are predominantly white lights and the
colors come mainly from other objects, the residual will be washed out, as is
the case with this picture. </p>
<p>I noticed that the reds and yellows were very dominant compared to greens and blues.  So using the <strong>Hue-Saturation</strong> dialog I raised the master saturation by <em>+70</em> and lowered the yellow saturation by <em>-50</em> and lowered the red saturation by <em>-40</em> all using an overlap of <em>60</em>.</p>
</li>
</ol>
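<p>The curve moves in steps 3 and 13 amount to per-channel lookup tables. A minimal sketch using linear interpolation follows; the control points are made up for illustration, since the actual curve shapes only appear in the screenshots:</p>

```python
# Tone curves as 256-entry lookup tables built by linear interpolation
# between (input, output) control points. Control points must span 0-255.
# The specific points below are hypothetical, not the ones used above.

def make_lut(points):
    """Build a 256-entry LUT from sorted (input, output) control points."""
    points = sorted(points)
    lut = []
    for x in range(256):
        # find the segment containing x and interpolate linearly
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0) if x1 != x0 else 0.0
                lut.append(round(y0 + t * (y1 - y0)))
                break
    return lut

# Step 3 flavor: lower the reds in the shadows.
red_shadows = make_lut([(0, 0), (64, 40), (255, 255)])
# Step 13 flavor: raise and slightly crush the shadows.
crush = make_lut([(0, 24), (128, 140), (255, 255)])
```

Applying `red_shadows` to the red channel darkens values below the midpoint while leaving highlights untouched; `crush` lifts the black point the way step 13 describes.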
<p>The final result:</p>
<figure class="big-vid">
      <img src="https://pixls.us/articles/a-masashi-wakui-look-with-gimp/Akihabara_final_sm.jpg" alt="The final image!" width="960" height="590" data-swap-src="Akihabara_base.jpg">
    <figcaption>
    The final result.  (Click to compare to original.)<br>
    <a href="Akihabara_final.jpg">Download the full size result.</a>
    </figcaption>
</figure>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Giving Thanks ]]></title>
            <link>https://pixls.us/blog/2016/11/giving-thanks/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/11/giving-thanks/</guid>
            <pubDate>Tue, 22 Nov 2016 16:16:49 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/11/giving-thanks/Thanksgiving-Brownscombe-1123.jpg" /><br/>
                 <h1>Giving Thanks</h1>  
                 <h2>For an awesome community!</h2>   
                <p>Here in the U.S., we have a big holiday coming up this week: <a href="https://en.wikipedia.org/wiki/Thanksgiving_(United_States)">Thanksgiving</a>.
Serendipitously, this holiday also happens to fall when a few neat things are happening around the community, and what better time is there to recognize some folks and to give thanks of our own?  <em>No time like the present!</em></p>
<!-- more -->
<h2 id="a-special-thanks"><a href="#a-special-thanks" class="header-link-alt">A Special Thanks</a></h2>
<p>I feel a special “Thank You” should first go to a photographer and fantastic supporter of the community, <a href="https://plus.google.com/+DimitriosPsychogios">Dimitrios Psychogios</a>.  Last year, for our trip to <a href="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/">Libre Graphics Meeting, London</a>, he stepped up with an awesome donation to help us bring some fun folks together.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/11/giving-thanks/LGM2016-Crew.jpg" alt='LGM2016 Dinner'>
<figcaption>
Fun folks together.<br>
Mairi, the darktable nerds, a RawTherapee nerd, and a PhotoFlow nerd.<br>
(and the nerd taking the photo, patdavid)
</figcaption>
</figure>

<p>This year he was incredibly kind by offering a donation to the community (completely unsolicited) that covers our hosting and infrastructure costs for an entire year!  So on behalf of the community, <strong>Thank You for your support, Dimitrios</strong>!</p>
<p>I’ll be creating a page soon that will list our supporters as a means of showing our gratitude. Speaking of supporters and a new page on the site…</p>
<h2 id="a-support-page"><a href="#a-support-page" class="header-link-alt">A Support Page</a></h2>
<p>In a post, someone had asked about the possibility of donating to the community.  We were <a href="https://discuss.pixls.us/t/midi-controller-for-darktable/2582">talking about providing support</a> in <a href="http://www.darktable.org">darktable</a> for using a midi controller deck, and the costs for some of the options weren’t too extravagant.  This got us thinking that enough small donations could probably cover something like this pretty easily, and if it was community hardware we could make sure it got passed around to each of the projects that would be interested in creating support for it.</p>
<figure>
<img src="https://pixls.us/blog/2016/11/giving-thanks/nanokontrol2.jpg" alt='KORG NanoControl2'>
<figcaption>
An example midi-controller that we might get support<br>for in darktable and other projects.
</figcaption>
</figure>

<p>That conversation had me thinking about ways to allow folks to support the community.  In particular, ways to make it easy to provide support on an on-going basis if possible (in addition to simple, single donations).  There are goal-oriented options out there that folks are probably already familiar with (Kickstarter, Indiegogo and others) but the model for us is less goal-oriented and more about continuous support. </p>
<p>Patreon was an option as well (and I already had a skeleton Patreon account set up), but the fees were just too much in the end.  They wanted a flat 5% along with the regular PayPal fees.  The general consensus among the staff was that we wanted to maximize the funds getting to the community.</p>
<p>The best option in the end was to create a merchant account on PayPal and manually set up the various payment options.  I’ve set them up similar to how a service like Patreon might run, with four different <em>recurring</em> funding levels and an option for a single one-time payment of whatever a user would like.  Recurring levels are nice because they make planning easier.</p>
<h3 id="we-re-not-asking"><a href="#we-re-not-asking" class="header-link-alt">We’re Not Asking</a></h3>
<p>Our requirements for the infrastructure of the site are modest and we haven’t actively pursued support or donations for the site before.  <em>That hasn’t changed.</em></p>
<p>We’re not asking for support now.  The <em>best</em> way that someone can help the community is by <em>being an active part of it.</em></p>
<blockquote>
<p>Engaging others, sharing what you’ve done or learned, and helping other users out wherever you can. This is the best way to support the community.</p>
</blockquote>
<p>I purposely didn’t talk about funding before because I don’t want folks to have to worry or think about it.  And before you ask: no, we are not and will not run any advertising on the site. I’d honestly rather just keep paying for things out of my pocket instead.</p>
<p>We’re not asking for support, <em>but we’ll accept it</em>.</p>
<p>With that being said, I understand that there are still some folks who would like to contribute to the infrastructure, or help us to get hardware to add support for it in projects, and more.  So if you do want to contribute, the page for doing so can be found here:</p>
<p><a href="https://pixls.us/support">https://pixls.us/support</a></p>
<p>There are four recurring funding levels of $1, $3, $5, and $10 per month.
There is also a one-time contribution option as well.</p>
<p>We also have an <a href="https://www.amazon.com//ref=as_li_ss_tl?ref_=nav_custrec_signin&amp;&amp;linkCode=ll2&amp;tag=pixls.us-20&amp;linkId=418b8960b708accf468db7964fc2d4b5" title="Go to Amazon.com using our affiliate link">Amazon Affiliate</a> link option.  If you’re not familiar with it, you simply click the link to go to Amazon.com.  Then anything you buy for the next 24 hours will give us some small percentage of your purchase price.  It doesn’t affect the price of what you’re buying at all. So if you were going to purchase something from Amazon anyway, and don’t mind - then by all means use our link first to help out!</p>
<hr>
<h2 id="1000-users"><a href="#1000-users" class="header-link-alt">1000 Users</a></h2>
<p>This week we also finally hit 1,000 registered users on <a href="https://discuss.pixls.us">discuss</a>, which is just bananas to me.  I am super thankful for each and every member of the community who has taken the time to participate and share; catching up on what’s been going on there is generally one of the better parts of my day.  You all rock!</p>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube.com/embed/StTqXEQ2l-Y" frameborder="0" allowfullscreen></iframe>
</div>

<p>While we’re talking about the number “1” with a bunch of zeros after it, we recently made some neat improvements to the forums…</p>
<h2 id="100-megabytes"><a href="#100-megabytes" class="header-link-alt">100 Megabytes</a></h2>
<p>We are a photography community, and it seemed stupid to have to restrict users from uploading full quality images or raw files.  Previously this was a concern because the server the forums are hosted on has limited disk space (40GB).  Luckily, <a href="http://www.discourse.org/">Discourse</a> has an option for storing all forum uploads in <a href="https://aws.amazon.com/s3/">Amazon S3</a> buckets.</p>
<p>I went ahead and created some S3 buckets so that any uploads to the forums will now be hosted on Amazon instead of taking up precious space on the server. The costs are quite reasonable (around $0.30/GB right now), and it also means that I’ve been able to bump the upload size to 100MB for forum posts! You can now just drag and drop full resolution raw files directly into the post editor to include the file!</p>
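<p>For anyone running their own Discourse instance and curious what this looks like: in a standard Docker-based install, S3 uploads are enabled with a handful of environment variables in the container’s <code>app.yml</code>. This is only a sketch — the bucket name, region, and key values below are made up, and setting names can change between Discourse versions, so check the official Discourse S3 howto before copying anything:</p>

```yaml
# app.yml env section -- illustrative values only
env:
  DISCOURSE_USE_S3: true                 # route new uploads to S3 instead of local disk
  DISCOURSE_S3_REGION: us-east-1         # region of your (pre-existing) bucket
  DISCOURSE_S3_BUCKET: my-forum-uploads  # hypothetical bucket name
  DISCOURSE_S3_ACCESS_KEY_ID: "..."      # ideally an IAM key scoped to this bucket
  DISCOURSE_S3_SECRET_ACCESS_KEY: "..."
```

<p>After rebuilding the container, new uploads land in the bucket rather than on the server’s disk.</p>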
<figure>
<img src="https://pixls.us/blog/2016/11/giving-thanks/drag-drop-320.gif" alt='Drag and Drop files in discuss'>
<figcaption>
70MB GIMP .xcf file?  Just drag-and-drop to upload, no problem! :)
</figcaption>
</figure>


<h2 id="travis-ci-automation"><a href="#travis-ci-automation" class="header-link-alt">Travis CI Automation</a></h2>
<p>On a slightly geekier note, did you know that the code for the entire website is available on <a href="https://github.com/pixlsus/website">Github</a>?  It’s also licensed liberally (<a href="https://github.com/pixlsus/website/blob/master/LICENSE">CC-BY-SA</a>), so no reason not to come and fiddle with things with us!  One of the features of using Github is integration with <a href="https://travis-ci.org">Travis CI</a> (Continuous Integration).</p>
<p>What this basically means is that every commit to the Github repo for the website gets picked up by Travis and built to test that everything is working ok.  You can actually see the <a href="https://travis-ci.org/pixlsus/website/builds">history of the website builds</a> there.</p>
<p>I’ve now got it set up so that when a build succeeds on Travis, it will automatically publish the results to the main webserver and make them live. Our build system, <a href="http://www.metalsmith.io/">Metalsmith</a>, is a static site generator: we build the entire website on our local computers when we make changes, and then publish all of those changes to the webserver.  This change automates that process by handling both the build and the publish whenever everything checks out.</p>
<p>In fact, if everything is working the way I <em>think</em> it should, this very blog post will be the first one published using the new automated system!  Hooray!</p>
<p>You can poke me or @paperdigits on discuss if you want more details or feel like playing with the website.</p>
<h2 id="mica"><a href="#mica" class="header-link-alt">Mica</a></h2>
<p>Speaking of @paperdigits, I want to close this blog post with a great big “<strong>Thank You!</strong>” to him as well. He’s the only other person insane enough to try and make sense of all the stuff I’ve done building the site so far, and he’s been extremely helpful hacking at the website code, writing articles, making good infrastructure suggestions, taking the initiative on things (t-shirts and github repos), and generally being awesome all around.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ João Almeida's darktable Presets ]]></title>
            <link>https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/</guid>
            <pubDate>Mon, 14 Nov 2016 18:19:19 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/portra400_after.jpg" /><br/>
                 <h1>João Almeida's darktable Presets</h1>  
                 <h2>A gorgeous set of film emulation for darktable</h2>   
<p>I realize that I’m a little late to this, but photographer <a href="http://www.joaoalmeidaphotography.com/">João Almeida</a> has created a wonderful set of film emulation presets for <a href="http://www.darktable.org/">darktable</a> that he uses in his own workflow for personal and commissioned work. Even more wonderful is that he has graciously <a href="http://www.joaoalmeidaphotography.com/en/t3mujinpack-film-darktable/">released them for everyone to use</a>.</p>
<!-- more -->
<p>These film emulations started as a personal side project for João, and he adds a disclaimer to them that he did not optimize them all for each brand or model of his cameras.  His end goal was for these to be as simple as possible by using a few <a href="http://www.darktable.org/">darktable</a> modules. He describes it best on <a href="http://www.joaoalmeidaphotography.com/en/t3mujinpack-film-darktable/">his blog post about them</a>:</p>
<blockquote>
<p>The end goal of these presets is to be as simple as possible by using few Darktable modules, it works solely by manipulating Lab Tone Curves for color manipulation, black &amp; white films rely heavily on Channel Mixer. Since I what I was aiming for was the color profiles of each film, other traits related with processing, lenses and others are unlikely to be implemented, this includes: grain, vignetting, light leaks, cross-processing, etc.</p>
</blockquote>
<p>Some before/after samples from his blog post:</p>
<figure>
<img src="https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/portra400_after.jpg" data-swap-src='portra400_before-1.jpg' alt='João Almeida Portra 400 sample'>
<figcaption>
João Portra 400<br>
(Click to compare to original)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/kodachrome64_after.jpg" data-swap-src='kodachrome64_before-1.jpg' alt='João Almeida Kodachrome 64 sample'>
<figcaption>
João Kodachrome 64<br>
(Click to compare to original)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/11/jo-o-almeida-s-darktable-presets/velvia50__after.jpg" data-swap-src='velvia50_before.jpg' alt='João Almeida Velvia 50 sample'>
<figcaption>
João Velvia 50<br>
(Click to compare to original)
</figcaption>
</figure>

<p>You can read more on <a href="http://www.joaoalmeidaphotography.com/en/t3mujinpack-film-darktable/">João’s website</a> and you can see many more <a href="https://www.flickr.com/photos/tags/t3mujinpack">images on Flickr with the #t3mujinpack tag</a>. The full list of film emulations included with his pack:</p>
<ul>
<li>AGFA APX 25, 100</li>
<li>Fuji Astia 100F</li>
<li>Fuji Neopan 1600, Acros 100</li>
<li>Fuji Pro 160C, 400H, 800Z</li>
<li>Fuji Provia 100F, 400F, 400X</li>
<li>Fuji Sensia 100</li>
<li>Fuji Superia 100, 200, 400, 800, 1600, HG 1600</li>
<li>Fuji Velvia 50, 100</li>
<li>Ilford Delta 100, 400, 3200</li>
<li>Ilford FP4 125</li>
<li>Ilford HP5 Plus 400</li>
<li>Ilford XP2</li>
<li>Kodak Ektachrome 100 GX, VS</li>
<li>Kodak Ektar 100</li>
<li>Kodak Elite Chrome 400</li>
<li>Kodak Kodachrome 25, 64, 200</li>
<li>Kodak Portra 160 NC, VC</li>
<li>Kodak Portra 400 NC, UC, VC</li>
<li>Kodak Portra 800</li>
<li>Kodak T-Max 3200</li>
<li>Kodak Tri-X 400</li>
</ul>
<p>If you see João around the forums stop and say hi (and maybe a thank you). Even better, if you find these useful, consider buying him a beer (donation link is on his blog post)!</p>
<h3 id="related-reading"><a href="#related-reading" class="header-link-alt">Related Reading</a></h3>
<ul>
<li><a href="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/">color manipulation with the colour checker lut module (darktable)</a></li>
<li><a href="http://gmic.eu/film_emulation/">Pat David’s film emulation LUTs (G’MIC)</a></li>
<li><a href="https://discuss.pixls.us/t/common-color-curves-portra-provia-velvia/2154">Common Color Curves (Portra, Provia, Velvia) (RawTherapee)</a></li>
<li><a href="https://github.com/pmjdebruijn/colormatch">Pascal’s colormatch</a></li>
</ul>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Aligning Images with Hugin ]]></title>
            <link>https://pixls.us/articles/aligning-images-with-hugin/</link>
            <guid isPermaLink="true">https://pixls.us/articles/aligning-images-with-hugin/</guid>
            <pubDate>Fri, 04 Nov 2016 19:12:04 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/aligning-images-with-hugin/hugin_lede.jpg" /><br/>
                 <h1>Aligning Images with Hugin</h1>  
                 <h2>Easily process your bracketed exposures</h2>   
<p><a href="http://hugin.sourceforge.net/">Hugin</a> is an excellent tool for aligning and stitching images. In this article, we’ll focus on aligning a stack of images, which can be useful for achieving several results, such as:</p>
<ul>
<li>bracketed exposures to make an HDR or fused exposure (using enfuse/enblend), or manually blending the images together in an image editor</li>
<li>photographs taken at different focal distances to extend the depth of field, which can be very useful when taking macros</li>
<li>photographs taken over a period of time to make a time-lapse movie</li>
</ul>
<p>For the example images included with this tutorial, the <em>focal length</em> is <strong>12mm</strong> and the <em>focal length multiplier</em> is <strong>1</strong>. A big thank you to <a href="https://discuss.pixls.us/users/isaac/activity">@isaac</a> for providing these images.</p>
<p>You can download a zip file of all of the sample <em>Beach Umbrellas</em> images here:</p>
<p><a href="https://pixls.us/articles/aligning-images-with-hugin/Outdoor_Beach_Umbrella.zip">Download <strong>Outdoor_Beach_Umbrella.zip</strong></a> (62MB)</p>
<p>Other sample images to try with this tutorial can be <a href="#image-files">found at the end of the post</a>.</p>
<p>These instructions were adapted from the <a href="https://discuss.pixls.us/t/only-a-small-testimony/2130/5">original forum post</a> by <a href="https://discuss.pixls.us/users/Carmelo_DrRaw/activity">@Carmelo_DrRaw</a>; many thanks to him as well.</p>
<p>We’re going to align these bracketed exposures so we can blend them:</p>
<figure class="big-vid">
    <a href="side-by-side-example.jpg">
      <img src="https://pixls.us/articles/aligning-images-with-hugin/side-by-side-example.jpg" alt='Blend Examples' width='907' height='230'>
    </a>
</figure>



<ol>
<li><p>Select <strong>Interface</strong> → <strong>Expert</strong> to set the interface to <strong>Expert</strong> mode. This will expose all of the options offered by Hugin.</p>
</li>
<li><p>Select the <strong>Add images…</strong> button to load your bracketed images. Select your images from the file chooser dialog and click <strong>Open</strong>.</p>
</li>
<li><p>Set the optimal settings for aligning images:</p>
<ul>
<li>Feature Matching Settings: Align image stack</li>
<li>Optimize Geometric: Custom parameters</li>
<li>Optimize Photometric: Low dynamic range</li>
</ul>
</li>
<li><p>Select the <strong>Optimizer</strong> tab.</p>
</li>
<li><p>In the <strong>Image Orientation</strong> section, select the following variables for each image:</p>
<ul>
<li>Roll</li>
<li>X (TrX) [horizontal translation]</li>
<li>Y (TrY) [vertical translation]</li>
</ul>
<p>You can <code>Ctrl</code> + left mouse click to enable or disable the variables.</p>
<figure class="big-vid">
 <a href="roll_x_y_hugin.png">
   <img src="https://pixls.us/articles/aligning-images-with-hugin/roll_x_y_hugin.png" alt='roll x y Hugin' width='878' height='714'>
 </a>
</figure>

<p>Note that you do not need to select the parameters for the anchor image:</p>
<figure class="big-vid">
 <a href="anchor_image_hugin.png">
   <img src="https://pixls.us/articles/aligning-images-with-hugin/anchor_image_hugin.png" alt='Hugin anchor image' width='882' height='742'>
 </a>
</figure>
</li>
<li><p>Select <strong>Optimize now!</strong> and wait for the software to finish the calculations. Select <strong>Yes</strong> to apply the changes.</p>
</li>
<li><p>Select the <strong>Stitcher</strong> tab.</p>
</li>
<li><p>Select the <strong>Calculate Field of View</strong> button.</p>
</li>
<li><p>Select the <strong>Calculate Optimal Size</strong> button.</p>
</li>
<li><p>Select the <strong>Fit Crop to Images</strong> button.</p>
</li>
<li><p>To have the maximum number of post-processing options, select the following image outputs:</p>
<ul>
<li>Panorama Outputs: Exposure fused from any arrangement<ul>
<li>Format: TIFF</li>
<li>Compression: LZW</li>
</ul>
</li>
<li>Panorama Outputs: High dynamic range<ul>
<li>Format: EXR</li>
</ul>
</li>
<li><p>Remapped Images: No exposure correction, low dynamic range</p>
<figure class="big-vid">
 <a href="image_export_hugin.png">
   <img src="https://pixls.us/articles/aligning-images-with-hugin/image_export_hugin.png" alt='Hugin Image Export' width='840' height='928'>
 </a>
</figure>
</li>
</ul>
</li>
<li><p>Select the <strong>Stitch!</strong> button and choose a place to save the files. Since Hugin generates quite a few temporary images, save the PTO file in its own folder.</p>
</li>
</ol>
<p>Hugin will output the following images:</p>
<ul>
<li>a TIFF file blended by enfuse/enblend</li>
<li>an HDR image in the EXR format</li>
<li>the individual remapped images, without any exposure correction, that you can import into the GIMP as layers and blend manually</li>
</ul>
<p>You can see the result of the image blended with enblend/enfuse:</p>
  <figure class="big-vid">
    <a href="beach_umbrella_fused.jpg">
      <img src="https://pixls.us/articles/aligning-images-with-hugin/beach_umbrella_fused.jpg" alt='Beach Umbrella Fused' width='960' height='718'>
    </a>
  </figure>

<p>With the output images, you can:</p>
<ul>
<li>edit the enfuse/enblend TIFF file further in the GIMP or RawTherapee</li>
<li>tone map the EXR file in LuminanceHDR</li>
<li>manually blend the remapped TIFF files in the GIMP or PhotoFlow</li>
</ul>
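<p>If you prefer the terminal, the align-and-fuse workflow above can also be approximated with the command-line tools that ship with Hugin. This is a rough sketch — the filenames are examples, and it’s worth double-checking the flags against the man pages for your Hugin version:</p>

```shell
# Align the bracketed frames; writes aligned_0000.tif, aligned_0001.tif, ...
# -a sets the output prefix, -C auto-crops to the area covered by all frames
align_image_stack -a aligned_ -C IMG_*.jpg

# Exposure-fuse the aligned frames into a single image with enfuse
enfuse -o fused.tif aligned_*.tif
```

<p>This skips the per-image optimizer choices the GUI gives you, but for handheld brackets it often gets you to a fused result in two commands.</p>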
<hr>
<h2 id="image-files">Image files<a href="#image-files" class="header-link"><i class="fa fa-link"></i></a></h2>
<ul>
<li>Camera: Olympus E-M10 Mark II</li>
<li>Lens: Samyang 12mm F2.0</li>
</ul>
<h3 id="indoor_guitars">Indoor_Guitars<a href="#indoor_guitars" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><a href="https://s3.amazonaws.com/pixls-files/Indoor_Guitars.zip"><strong>Download Indoor_Guitars.zip</strong></a> (75MB)</p>
<ul>
<li>5 brackets</li>
<li>&plusmn;0.3 EV increments</li>
<li>f5.6</li>
<li>focus at about 1m</li>
<li>center priority metering</li>
<li>exposed for guitars, bracketed for the sky, outdoor area, and indoor area</li>
<li>manual mode (shutter speed recorded in EXIF)</li>
<li>shot in burst mode, handheld</li>
</ul>
<h3 id="outdoor_beach_umbrella">Outdoor_Beach_Umbrella<a href="#outdoor_beach_umbrella" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><a href="https://s3.amazonaws.com/pixls-files/Outdoor_Beach_Umbrella.zip"><strong>Download Outdoor_Beach_Umbrella.zip</strong></a> (62MB)</p>
<ul>
<li>3 brackets</li>
<li>&plusmn;1 EV increments</li>
<li>f11</li>
<li>focus at infinity</li>
<li>center priority metering</li>
<li>exposed for the water, bracketed for umbrella and sky</li>
<li>manual mode (shutter speed recorded in EXIF)</li>
<li>shot in burst mode, handheld</li>
</ul>
<h3 id="outdoor_sunset_over_ocean">Outdoor_Sunset_Over_Ocean<a href="#outdoor_sunset_over_ocean" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><a href="https://s3.amazonaws.com/pixls-files/Outdoor_Sunset_Over_Ocean.zip"><strong>Download Outdoor_Sunset_Over_Ocean.zip</strong></a> (60MB)</p>
<ul>
<li>3 brackets</li>
<li>&plusmn;1 EV increments</li>
<li>f11</li>
<li>focus at infinity</li>
<li>center priority metering</li>
<li>exposed for the darker clouds, bracketed for darker water and lighter sky areas and sun</li>
<li>manual mode (shutter speed recorded in EXIF)</li>
<li>shot in burst mode, handheld</li>
</ul>
<h4 id="licencing-information">Licensing Information<a href="#licencing-information" class="header-link"><i class="fa fa-link"></i></a></h4>
<ul>
<li>Images created by <a href="https://discuss.pixls.us/users/isaac/activity">Isaac I. Ullah</a>, 2016, and released under the <a href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0</a> licence (<a class='cc' href='http://creativecommons.org/licenses/by-sa/4.0/'>cba</a>).</li>
</ul>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ The Royal Photographic Society Journal ]]></title>
            <link>https://pixls.us/blog/2016/11/the-royal-photographic-society-journal/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/11/the-royal-photographic-society-journal/</guid>
            <pubDate>Wed, 02 Nov 2016 14:36:20 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/11/the-royal-photographic-society-journal/RPS_Logo_WithCrest_RGB.png" /><br/>
                 <h1>The Royal Photographic Society Journal</h1>  
                 <h2>Who let us in here?</h2>   
<p>The <a href="http://www.rps.org/rps-journals/about"><em>Journal of the Photographic Society</em></a> is the journal for one of the oldest photographic societies in the world: the <a href="http://www.rps.org/">Royal Photographic Society</a>. First published in 1853, the <a href="http://www.rps.org/rps-journals/about"><em>RPS Journal</em></a> is the oldest photographic periodical in the world (just edging out the <a href="http://www.bjp-online.com/about-british-journal-of-photography/"><em>British Journal of Photography</em></a> by about a year).</p>
<p>So you can imagine my doubt when confronted with an email about using some material from <a href="https://pixls.us">pixls.us</a> for their latest issue…</p>
<!-- more -->
<hr>
<p>If the name sounds familiar to anyone it may be from a recent post by <a href="http://blog.joemcnally.com/">Joe McNally</a> who is featured prominently in the September 2016 issue.  He <a href="http://blog.joemcnally.com/2016/10/13/royal-photographic-society/">was also just inducted</a> as a fellow into the society!</p>
<figure>
<img src="https://pixls.us/blog/2016/11/the-royal-photographic-society-journal/RPS_Journal_09_2016_COVER.jpg" alt='RPS Journal 2016-09 Cover' width='640' height='886'>
</figure>

<hr>
<p>It turns out my initial doubts were completely unfounded, and they really wanted to run a page based off one of our tutorials.
The editors liked the <a href="https://pixls.us/articles/an-open-source-portrait-mairi/">Open Source Portrait</a> tutorial.  In particular, the section on using <a href="https://pixls.us/articles/an-open-source-portrait-mairi/#skin-retouching-with-wavelet-decompose"><em>Wavelet Decompose</em></a> to touch up the skin tones:</p>
<figure>
<img src="https://pixls.us/blog/2016/11/the-royal-photographic-society-journal/INDEPTH_RPS_NOV16.jpg" alt='RPS Journal 2016-11 PD'>
<figcaption>
Yay Mairi!
</figcaption>
</figure>


<p>How cool is that?  I actually searched the archive, and the only other mention I can find of <a href="https://www.gimp.org">GIMP</a> (or any other F/OSS) is from a <a href="http://archive.rps.org/archive/volume-149/755209?q=GIMP#page/125">“Step By Step” article written by Peter Gawthrop</a> (Vol. 149, February 2009).  I think it’s pretty awesome that we can help Free Software alternatives get a little more exposure, especially in more mainstream publications and to a broader audience!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Arnold Newman Portraits ]]></title>
            <link>https://pixls.us/blog/2016/10/arnold-newman-portraits/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/10/arnold-newman-portraits/</guid>
            <pubDate>Fri, 28 Oct 2016 17:39:58 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/newman-stravinsky.jpg" /><br/>
                 <h1>Arnold Newman Portraits</h1>  
                 <h2>The beginnings of "Environmental Portraits"</h2>   
                <p>Anyone that has spent any time around me would realize that I’m particularly fond of portraits. From the wonderful works of <a href="https://www.google.com/search?q=martin+schoeller&amp;tbm=isch">Martin Schoeller</a> to the sublime <a href="https://www.google.com/search?q=dan+winters&amp;tbm=isch">Dan Winters</a>, I am simply fascinated by a well executed portrait. So I thought it would be fun to take a look at some selections from the “father” of environmental portraits - <a href="http://arnoldnewman.com/">Arnold Newman</a>.</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Newman Self Portrait.jpg" alt='Arnold Newman, Self Portrait, Baltimore MD, 1939' width='640' height='658'>
<figcaption>
<a href="http://arnoldnewman.com/">Arnold Newman</a>, Self Portrait, Baltimore MD, 1939
</figcaption>
</figure>

<p>Newman wanted to become a painter, but after only two years of college he had to drop out to take a job shooting portraits in a photo studio in Philadelphia. That experience apparently taught him what he did <em>not</em> want to do with photography…</p>
<p>Luckily, it may have started defining what he <em>did</em> want to do with his photography: namely, his approach to capturing his subjects alongside (or within) the context of the things that made them notable in some way.  This would become known as “Environmental Portraiture”. He described it best in an interview for <a href="https://books.google.com/books?id=qWOpWDKpUjgC&amp;pg=PA36#v=onepage&amp;q&amp;f=true">American Photo</a> in 2000:</p>
<blockquote>
<p>I didn’t just want to make a photograph with some things in the background.  The surroundings had to add to the composition and the understanding of the person.  No matter who the subject was, it had to be an interesting photograph.  Just to simply do a portrait of a famous person doesn’t mean a thing. <sup><a href="https://books.google.com/books?id=qWOpWDKpUjgC&amp;pg=PA36#v=onepage&amp;q&amp;f=true">1</a></sup></p>
</blockquote>
<p>Though he felt that the term might be unnecessarily restrictive (and possibly overshadow his other pursuits, including abstractions and photojournalism), there’s no denying the impact of the results. Possibly his most famous portrait, of composer Igor Stravinsky, illustrates this wonderfully.  The overall tones are almost monotone (flat, pun intended, and likely intentional on Newman’s behalf) and are dominated by the stark duality of the white wall with the black piano.</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Igor Stravinsky, New York, NY, 1946.jpg" alt='Igor Stravinsky by Arnold Newman' width='640' height='332'>
<figcaption>
<em>Igor Stravinsky, New York, NY, 1946</em> by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<p>Newman realized that the open lid of the piano <em>“…is like the shape of a musical flat symbol&mdash;strong, linear, and beautiful, just like Stravinsky’s work.”</em> <sup><a href="https://books.google.com/books?id=qWOpWDKpUjgC&amp;pg=PA36#v=onepage&amp;q&amp;f=true">1</a></sup> The geometric construction of the image instantly captures the eye and the aggressive crop makes the final composition even more interesting. In this case the crop was a fundamental part of the original composition as shot, but it was not uncommon for him to find new life in images with different crops.</p>
<p>In a similar theme, his portraits of both <a href="https://en.wikipedia.org/wiki/Salvador_Dal%C3%AD">Salvador Dalí</a> and <a href="https://en.wikipedia.org/wiki/John_F._Kennedy">John F. Kennedy</a> show a willingness to allow the crop to bring in different defining characteristics of his subjects. In the case of Dalí it allows an abstraction to hang there, mimicking the pose of the artist himself. Kennedy is almost the only organic form, striking a relaxed pose while dwarfed by the imposing architecture and hard lines surrounding him.</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Salvador Dali, New York, NY, 1951.jpg" alt='Salvador Dali, New York, NY, 1951' width='572' height='780'>
<figcaption>
Salvador Dali, New York, NY, 1951 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/John F. Kennedy, Washington D.C., 1953.jpg" alt='John F. Kennedy, Washington D.C., 1953' width='629' height='780'>
<figcaption>
John F. Kennedy, Washington D.C., 1953 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<p>He brings the same deft handling of placing subjects in the context of their work to his portraits of other photographers as well.  His portrait of <a href="http://anseladams.com/">Ansel Adams</a> shows the photographer just outside his studio, with the surrounding wilderness not only visible around the frame but reflected in the glass of the doors behind him (and in the photographer’s glasses). Perhaps an indication that the nature of Adams’ work was to capture natural scenes through glass?</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Ansel Adams, 1975.jpg" alt='Ansel Adams, 1975 by Arnold Newman' width='599' height='780'>
<figcaption>
Ansel Adams, 1975 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<p>For anyone familiar with the pioneer of another form of photography, Newman’s portrait of (the usually camera shy) <a href="https://en.wikipedia.org/wiki/Henri_Cartier-Bresson">Henri Cartier-Bresson</a> will instantly evoke a sense of the artist’s candid street images.  In it, Bresson appears to take the place of one of his subjects, caught briefly on the street in a fleeting moment. The portrait has an almost spontaneous feeling to it, (again) mirroring the style of the work of its subject.</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Henri Cartier-Bresson, New York, NY, 1947.jpg" alt='Henri Cartier-Bresson, New York, NY, 1947' width='640' height='454'>
<figcaption>
Henri Cartier-Bresson, New York, NY, 1947 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<p>Eight years after his portrait of surrealist painter Dalí, Newman shot another famous (abstraction) artist, <a href="https://en.wikipedia.org/wiki/Pablo_Picasso">Pablo Picasso</a>. This particular portrait is much more intimate and more classically composed, framing the subject as a headshot with little of the surrounding environment as before. I can’t help but think that the similar placement of the hands in both images is intentional; a nod to the unconventional views both artists brought to the world.</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Pablo Picasso,Vallauris, France, 1954.jpg" alt='Pablo Picasso,Vallauris, France, 1954' width='609' height='780'>
<figcaption>
Pablo Picasso,Vallauris, France, 1954 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<hr>
<p>The eloquent <a href="https://en.wikipedia.org/wiki/Gregory_Heisler">Gregory Heisler</a> had a wonderful discussion about Newman for <a href="http://www.acpinfo.org/blog/2008/09/29/gregory-heisler-on-arnold-newman-the-man-and-his-impact-wednesday-oct-1st-7pm-the-high-museum/"><em>Atlanta Celebrates Photography</em></a> at the High Museum in 2008:</p>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/IjY8XbGXmXw" frameborder="0" allowfullscreen></iframe>
</div>

<p>Arnold Newman produced an amazing body of work that warrants some time and consideration for anyone interested in portraiture. These few examples simply do not do his <a href="http://arnoldnewman.com/content/portraits-0">collection of portraits</a> justice.  If you have a few moments to peruse some amazing images, head over to his website and have a look (I’m particularly fond of his extremely design-oriented portrait of Chinese-American architect <a href="http://arnoldnewman.com/media-gallery/detail/58/315">I.M. Pei</a>):</p>
<figure>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/I.M. Pei, New York, NY, 1967.jpg" alt='I.M. Pei, New York, NY, 1967' width='640' height='773'>
<figcaption>
I.M. Pei, New York, NY, 1967 by <a href="http://arnoldnewman.com/">Arnold Newman</a>
</figcaption>
</figure>

<p>Of historical interest is a look at Newman’s contact sheet for the Stravinsky image showing various compositions and approaches to his subject with the piano. (I would have easily chosen the last image in the first row as my pick.) I have seen the second image in the second row cropped as indicated, which was also a very strong choice. I adore being able to investigate contact sheets from shoots like this - it helps me to humanize these amazing photographers while simultaneously allowing me an opportunity to learn a little about their thought process and how I might incorporate it into my own photography.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/10/arnold-newman-portraits/Igor Stravinsky contact.jpg" alt='Igor Stravinsky contact sheet' width='960' height='694'>
</figure>

<p>To close, a quote from his interview with <em>American Photo</em> magazine back in 2000 that will likely remain relevant to photographers for a long time:</p>
<blockquote>
<p>But a lot of photographers think that if they buy a better camera they’ll be able to take better photographs.  A better camera won’t do a thing for you if you don’t have anything in your head or in your heart. <sup><a href="https://books.google.com/books?id=qWOpWDKpUjgC&amp;pg=PA36#v=onepage&amp;q&amp;f=true">1</a></sup></p>
</blockquote>
<p><small>
<sup>1</sup> Harris, Mark. <a href="https://books.google.com/books?id=qWOpWDKpUjgC&amp;pg=PA36#v=onepage&amp;q&amp;f=true">“Arnold Newman: The Stories Behind Some of the Most Famous Portraits of the 20th Century.”</a> <em>American Photo</em>, March/April 2000, pp. 36-38
</small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Highlight Bloom and Photoillustration Look ]]></title>
            <link>https://pixls.us/articles/highlight-bloom-and-photoillustration-look/</link>
            <guid isPermaLink="true">https://pixls.us/articles/highlight-bloom-and-photoillustration-look/</guid>
            <pubDate>Wed, 12 Oct 2016 18:47:35 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/lede-woman.jpg" /><br/>
                 <h1>Highlight Bloom and Photoillustration Look</h1>  
                 <h2>Replicating a 'Lucisart'/Dave Hill type illustrative look</h2>   
                <p>Over in <a href="https://discuss.pixls.us/t/heres-some-kind-lucisart-processing-using-gmic-filters/2394" title="Topic on Discuss">the forums</a> community member <a href="https://discuss.pixls.us/users/sguyader/activity" title="sguyader on discuss">Sebastien Guyader</a> (@sguyader) posted a neat workflow for emulating a photo-illustrative look popularized by photographers like <a href="http://davehillphoto.com/classics-2005-2010/">Dave Hill</a> where the resulting images often seem to have a sort of hyper-real feeling to them. Some of this feeling comes from a local-contrast boost and slight ‘blooming’ of the lighter tones in the image (though arguably most of the look is due to lighting and compositing of multiple elements).</p>
<p>To illustrate, here are a few representative samples of Dave Hill’s work that reflects this feeling:</p>
<figure>
<a href='http://davehillphoto.com/classics-2005-2010/4sj9tswggio55wowsdzl7vtflvfjm4'>
    <img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/09_cliff_final.jpg" alt='Dave Hill Cliff' width='640' height='312'>
</a>
<a href='http://davehillphoto.com/classics-2005-2010/c8kqlov3w2osl8yvtqvro0ckl12q6m'>
    <img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/finishline_Lotion_Guy_Hot_Girl_092d.jpg" alt='Dave Hill Finishline Lotion' width='640' height='395'>
</a>
<a href='http://davehillphoto.com/classics-2005-2010/yg988exvuge6ek4290vge1s4rarujf'>
    <img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/track_6187a.jpg" alt='Dave Hill Track' width='640' height='427'>
</a>
<a href='http://davehillphoto.com/classics-2005-2010/4bt8vpcqi2vi1k8eve575sb861xk4m'>
    <img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/nick_saban_6443a.jpg" alt='Dave Hill Nick Saban' width='640' height='932'>
</a>
<figcaption>
A collection of example images. &copy;<a href="http://davehillphoto.com/classics-2005-2010/">Dave Hill</a>
</figcaption>
</figure>

<p>A video of Dave presenting how he brought together the idea and images for the series that the first image above comes from:</p>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube.com/embed/zSGY_N2Z_y0" frameborder="0" allowfullscreen></iframe>
</div>

<p>This effect is also popularized in Photoshop<sup><small>®</small></sup> filters such as <a href="https://www.google.com/search?q=photoshop+lucisart&amp;rlz=1C1CHBF_enUS707US707&amp;source=lnms&amp;tbm=isch&amp;sa=X&amp;ved=0ahUKEwi-no2l_NXPAhUBYT4KHbekC9QQ_AUICCgB&amp;biw=1353&amp;bih=1073#tbm=isch&amp;q=lucisart" title="Google Image search for &#39;Lucisart&#39;">LucisArt</a> in an effort to attain what some would (<em>erroneously</em>) call an “HDR” effect.  What they likely mean is a not-so-subtle tone mapping. In particular, the exaggerated local contrast is often what garners people’s attention.</p>
<p>We had <a href="https://pixls.us/articles/freaky-details-calvin-hollywood/">previously posted</a> about a method for exaggerating fine local contrasts and details using the <a href="https://pixls.us/articles/freaky-details-calvin-hollywood/">“Freaky Details”</a> method described by Calvin Hollywood. This workflow follows a similar idea but produces different results that many might find more appealing (it’s not as <em>gritty</em> as the Freaky Details approach).</p>
<p>Sebastien produced some great looking preview images to give folks a feeling for what the process would produce:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/bmw-vehicle-ride-bike-journey-1313343.jpg" alt='BMW' width='960' height='270' />
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/ifa-f9-oldtimer-pkw-ddr-1661767.jpg" alt='IFA-F9' width='960' height='310' />
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/fashion-woman-beauty-leisure-model-1636868.jpg" alt='Fashion Woman' width='960' height='320' />
<figcaption>
Images from <a href="https://pixabay.com">pixabay</a> (<a href="https://creativecommons.org/publicdomain/zero/1.0/deed.en" title="Creative Commons Zero - Public Domain">CC0, public domain</a>): <a href="https://pixabay.com/en/bmw-vehicle-ride-bike-journey-1313343/">Motorcycle</a>, <a href="https://pixabay.com/en/ifa-f9-oldtimer-pkw-ddr-1661767/">car</a>, <a href="https://pixabay.com/en/fashion-woman-beauty-leisure-model-1636868/">woman</a>.
</figcaption>
</figure>

<h2 id="replicating-a-dave-hill-lucasart-effect">Replicating a “Dave Hill”/“LucisArt” effect<a href="#replicating-a-dave-hill-lucasart-effect" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Sebastien’s approach relies only on having the always useful <a href="http://gmic.eu">G’MIC</a> plugin for <a href="https://www.gimp.org">GIMP</a>. The general workflow is to do a high-pass frequency separation, and to apply some effects like local contrast enhancement and some smoothing on the residual low-pass layer.  Then recombine the high+low pass layers to get the final result.</p>
<ol>
<li>Open the image.</li>
<li>Duplicate the base layer.<br>Rename it to <em>“Lowpass”</em>.</li>
<li>With the top layer (<em>“Lowpass”</em>) active, open G’MIC.</li>
<li>Use the <em>Photocomix smoothing</em> filter:
<p><span class="Cmd">Testing → Photocomix → Photocomix smoothing</span></p>
Set the <strong>Amplitude</strong> to <strong>10</strong>. Apply.<br>This is to taste, but a good starting place might be around 1% of the image dimensions (so for a 2000px wide image, try an Amplitude of 20).</li>
<li>Change the <em>“Lowpass”</em> layer blend mode to <em>Grain extract</em>.</li>
<li>Right-click on the layer and choose <em>New from visible</em>.<br>Rename this layer from “<em>Visible</em>” to something more memorable like <em>“Highpass”</em> and set its layer mode to <em>Grain merge</em>.<br>Turn off this layer’s visibility for now.</li>
<li>Activate the <em>“Lowpass”</em> layer and set its layer blend mode back to <em>Normal</em>.<br>The rest of the filters are applied to this <em>“Lowpass”</em> layer.</li>
<li>Open G’MIC again.<br>Apply the <em>Simple local contrast</em> filter:
<p><span class="Cmd">Details → Simple local contrast</span></p>
Using:<ul>
<li><strong>Edge Sensitivity</strong> to <strong>25</strong></li>
<li><strong>Iterations</strong> to <strong>1</strong></li>
<li><strong>Paint effect</strong> to <strong>50</strong></li>
<li><strong>Post-gamma</strong> to <strong>1.20</strong>  </li>
</ul>
</li>
<li>Open G’MIC again.<br>Now apply the <em>Graphic novel</em> filter:
<p><span class="Cmd">Artistic → Graphic novel</span></p>
Using:<ul>
<li>check the <strong>Skip this step</strong> checkbox for <strong>Apply Local Normalization</strong></li>
<li><strong>Pencil size</strong> to <strong>1</strong></li>
<li><strong>Pencil amplitude</strong> to <strong>100-200</strong></li>
<li><strong>Pencil smoother sharpness/edge protection/smoothness</strong><br>  to <strong>0</strong></li>
<li>Boost merging options <strong>Mixer</strong> to <strong>Soft light</strong></li>
<li><strong>Painter’s touch sharpness</strong> to <strong>1.26</strong></li>
<li><strong>Painter’s edge protection flow</strong> to <strong>0.37</strong></li>
<li><strong>Painter’s smoothness</strong> to <strong>1.05</strong></li>
</ul>
</li>
<li>Finally, make the <em>“Highpass”</em> layer visible again to bring back the fine details.</li>
</ol>
<h3 id="trying-it-out-">Trying It Out!<a href="#trying-it-out-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Let’s walk through the process. Sebastien got his sample images from the website <a href="https://pixabay.com">https://pixabay.com</a>, so I thought I would follow suit and find something suitable from there as well.  After some searching I found this neat image from Jerzy Gorecki, licensed <a href="https://creativecommons.org/publicdomain/zero/1.0/deed.en" title="Creative Commons Zero - Public Domain">Creative Commons 0/Public Domain</a>.</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-01-base.jpg" alt='Model' width='640' height='815'/>
<figcaption>
The base image (<a href="https://pixabay.com/en/girl-hands-the-act-of-portrait-1527959/">link</a>).<br>From <a href="https://pixabay.com">pixabay</a>, (<a href="https://creativecommons.org/publicdomain/zero/1.0/deed.en" title="Creative Commons Zero - Public Domain">CC0 - Public Domain</a>): Jerzy Gorecki.
</figcaption>
</figure>

<h4 id="frequency-separation">Frequency Separation<a href="#frequency-separation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The first steps (1&ndash;7) are to create a High/Low pass frequency separation of the image.  If you have a different method for obtaining the separation then feel free to use it.  Sebastien uses the Photocomix smoothing filter to create his low-pass layer (other options might be Gaussian blur, bilateral smoothing, or even wavelets).</p>
<p>The basic steps to do this are to duplicate the base layer, blur it, then set the layer blend mode to <strong>Grain extract</strong> and create a new layer from visible. The new layer will be the Highpass (high-frequency) details and should have its layer blend mode set to <strong>Grain merge</strong>.  The original blurred layer is the Lowpass (low-frequency) information and should have its layer blend mode set back to <strong>Normal</strong>.</p>
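The Grain extract/Grain merge round trip described above is simple arithmetic, and a minimal NumPy sketch can confirm it reconstructs the original exactly. This is an illustration, not GIMP code: a crude box blur stands in for Photocomix smoothing, and values are normalized to 0&ndash;1 so mid-grey is 0.5 (GIMP 2.8 works in 8-bit, where mid-grey is 128):

```python
import numpy as np

def box_blur(img, radius=2):
    """Crude box blur standing in for Photocomix smoothing / Gaussian blur."""
    out = np.zeros_like(img)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            n += 1
    return out / n

rng = np.random.default_rng(0)
base = rng.random((32, 32))            # stand-in image, values in [0, 1]

lowpass = box_blur(base)               # duplicate the base layer and smooth it
highpass = base - lowpass + 0.5        # Grain extract: base minus blur, around mid-grey

# ...the filters below (local contrast, Graphic novel) would modify `lowpass` here...

recombined = lowpass + highpass - 0.5  # Grain merge brings the detail back
assert np.allclose(recombined, base)   # round trip restores the original image
```

Because the separation is exact, any change made to the low-frequency layer carries through to the recombined result while the fine detail is preserved untouched.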
<p>So, following Sebastien’s steps, duplicate the base layer and rename the layer to “lowpass”.  Then open G’MIC and apply:</p>
<p><span class="Cmd">Testing → Photocomix → Photocomix smoothing</span></p>

<p>with an amplitude of around 20. Change this to suit your own taste, but about 1% of the image width is a decent starting point.  You’ll now have the base layer and the “lowpass” layer above it that has been smoothed:</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-02-photocomix-smooth.jpg" alt='Photocomix Smoothing' width='640' height='815'>
<figcaption>
“lowpass” layer after Photocomix smoothing with <strong>Amplitude</strong> set to 20.
</figcaption>
</figure>

<p>Setting the “lowpass” layer blend mode to <strong>Grain extract</strong> will reveal the high-frequency details:</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-02-photocomix-smooth-HP.png" alt='Grain Extract' width='271' height='197'>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-03-photocomix-smooth-grain-extract.jpg" alt='HP' width='640' height='815'>
<figcaption>
The high-frequency details visible after setting the blurred “lowpass” layer blend mode to <strong>Grain extract</strong>.
</figcaption>
</figure>

<p>Now create a new layer from what is currently visible.  Either right-click the “lowpass” layer and choose “New from visible” or from the menus:</p>
<p><span class="Cmd">Layer → New from Visible</span></p>

<p>Rename this new layer from “Visible” to “highpass” and set its layer blend mode to <strong>Grain merge</strong>.  Select the “lowpass” layer and set its layer blend mode back to <strong>Normal</strong>.</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-03-frequency-separation.png" alt='Layers' width='271' height='237'>
</figure>

<p>The visible result should be back to what your starting image looked like.
The rest of the steps for this tutorial will operate on the “lowpass” layer.
You can leave the “highpass” layer visible during the rest of the steps to see what your results will look like.</p>
<h4 id="modifying-the-low-frequency-layer">Modifying the Low-Frequency Layer<a href="#modifying-the-low-frequency-layer" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>These next steps will modify the underlying low-frequency image information to smooth it out and give it a bit of a contrast boost. First, the “Simple local contrast” filter separates tones and does some preliminary smoothing; then the “Graphic novel” filter provides a nice boost to the light tones along with further smoothing.</p>
<h4 id="simple-local-contrast">Simple Local Contrast<a href="#simple-local-contrast" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>On the “lowpass” layer, open <a href="http://gmic.eu">G’MIC</a> and find the “Simple local contrast” filter:</p>
<p><span class="Cmd">Details → Simple local contrast</span></p>

<p>Change the following settings:</p>
<ul>
<li><strong>Edge Sensitivity</strong> to <strong>25</strong></li>
<li><strong>Iterations</strong> to <strong>1</strong></li>
<li><strong>Paint effect</strong> to <strong>50</strong></li>
<li><strong>Post-gamma</strong> to <strong>1.20</strong>  </li>
</ul>
<p>This will smooth out overall tones while simultaneously providing a nice local contrast boost. This is the step that causes small lighting details to “pop”:</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-04-simple-local-contrast.jpg" alt='Simple Local Contrast' data-swap-src='tut-01-base.jpg' width='640' height='815' >
<figcaption>
After applying the “Simple local contrast” filter.<br>(Click to compare to the original image)
</figcaption>
</figure>

<p>The contrast increase provides a nice visual punch to the image. The addition of the “Graphic novel” filter will push the overall image much closer to a feeling of a photo-illustration.</p>
<h4 id="graphic-novel">Graphic Novel<a href="#graphic-novel" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Still on the “lowpass” layer, re-open <a href="http://gmic.eu">G’MIC</a> and open the “Graphic Novel” filter:</p>
<p><span class="Cmd">Artistic → Graphic novel</span></p>

<p>Change the following settings:</p>
<ul>
<li>check the <strong>Skip this step</strong> checkbox for <strong>Apply Local Normalization</strong></li>
<li><strong>Pencil size</strong> to <strong>1</strong></li>
<li><strong>Pencil amplitude</strong> to <strong>100-200</strong></li>
<li><strong>Pencil smoother sharpness/edge protection/smoothness</strong><br>  to <strong>0</strong></li>
<li>Boost merging options <strong>Mixer</strong> to <strong>Soft light</strong></li>
<li><strong>Painter’s touch sharpness</strong> to <strong>1.26</strong></li>
<li><strong>Painter’s edge protection flow</strong> to <strong>0.37</strong></li>
<li><strong>Painter’s smoothness</strong> to <strong>1.05</strong></li>
</ul>
<p>The intent with this filter is to further smooth the overall tones, simplify details, and to give a nice boost to the light tones of the image:</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-05-graphic-novel.jpg" alt='Graphic Novel' data-swap-src='tut-04-simple-local-contrast.jpg' width='640' height='815'>
<figcaption>
After applying the “Graphic novel” filter.<br>(Click to compare to the local contrast result)
</figcaption>
</figure>

<p>The effect at 100% opacity can be a little strong.  If so, simply adjust the opacity of the “lowpass” layer to taste. In some cases it would probably be desirable to mask areas you don’t want the effect applied to.</p>
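Dialing back the “lowpass” layer opacity is just a linear mix between the processed and original pixels. A minimal sketch of that Normal-mode blend (the function name is my own, not a GIMP API):

```python
import numpy as np

def blend_opacity(processed, original, opacity):
    """Normal-mode layer blend: opacity 1.0 = full effect, 0.0 = untouched image."""
    return opacity * processed + (1.0 - opacity) * original

original = np.array([0.2, 0.5, 0.8])
processed = np.array([0.4, 0.7, 1.0])

# At 50% opacity each pixel lands halfway between the two versions.
assert np.allclose(blend_opacity(processed, original, 0.5), [0.3, 0.6, 0.9])
```

A layer mask works the same way, except `opacity` becomes a per-pixel array, which is how you would exclude areas from the effect entirely.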
<p>I’ve included the GIMP .xcf.bz2 file I created while working on this image for this article.  You can <a href="girl-hands-the-act-of-portrait-1527959-full.xcf.bz2"><strong>download the file here</strong></a> (34.9MB). I did each step on a new layer, so if you want to see the results of each effect step-by-step, simply turn that layer on/off:</p>
<figure>
<img src="https://pixls.us/articles/highlight-bloom-and-photoillustration-look/tut-04-xcf-sample.png" alt='Sample layers' width='271' height='320'>
<figcaption>
Example XCF layers
</figcaption>
</figure>

<p>Finally, a great big <strong>Thank You!</strong> to Sebastien Guyader (@sguyader) for <a href="https://discuss.pixls.us/t/heres-some-kind-lucisart-processing-using-gmic-filters/">sharing this with everyone</a> in the community!</p>
<h4 id="a-g-mic-command">A G’MIC Command<a href="#a-g-mic-command" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Of course, this wouldn’t be complete if someone didn’t come along with the direct <a href="http://gmic.eu">G’MIC</a> commands to get a similar result!  And we can thank Iain Fergusson (@Iain) for coming up with the commands:</p>
<pre><code>--gimp_anisotropic_smoothing[0] 10,0.16,0.63,0.6,2.35,0.8,30,2,0,1,1,0,1

-sub[0] [1]

-simplelocalcontrast_p[1] 25,1,50,1,1,1.2,1,1,1,1,1,1
-gimp_graphic_novelfxl[1] 1,2,6,5,20,0,1,100,0,1,0,0.78,1.92,0,0,2,1,1,1,1.26,0.37,1.05
-add
-c 0,255
</code></pre>  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ From the Community Vol. 1 ]]></title>
            <link>https://pixls.us/blog/2016/09/from-the-community-vol-1/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/09/from-the-community-vol-1/</guid>
            <pubDate>Sun, 04 Sep 2016 00:00:00 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/09/from-the-community-vol-1/photography-tile.png" /><br/>
                 <h1>From the Community Vol. 1</h1>  
                  
                <p>Welcome to the first installment of <em>From the Community</em>, a (hopefully) quarterly blog post to highlight a few of the things our community members have been doing!</p>
<!-- more -->
<h2 id="rapid-photo-downloader-process-model"><a href="#rapid-photo-downloader-process-model" class="header-link-alt">Rapid Photo Downloader Process Model</a></h2>
<p><a href="https://discuss.pixls.us/t/the-rapid-photo-downloader-0-9-process-model/2114">@damonlynch has a great write up of Rapid Photo Download’s process model</a>. Rapid Photo Downloader is built using <a href="https://www.python.org/">Python</a>, so if you’re looking for a good way to add threads to your Python program, this write up has some good information for you, check it out!</p>
<figure class='big-vid'>
    <img src="https://pixls.us/blog/2016/09/from-the-community-vol-1/rpd-process-model.png" alt='rpd process model'>
</figure>

<h2 id="community-built-software-downloads-page"><a href="#community-built-software-downloads-page" class="header-link-alt">Community-built Software downloads page</a></h2>
<p>Free Software development tends to move at a pretty good pace, so there is always something new to try out! Not all of the new things warrant a new release, but our community steps up and builds the software so that others can use and test it! Instead of random links to dropboxes and such, we’ve created a <a href="https://discuss.pixls.us/t/community-built-software/2137">Community-built Software page</a> to centralize things and make it easy for our users to find and download the freshest builds of software from our great community members. Keep in mind that support may be limited for these builds and they’re considered testing, so quality may vary, but if you covet the newest, shiniest things, this is the place for you!</p>
<h2 id="glitch-art-filters-coming-to-g-mic"><a href="#glitch-art-filters-coming-to-g-mic" class="header-link-alt">Glitch art filters coming to G’MIC</a></h2>
<p><a href="https://discuss.pixls.us/t/on-the-road-to-1-7-6/2167">G’MIC will be getting some cool glitch art filters in 1.7.6</a>. <a href="https://discuss.pixls.us/users/thething">@thething</a> is interested in <a href="https://en.wikipedia.org/wiki/Glitch_art">glitch art</a> and <a href="https://discuss.pixls.us/t/glitch-art-filters/2159">requested some new filters in G’MIC</a>, and <a href="https://discuss.pixls.us/users/david_tschumperle">@David_Tschumperle</a> delivered very quickly!</p>
<p>You can flip blocks:</p>
<figure class='big-vid'>
    <img src="https://pixls.us/blog/2016/09/from-the-community-vol-1/gmic-block-flipping.png" alt='GMIC block flipping'>
</figure>

<p>and warp your images:</p>
<figure class='big-vid'>
    <img src="https://pixls.us/blog/2016/09/from-the-community-vol-1/gmic-warp.png" alt='GMIC image warping'>
</figure>

<h2 id="an-alternative-to-watermarking"><a href="#an-alternative-to-watermarking" class="header-link-alt">An Alternative to Watermarking</a></h2>
<p>Watermarking is ugly and takes focus away from your image. <a href="https://discuss.pixls.us/t/annotation-with-imagemagick-watermark-ish/1813">Why not try adding an attribution bar to your images?</a> In this post, <a href="https://discuss.pixls.us/users/patdavid">@patdavid</a> lays out how to add a bar underneath your image with your name, the image title, and a little logo. <a href="https://discuss.pixls.us/users/david_tschumperle">@David_Tschumperle</a> followed that effort up with an alternate implementation using G’MIC instead of ImageMagick. Lastly, <a href="https://discuss.pixls.us/users/vato">@vato</a> rolled the ImageMagick version into a <a href="https://discuss.pixls.us/t/annotation-with-imagemagick-watermark-ish/1813/6">bash script</a> with the necessary parameters exposed as variables at the beginning of the script.</p>
<p>Here is an example image by <a href="https://discuss.pixls.us/users/morgan_hardwood">@Morgan_Hardwood</a>:</p>
<figure class='big-vid'>
    <img src="https://pixls.us/blog/2016/09/from-the-community-vol-1/attrib-bar.jpg" alt='attribution bar example'>
</figure>

<h2 id="help-author-a-tutorial-for-beginners"><a href="#help-author-a-tutorial-for-beginners" class="header-link-alt">Help Author a Tutorial for Beginners</a></h2>
<p>Finally, <a href="https://discuss.pixls.us/t/article-idea-beginners-intro-to-free-software-photography/931">we’re still working on our beginner article</a> to help new users navigate the myriad of free photography software that is out there. If you have ideas, or better yet, want to author a bit of content with our community, please join and help out! The post is a community wiki and has complete revision control, so don’t be afraid to jump in and contribute!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ A Chiaroscuro Portrait ]]></title>
            <link>https://pixls.us/articles/a-chiaroscuro-portrait/</link>
            <guid isPermaLink="true">https://pixls.us/articles/a-chiaroscuro-portrait/</guid>
            <pubDate>Wed, 27 Jul 2016 18:16:07 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-lede.jpg" /><br/>
                 <h1>A Chiaroscuro Portrait</h1>  
                 <h2>Following the Old Masters</h2>   
                <h2 id="introduction-concept-theory-">Introduction (Concept/Theory)<a href="#introduction-concept-theory-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The term <a href="https://en.wikipedia.org/wiki/Chiaroscuro"><em>Chiaroscuro</em></a> is derived from the Italian <em>chiaro</em> meaning ‘clear, bright’ and <em>oscuro</em> meaning ‘dark, obscure’.  In art the term has come to refer to the use of bold contrasts between light and shadow, particularly across an entire composition, where they are a prominent feature of the work.</p>
<p>This interplay of shadow and light is particularly important in allowing the viewer to extrapolate volume from a flat image.  The use of a single light source helps to accentuate the perception of volume as well as adding drama and dynamics to the scene.</p>
<p>Historically the use of chiaroscuro can often be associated with the works of old masters such as <a href="https://en.wikipedia.org/wiki/Rembrandt">Rembrandt</a> and <a href="https://en.wikipedia.org/wiki/Caravaggio">Caravaggio</a>.  The use of such extreme lighting immediately evokes a sense of shape and volume, while focusing the attention of the viewer.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/rembrandt-self.jpg" alt='Rembrandt Self Portrait' width='391' height='480'>
<figcaption>
<a href='https://commons.wikimedia.org/wiki/File:Rembrandt_van_Rijn_184.jpg'><em>Self Portrait with Gorget</em></a> by <a href="https://en.wikipedia.org/wiki/Rembrandt">Rembrandt</a>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/pearl_earring.jpg" alt='Girl with a Pearl Earring' width='410' height='480'>
<figcaption>
<a href="https://en.wikipedia.org/wiki/Girl_with_a_Pearl_Earring"><em>Girl with a Pearl Earring</em></a> by <a href="https://en.wikipedia.org/wiki/Johannes_Vermeer">Johannes Vermeer</a>
</figcaption>
</figure>

<p>The aim of this tutorial will be to emulate the lighting characteristics of chiaroscuro in producing a portrait to evoke the feeling of an old master painting.</p>
<h3 id="equipment">Equipment<a href="#equipment" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>In examining chiaroscuro portraiture, it becomes apparent that a strong characteristic of the images is the use of a single light source on the scene.  So this tutorial will focus on using a single source to illuminate the portrait.</p>
<p>Getting the key light off the camera is essential.  The closer the key light is to the camera’s axis, the more the shadows are reduced, which is counter to the intention of this workflow.  Shadows are an essential component in producing this look, and on-camera lighting simply will not work.</p>
<p>The reason to choose a softbox versus the myriad of other light modifiers available is simple: control.  Umbrellas can soften the light, but due to their open nature have a tendency to spill light everywhere while doing so.  A softbox allows the light to be softened while also retaining a higher level of spill control.</p>
<p>Light spill can still occur with a softbox, so the best option is to bring the light in as close as possible to the subject.  Due to the inverse square nature of light attenuation, this will help to drop the background very dark (or black) when exposing properly for the subject.</p>
<figure class='big-vid'>
<a href='three-dots.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/three-dots.jpg" alt='Inverse Square Light Fall Off' width='960' height='320'>
</a>
</figure>

<p><strong>Left</strong><br>For example, in the first sample image above, a 20-inch softbox was located about 18 inches away from the subject.  The rear wall was approximately 48 inches away from the subject, or just over twice the distance of the softbox.  Thus, with a proper exposure for the subject, the background receives around 3 stops less light.  This is why the background in the first image has dropped to a dark gray.</p>
<p><strong>Middle</strong><br>When the light distance to the subject is doubled and the light distance to the rear wall stays the same, the ratio is not as extreme between them.  The light distance from the subject is now 36 inches, while the light distance to the rear wall is still 48 inches.  When properly exposing for the subject, the rear wall is now only about 1 stop lower in light.</p>
<p><strong>Right</strong><br>In the final example, the distance from the light to both the subject and the rear wall are very close.  As such, a proper exposure for the subject almost brings the wall to a middle exposure.</p>
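The stop differences quoted above follow directly from the inverse-square law: light falls off with the square of distance, so the difference in stops between subject and background is twice the log-base-2 of their distance ratio. A quick sketch to check the numbers (the function name is illustrative, not from any library):

```python
import math

def stops_below_subject(d_subject, d_background):
    """Stops of light the background loses relative to the subject,
    by the inverse-square law: 2 * log2(distance ratio)."""
    return 2 * math.log2(d_background / d_subject)

# Left: softbox 18" from the subject, wall 48" away -> wall ~3 stops darker
assert round(stops_below_subject(18, 48)) == 3

# Middle: softbox moved to 36", wall still 48" -> only ~1 stop darker
assert round(stops_below_subject(36, 48)) == 1
```

This is why pushing the light in close to the subject drops the background so dramatically: the ratio of distances, not the absolute distance, sets the exposure difference.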
<p>What this example provides is a good visual guide for how to position the subject and light relative to the surroundings to create the desired look.  To accentuate the ratio between dark and light in the image it would be best to move the light as close to the subject as possible.</p>
<p>If there is nothing to reflect light on the shadow side of the subject, then the shadows would fall to very dark or black.  Usually, there are at least walls and ceilings in a space that will reflect some light, and the amount falling on the shadow side can be attenuated by either moving the subject nearer to a wall on that side, or using a bounce/reflector as desired.</p>
<h2 id="shooting">Shooting<a href="#shooting" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="planning">Planning<a href="#planning" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The setup for the shot would be to push the key light in very close to the model, while still allowing some bounce to slightly fill the shadows.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/light-setup.png" alt='Mairi Light Setup' width='640' height='905' style='max-height:100vh;'>
</figure>

<p>As noted previously, having the key light close to the model would allow the rest of the scene to become much darker.  The softbox is arranged so that its face is almost completely vertical and its bottom edge is just above the model’s eyes.  This was to feather the lower edge of the light falloff along the front of the model.</p>
<p>There are two main adjustments that can be made to fine-tune the image result with this setup.</p>
<p>The first is the key light distance/orientation to the subject.  This will dictate the proper exposure for the subject.  For this image the intention is to push the key light in as close as possible without being in frame.  There is also the option of angling the key light relative to the subject.  In the diagram above, the softbox is actually angled away from the subject.  The intention here was to feather the edge of the light in order to control spill onto the rest of the model (putting more emphasis on her face).</p>
<p>The second adjustment, once the key light is in a good location, is the distance from the key light and subject together, to the surrounding walls (or a reflector if one is being used).  Moving both subject and keylight closer to the side wall will increase the amount of reflected light being bounced into the shadows.</p>
<h4 id="mood-board">Mood Board<a href="#mood-board" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>If possible, it can be extremely helpful to both the model and photographer to have a Mood Board available.  This is usually just a collection or collage of images that help to convey the desired feeling or desired result from the session.  For help in directing the model, the images do not necessarily need the same lighting setup.  The intention is to help the model understand what your vision is for the pose and facial expressions.</p>
<h3 id="the-shoot">The Shoot<a href="#the-shoot" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The lighting is set up and the model understands what type of look is desired, so all that’s left is to shoot the image!</p>
<figure class='big-vid'>
<a href='mairi-contact.jpg'>
    <img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-contact.jpg" alt='Mairi Contact Sheet' width='960' height='685'>
</a>
</figure>

<p>In the end, I favored the last image in the sequence for a combination of the model’s head position/body language and the slight smile she has.</p>
<h2 id="postprocessing">Postprocessing<a href="#postprocessing" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Having chosen the final image from the contact sheet, it’s now time to proceed with developing the image and retouching as needed.</p>
<p>If you’d like to follow along you can download the raw .ORF file: </p>
<p><a href="Mairi_Troisieme.ORF"><strong>Mairi_Troisieme.ORF</strong></a> (13MB)</p>
<p>This file is licensed <a href="https://creativecommons.org/licenses/by-nc-sa/3.0/" title="Creative Commons By-Attribution Non-Commercial Share-Alike"><img src="https://pixls.us/articles/a-chiaroscuro-portrait/cc-by-nc-sa.png" height='15' style='display: inline; margin: 0; width: initial;'></a>
(<a href="https://creativecommons.org/licenses/by-nc-sa/3.0/" title="Creative Commons By-Attribution Non-Commercial Share-Alike">Creative Commons, By-Attribution, Non-Commercial, Share-Alike</a>), and is the same image that I shared with everyone on the forums for a PlayRaw processing practice.  You can see how other folks approached processing this image <a href="https://discuss.pixls.us/t/playraw-mairi-troisieme/967">in the topic on discuss</a>.  If you decide to try this out for yourself, come share your results with us!</p>
<h3 id="raw-development">Raw Development<a href="#raw-development" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>There are various <a href="https://pixls.us/software">Free raw processing tools</a> available and for this tutorial I will be using the wonderful <a href="http://www.darktable.org">darktable</a>.</p>
<figure>
<a href='http://www.darktable.org' title='darktable website'>
    <img src="https://pixls.us/articles/a-chiaroscuro-portrait/dtbg_logo.png" alt='darktable logo'>
</a>
</figure>

<h4 id="base-curve">Base Curve<a href="#base-curve" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Not surprisingly, the initial image loaded without any modifications is a bit dark and rather flat looking.  By default darktable should have recognized that the file is from Olympus and attempted to apply a sane base curve to the linear raw data.  If it doesn’t, you can choose the preset “olympus like alternate”.</p>
<p>I found that the preset tended to crush the darkest tones a bit too much, and instead opted for a simple curve with a single point as seen here:</p>
<figure class='big-vid'>
<a href='darktable_0001.jpg'>
    <img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0001.jpg" alt='darktable base curve' width='960' height='526'>
</a>
</figure>

<p>Resist the temptation to adjust overall exposure and contrast with the base curve.  These parameters will be adjusted shortly in the appropriate modules.  The base curve is only intended to transform the linear raw RGB to something that looks good on your output device.  The base curve will affect how contrast, colors, and saturation all relate in the final output.  For the purposes of this tutorial, it is enough to simply choose a preset.</p>
<p>The next series of steps focuses on adjusting various exposure parameters for the image.  Conceptually, they start with the broadest adjustment (exposure), move to slightly more targeted adjustments such as contrast, brightness, and saturation, and finish with targeted tonal adjustments in the tone curve.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04.html.php#base_curve">darktable manual: base curve</a></p>
<h4 id="exposure">Exposure<a href="#exposure" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Once the base curve is set, the next module to adjust would be the overall exposure of the image (and the black point).  This is done in the “exposure” module (below the base curve).</p>
<figure class='big-vid'>
<a href='darktable_0002.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0002.jpg" alt='darktable exposure' width='960' height='526'>
</a>
</figure>

<p>The important area to watch while adjusting the exposure is the histogram.  The image was exposed a little dark, so increase the overall exposure while avoiding clipping any of the channels (pushing them outside the right edge of the histogram).  In this case, the goal is a nice mid-level brightness on the model’s face.  The exposure can be raised until the channels begin to clip on the far right of the histogram, then brought back down a bit to leave some headroom.</p>
<p>The darkest areas of the histogram on the left are clipped a bit, so raising the black level brings back detail in the darkest shadows.  When in doubt, let the histogram guide you with data from the image, particularly around the highest and lowest values (avoid clipping if possible).</p>
<p>An easy way to think of the exposure module is that it allows the entire image exposure to be shifted along with compressing/expanding the overall range by modifying the black point.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04.html.php#exposure">darktable manual: exposure</a></p>
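<p>This mental model is easy to express in a few lines of Python.  The sketch below uses assumed formulas and normalized 0.0&ndash;1.0 values purely for illustration; it is not darktable’s actual implementation:</p>

```python
def adjust_exposure(value, black=0.0, exposure_ev=0.0):
    """Toy model of an exposure adjustment on a linear pixel value
    (0.0-1.0): subtract the black point, rescale so pure white stays
    at 1.0, then shift the result by `exposure_ev` stops."""
    if black >= 1.0:
        return 0.0
    shifted = max(value - black, 0.0) / (1.0 - black)
    # Each EV of exposure doubles (positive) or halves (negative) the light.
    return shifted * 2.0 ** exposure_ev

# Raising exposure by +1 EV doubles a dark midtone: 0.25 -> 0.5
print(adjust_exposure(0.25, exposure_ev=1.0))
```

<p>Note how modifying the black point compresses or expands the range from below, while each stop of exposure scales every remaining value.</p>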
<h4 id="contrast-brightness-saturation">Contrast Brightness Saturation<a href="#contrast-brightness-saturation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Where the Exposure module shifts the overall image values from a global perspective, modules such as the “contrast brightness saturation” allow finer tuning of the image within the range of the exposure.</p>
<p>To emphasize the model’s face, while also strengthening the interplay of shadow and light in the image, drop the brightness down to taste.  I brought the brightness levels down quite a bit (-0.31) to push almost all of the image below medium brightness.</p>
<figure class='big-vid'>
<a href='darktable_0003.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0003.jpg" alt='darktable contrast brightness saturation' width='960' height='526'>
</a>
</figure>

<p>Overall, this helps to emphasize the model’s face over the rest of the image.  While the rest of the image is composed of various dark/neutral tones, the model’s face is not.  Pushing the saturation down as well removes much of the color from the scene and face, bringing the skin tones back to something slightly more natural looking while also muting them.</p>
<figure class='big-vid'>
<a href='darktable_0004.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0004.jpg" alt='darktable contrast brightness saturation' width='960' height='526'>
</a>
</figure>

<p>The skin now looks a bit more natural but muted.  The background tones have become more neutral as well.  A very slight bump in contrast to taste finishes out this module.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04.html.php#contrast_brightness_saturation">darktable manual: contrast brightness saturation</a></p>
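<p>The adjustments in this module can be pictured as simple per-pixel operations.  The functions below are toy formulas for illustration (assumed math, not darktable’s exact code):</p>

```python
def brightness_contrast(value, brightness=0.0, contrast=0.0):
    """Toy model: brightness shifts every value; contrast expands or
    compresses values around middle grey (0.5)."""
    return (value + brightness - 0.5) * (1.0 + contrast) + 0.5

def desaturate(rgb, amount=1.0):
    """Toy model: blend each channel toward the pixel's luminance.
    amount=0.0 gives greyscale, 1.0 leaves the color untouched."""
    luma = 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]
    return tuple(luma + (c - luma) * amount for c in rgb)

# A -0.31 brightness drop pushes middle grey well down: 0.5 -> ~0.19
print(brightness_contrast(0.5, brightness=-0.31))
```
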
<h4 id="tone-curve">Tone Curve<a href="#tone-curve" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>A final modification to the exposure of the image is through a tone curve adjustment.  This gives us the ability to make some slight changes to particular tonal ranges.  In this case pushing the darker tones down a bit more while boosting the upper mid and high tones.</p>
<figure class='big-vid'>
<a href='darktable_0005.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0005.jpg" alt='darktable tone curve' width='960' height='526'>
</a>
</figure>

<p>This is actually a type of contrast increase, but targeted at specific tones based on the curve.  The darkest darks (bottom of the curve) get pushed a little bit darker, which will include most of the sweater, background, and shadow side of the model’s face.  The very slight rolling boost to the lighter tones primarily helps the face brighten up against the background even more.</p>
<p>The changes are very slight and to taste.  The tone curve is very sensitive to changes, and often only very small modifications are required to achieve a given result.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04s02.html.php#tone_curve">darktable manual: tone curve</a></p>
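<p>Conceptually, a tone curve is just a remapping of input tones to output tones through a few control points.  Here is a minimal piecewise-linear sketch (real tone curves use smooth splines, so treat this as an illustration only):</p>

```python
def tone_curve(value, points):
    """Map `value` through a piecewise-linear curve defined by
    (input, output) control points sorted by input."""
    if value <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if value <= x1:
            t = (value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]

# Push the darks down slightly and lift the upper midtones:
curve = [(0.0, 0.0), (0.25, 0.22), (0.75, 0.78), (1.0, 1.0)]
print(tone_curve(0.25, curve))  # 0.22
```
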
<h4 id="sharpen">Sharpen<a href="#sharpen" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>By default the sharpen module will apply a small amount of sharpening to the image.  The module uses an unsharp mask for sharpening, so the radius parameter is the blur radius fed into the unsharp mask.  I wanted to lightly sharpen very fine details, so I set the radius to ~1, with an amount around 0.9 and no threshold.  This produced results that are very hard to distinguish from the default settings, but it appears to sharpen smaller structures just slightly more.</p>
<figure class='big-vid'>
<a href='darktable_0006.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0006.jpg" alt='darktable sharpen' width='960' height='526'>
</a>
</figure>

<p>I personally include a final sharpening step as a side effect of using wavelet decompose for skin retouching later in the process with <a href="https://www.gimp.org">GIMP</a>.  As such, I am not usually too concerned about sharpening at this stage.  If I were, the equalizer module offers better wavelet-based control over sharpening.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04s04.html.php#sharpen">darktable manual: sharpen</a></p>
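<p>The unsharp-mask operation behind the module is straightforward: blur the image, subtract the blur from the original to isolate the fine detail, and add a fraction of that detail back.  Below is a 1-D sketch with a box blur standing in for the Gaussian (an illustration of the technique, not darktable’s code):</p>

```python
def box_blur(signal, radius):
    """Average each sample with its neighbors (stand-in for a Gaussian)."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, radius=1, amount=0.9, threshold=0.0):
    """Add `amount` of the detail (original minus blur) back to the
    original; differences below `threshold` are left untouched."""
    blurred = box_blur(signal, radius)
    return [
        s + amount * (s - b) if abs(s - b) > threshold else s
        for s, b in zip(signal, blurred)
    ]
```

<p>A small radius isolates only the finest structures, matching the ~1 radius used above, while a non-zero threshold would leave near-flat areas (such as smooth skin or sky) untouched.</p>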
<h4 id="denoise-profiled-">Denoise (profiled)<a href="#denoise-profiled-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The darktable team and its users have profiled the noise of many different cameras at various ISOs, building statistical models of noise versus brightness across the three color channels.  Using these profiles, darktable can do a better job of efficiently denoising images.  In the case of my camera (Olympus OM-D E-M5), there was already a profile captured for ISO 200.</p>
<figure class='big-vid'>
<a href='darktable_0007.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/darktable_0007.jpg" alt='darktable denoise profiled' width='960' height='526'>
</a>
</figure>

<p>In this case, the chroma noise wasn’t too bad, and a very slight reduction in luma noise would be sufficient for the image.  As such, I used a non-local means with a large patch size (to retain sharpness) and a low strength.  This was all applied uniformly against the HSV lightness option.</p>
<p><a href="https://www.darktable.org/usermanual/ch03s04s04.html.php#denoise_profiled">darktable manual: denoise - profiled</a></p>
<h4 id="export">Export<a href="#export" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Finally!  The image tones and exposure are in a desirable state, so export the results to a new file.  I tend to use either TIF or PNG at 16 bit, in case I want to work in a full 16-bit workflow with the latest <a href="https://www.gimp.org">GIMP</a>, now or in the future.</p>
<h3 id="gimp">GIMP<a href="#gimp" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>When there are still some pixel-level modifications that need to be done to the image, the go-to software is <a href="https://www.gimp.org">GIMP</a>.</p>
<ul>
<li>Skin retouching</li>
<li>Spot healing/touchups</li>
<li>Background rebuild</li>
</ul>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/wilber-big.png" alt='GIMP - GNU Image Manipulation Program <3' width='300' height='224'>
</figure>


<h4 id="skin-retouching-with-wavelet-decompose">Skin Retouching with Wavelet Decompose<a href="#skin-retouching-with-wavelet-decompose" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This step is not always needed, but who doesn’t want their skin to look a little nicer if possible?</p>
<p>The ability to modify an image based on detail scales isolated on their own layers is a very powerful tool.  The approach is similar to frequency separation, but has the advantage of providing multiple frequencies to modify simultaneously, at progressively larger detail scales.  This offers greater flexibility and an easier workflow than frequency separation (you can work on any detail scale simply by switching to a different layer).</p>
<p>I used to use the wonderful <a href="http://registry.gimp.org/node/11742">Wavelet Decompose</a> plugin from marcor on the GIMP plugin registry.  I have since switched to using the same result from <a href="http://gmic.eu">G’MIC</a> once David Tschumperlé added it in for me.  It can be found in G’MIC under:</p>
<p class='Cmd'>Details &rarr; Split details [wavelets]</p>

<p>Running <strong>Split details [wavelets]</strong> against the image to produce 5 wavelet scales and a residual layer yields (cropped):</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/wavelets-example.jpg" alt='Wavelet scales example decompose' width='640' height='960'>
</figure>

<p>The plugin (or script) will produce 5 layers of isolated details plus a residual layer of low-frequency color information, seen here in ascending order of detail scale.  On the finest scales (1 &amp; 2) the details are hard to discern, as they are quite fine.</p>
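<p>This style of decomposition can be sketched as repeated blurs at doubling radii, keeping the difference at each step.  Summing every scale layer plus the residual reconstructs the original exactly, which is why edits to a single layer blend back so seamlessly.  A 1-D illustration follows (a box blur stands in for the wavelets G’MIC actually uses):</p>

```python
def box_blur(signal, radius):
    """Average each sample with its neighbors over a window."""
    n = len(signal)
    return [
        sum(signal[max(0, i - radius):min(n, i + radius + 1)])
        / (min(n, i + radius + 1) - max(0, i - radius))
        for i in range(n)
    ]

def split_details(signal, levels=5):
    """Blur at doubling radii; each scale layer keeps the detail lost
    between successive blurs.  Returns (scales, residual)."""
    scales, current = [], list(signal)
    for level in range(levels):
        blurred = box_blur(current, 2 ** level)
        scales.append([c - b for c, b in zip(current, blurred)])
        current = blurred
    return scales, current

# Summing every scale plus the residual reproduces the original exactly.
```
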
<p>To help visualize what the different scale levels look like, here is a view of the same levels above, normalized:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/wavelets-example-normalized.jpg" alt='Wavelet scales normalized' width='640' height='960'>
</figure>

<p>The normalized view shows clearly the various types of detail scales on each layer.</p>
<p>There are various types of changes that can be made to the final image from these details scales.  In this image, we are going to focus on evening out the skin tones overall.  The scales with the biggest impact on even skin tones for this image are 4 and 5.</p>
<p>A good workflow when smoothing overall skin tones with wavelet scales is to start from the largest detail scales and work down to the finer ones.  Usually, a pleasing amount of tonal smoothing can be accomplished in the first couple of coarse detail scales.</p>
<h4 id="skin-retouching-zones">Skin Retouching Zones<a href="#skin-retouching-zones" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Different portions of a face will often require different levels of smoothing.  Below is a rough map of facial contours to consider when retouching.  Not all faces will require the exact same regions, but it is a good starting point to consider when approaching a new image.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/skin-zones.jpg" alt='Skin retouching by zones' width='640' height='742'>
</figure>

<p>The selections are made with the Free Select Tool with the “Feather edges” option on and set to roughly 30px.</p>
<h4 id="smoothing">Smoothing<a href="#smoothing" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>A good starting point to consider is the forehead on the largest detail scale (5).  The basic workflow is to select a region of interest and a layer of detail, then to suppress the features on that detail level.  The method of suppressing features is a matter of personal taste but is usually done across the entire selection using a blur filter of some sort.</p>
<p>A good first choice would be to use a gaussian blur (or Selective Gaussian Blur) to smooth the selection.  A better choice, if G’MIC is installed, is to use a bilateral blur for its edge-preserving properties.  The rest of these examples will use the bilateral blur for smoothing.</p>
<p>Considering the forehead region:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/forehead-orig-scale5-4.jpg" alt='Skin retouching wavelet scales forehead' width='640' height='1397'>
</figure>

<p>The first image is the original.  The second image is after running a bilateral blur (in G’MIC: Smooth [bilateral]), with the default parameter values:</p>
<ul>
<li>Spatial variance: 10</li>
<li>Value variance: 7</li>
<li>Iterations: 2</li>
</ul>
<p>These values were chosen from experience using this filter for the same purpose across many, many images.  The results of running a single blur on the largest wavelet scale are immediately obvious.  The unevenness of the skin and tones overall is smoothed in a pleasing way, while still retaining the finer details that allow the eye to see a realistic skin texture.</p>
<p>The last image is the result of working on the next detail scale layer down (Wavelet scale 4), with much softer blur parameters:</p>
<ul>
<li>Spatial variance: 5</li>
<li>Value variance: 2</li>
<li>Iterations: 1</li>
</ul>
<p>This pass does a good job of finishing off the skin tones globally.  The overall impression of the skin is much smoother than the original, but crucial fine details are all left intact (wrinkles, pores) to keep it looking realistic.</p>
<p>This same process is repeated for each of the facial regions described.  In some cases running the first bilateral blur on the largest scale level is enough to even out the tones (the cheeks and upper lip, for example).  The chin got the same treatment as the forehead.  The process is entirely subjective, and the parameters will vary from person to person.  Experimentation is encouraged here.</p>
<p>More importantly, the key word to consider while working on skin tones is <strong><em>moderation</em></strong>.  It is also important to check your results zoomed out, as this will give you an impression of the image as seen when scaled to something more web-sized.  A good rule of thumb might be: </p>
<blockquote>
<p>“If it looks good to you, go back and reduce the effect more”.</p>
</blockquote>
<p>The original vs. results after wavelet smoothing:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/face-wavelet.jpg" alt='Mairi Face Wavelet' data-swap-src='face-original.jpg' width='640' height='741'>
<figcaption>
Wavelet Smoothed.<br>
Click to compare original
</figcaption>
</figure>

<noscript>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/face-original.jpg" alt='Mairi Face Original' width='640' height='741'>
<figcaption>
Original
</figcaption>
</figure>
</noscript>

<p>When the work is finished on the wavelet scales, a new layer from all of the visible layers can be created to continue touching up spot areas that may need it.</p>
<p class='Cmd'>Layer → New from Visible</p>


<h4 id="spot-touchups">Spot Touchups<a href="#spot-touchups" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Wavelets are good for large-scale smoothing over selected areas, but a different set of tools is required for spot touchups.  For example, there is a stray hair that runs across the model’s forehead that can be removed using the Heal tool.</p>
<p>For best results when using the Heal tool, use a hard-edged brush.  Soft edges can sometimes lead to a slight smearing in the feathered edge of a brush that is undesirable.  Due to the nature of the heal algorithm’s sampling, it is also advisable to avoid trying to heal across hard/contrasty edges.</p>
<p>The Heal tool is also good for small blemishes that would have been tedious to repair across all of the wavelet scales from the previous section.  This is a good time to repair hot-spots, fly-away hairs, and other small details.</p>
<h4 id="sweater-enhancement">Sweater Enhancement<a href="#sweater-enhancement" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The model is wearing a nicely textured sweater, but the details and texture are slightly muted.  A small increase in contrast and local details will help to enhance the textures and tones.  One method of enhancing local details is to use the Unsharp Mask filter with a high radius and low amount (“HiRaLoAm” is an acronym some might use for this).</p>
<p>Create a duplicate of the “Spot Healing” layer that was worked on in the previous step, and apply an Unsharp Mask to the layer using HiRaLoAm values.</p>
<p>For example, a good starting point for parameters might be:</p>
<ul>
<li>Radius: 200</li>
<li>Amount: 0.25</li>
</ul>
<p>With these parameters the sharpen function will instead tend to increase local contrast more, providing more “presence” or “pop” to the sweater texture.</p>
<h4 id="background-rebuild">Background Rebuild<a href="#background-rebuild" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The background of the image is a little too uniformly dark and could benefit from some lightening and variation.  A nice lighter background gradient will enhance the subject a little.</p>
<p>Normally this could be obtained through the use of a second strobe (probably gridded or with a snoot) firing at the background.  In our case we will have to fake the same result through some masking.</p>
<p>First, a crop is chosen to focus the composition more strongly on the subject.  I placed the center of the model’s face along the right-side golden-section vertical and tried to place things near the center of the frame:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-cropped.jpg" alt='Mairi cropped' width='640' height='800'>
</figure>

<p>The slightly centered crop emulates the type of framing that might be expected from a classical painting (thereby strengthening the overall theme of the portrait further).</p>
<h4 id="subject-isolation">Subject Isolation<a href="#subject-isolation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>There are a few different methods to approach the background modification.  The method I describe here is simply one of them.</p>
<p>The image is duplicated, and the duplicate’s levels are raised to brighten it considerably.  A simple layer mask can then control where, and how strongly, the brightening appears in the image.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-isolation.jpg" alt='Mairi isolation' width='640' height='799'>
</figure>

<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-isolation-layers.png" alt='Mairi isolation layers' width='259' height='286'>
</figure>

<p>This is what will give our background a gradient of light.  Getting the subject back to dark requires masking her out on a layer mask.  A quick way to get a mask to work from is to add a layer mask to the “Over” layer that lets the background show through but leaves the subject opaque.</p>
<p>Add a layer mask to the “Over” layer as a “Grayscale copy of layer”, and check the “Invert mask” option:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-isolation-add-layer-mask.png" alt='Mairi isolation add layer mask' width='297' height='383'>
</figure>

<p>With an initial mask in place, a quick use of the tool:</p>
<p class='Cmd'>Colors &rarr; Threshold</p>

<p>will allow you to modify the mask to define the shoulder of the model as a good transition.  The mask will be quite narrow.  Adjust the threshold until the lighter background is speckle-free and there is a good definition of the edge of the sweater against the background.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-isolation-threshold.jpg" alt='Mairi threshold' width='640' height='311'>
</figure>
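<p>What the Threshold tool does to the mask is easy to state precisely: every mask pixel at or above the cutoff becomes opaque white, and everything else becomes black.  A sketch with hypothetical 0&ndash;255 values:</p>

```python
def threshold_mask(gray_pixels, cutoff=128):
    """Binarize a grayscale layer mask: at or above the cutoff becomes
    white (opaque); everything else becomes black (transparent)."""
    return [255 if p >= cutoff else 0 for p in gray_pixels]

# Raising the cutoff removes speckle from the lighter background:
print(threshold_mask([12, 40, 130, 200, 90], cutoff=120))
# [0, 0, 255, 255, 0]
```
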

<p>Once the initial mask is in place it can be cleaned up further by making the subject entirely opaque (white on the mask) and the background fully transparent (black on the mask).  This is easily done with the paint tools, and with little effort a decent mask and result can be had:</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-isolation-final.jpg" alt='Mairi isolation final' width='640' height='799'>
</figure>

<p>This provides a nice contrast: the background is lighter behind the darker portions of the model, and darker behind the lighter areas of her face.</p>
<h4 id="lighten-face-highlights">Lighten Face Highlights<a href="#lighten-face-highlights" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Speaking of the subject’s face, there’s a nice, simple method for applying a small accent to the highlighted portions of the model’s face in order to draw more attention to her.</p>
<p>Duplicate the lightened layer that was used to create the background gradient, move it to the top of the layer stack, and remove the layer mask from it.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-lighten-layers.png" alt='Mairi Lighten Face Layers' width='258' height='282'>
</figure>

<p>Set the layer mode of the copied layer to “Lighten only”.</p>
<p>As before, add a new layer mask to it, “Grayscale copy of layer”, but don’t check the “Invert mask” option.  This time use the Levels tool:</p>
<p class='Cmd'>Colors → Levels</p>

<p>to raise the blacks of the mask up to about mid-way or more.  This will isolate the lightening mask to the brightest tones in the image, which happen to correspond to the model’s face.  You should see your adjustments modify the mask on-canvas in real time.  When you are happy with the highlights, apply them.</p>
<figure>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-lighten.jpg" alt='Mairi Lighten Highlights' width='640' height='799'>
</figure>
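<p>Both halves of this step are simple per-pixel operations: the “Lighten only” mode keeps the brighter of the two layers, and the Levels move on the mask crushes everything below the chosen black point to full transparency.  A sketch with assumed 0&ndash;255 math (GIMP’s internals differ in the details):</p>

```python
def lighten_only(base, top):
    """'Lighten only' keeps whichever layer is brighter, per pixel."""
    return [max(b, t) for b, t in zip(base, top)]

def raise_blacks(mask, black_point):
    """Levels on a mask: values at or below the black point become fully
    transparent; the remainder stretches back to the 0-255 range."""
    scale = 255.0 / (255 - black_point)
    return [max(0, round((p - black_point) * scale)) for p in mask]

# With the black point near the middle, only the brightest mask areas
# (the lit side of the face) keep any opacity:
print(raise_blacks([30, 120, 200, 255], 128))
# [0, 0, 145, 255]
```
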


<h4 id="last-sharpening-pass-grain">Last Sharpening Pass + Grain<a href="#last-sharpening-pass-grain" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Finally, I like to apply a last pass of sharpening to the image, and to overlay some grain from a grain field I have, to help add some structure to the image as well as mask any gradient issues from rebuilding the background.  For this particular image the grain step isn’t really needed, as there’s already sufficient luma noise to provide its own structure.</p>
<p>Usually, I will use the smallest of the wavelet scales from the prior steps and sometimes the next largest scale as well (Wavelet scale 1 &amp; 2).  I’ll leave Wavelet scale 1 at 100% opacity, and scale 2 usually around 50% opacity (to taste, of course).</p>
<figure class='big-vid'>
<a href='mairi-final.jpg'>
<img src="https://pixls.us/articles/a-chiaroscuro-portrait/mairi-final_960.jpg" alt='Mairi Final' style='max-height: 100vh;' width='862' height='1077'>
</a>
</figure>

<p>Minor touchups that could still be done might include darkening the chair in the bottom right corner, darkening the gradient in the bottom left corner, and possibly adding a slight white overlay to the eyes to subtly give them a small pop.</p>
<p>As it stands now I think the image is a decent representation of a chiaroscuro portrait that mimics the style of a classical composition and interplay between light and shadows across the subject.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ HD Photo Slideshow with Blender ]]></title>
            <link>https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/</guid>
            <pubDate>Tue, 12 Jul 2016 13:36:55 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/beck-roses.jpg" /><br/>
                 <h1>HD Photo Slideshow with Blender</h1>  
                 <h2>Because who doesn't love a challenge?</h2>   
                <p>While I was out at <a href="http://2016.texaslinuxfest.org/">Texas Linux Fest</a> this past weekend I got to watch a fun presentation from the one and only <a href="https://twitter.com/designbybeck">Brian Beck</a>.  He walked through an introduction to <a href="http://www.blender.org">Blender</a>, including an overview of creating his great <em>The Lady in the Roses</em> image that was a part of the <a href="http://librecal2015.libreart.info/en/">2015 Libre Calendar</a> project.</p>
<p>Coincidentally, during my trip home community member <a href="https://discuss.pixls.us/users/Fotonut/">@Fotonut</a> asked about software to create an HD slideshow with images.  The first answer that jumped into my mind was to consider using <a href="http://www.blender.org">Blender</a> (a very close second was <a href="http://www.openshot.org/">OpenShot</a> because I had just spent some time talking with Jon Thomas about it).</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/beck-roses.jpg" alt='Brian Beck Roses' width='640' height='453'>
<figcaption>
<em>The Lady in the Roses</em> by Brian Beck <a class='cc' href='https://creativecommons.org/licenses/by/4.0/' title='Creative Commons By-Attribution 4.0'>cba</a>
</figcaption>
</figure>

<p>I figured this much Blender being talked about deserved at least a post to answer <a href="https://discuss.pixls.us/users/Fotonut/">@Fotonut</a>‘s question in greater detail.  I know that many community members likely abuse Blender in various ways as well &ndash; so please let me know if I get something way off!</p>
<h2 id="enter-blender"><a href="#enter-blender" class="header-link-alt">Enter Blender</a></h2>
<p>The reason that Blender was the first thing that popped into many folks’ minds when the question was posed is likely because it has been a go-to swiss-army knife of image and video creation for a long, long time.  For some it was the only viable video editing application for heavy use (not that there weren’t other projects out there as well).  This is partly due to the fact that it integrates so much capability into a single project.</p>
<p>The part that we’re interested in for the context of Fotonut’s original question is the <a href="https://www.blender.org/manual/de/editors/sequencer/">Video Sequence Editor</a> (VSE).  This is a very powerful (though often neglected) part of Blender that lets you arrange audio and video (and image!) assets along a timeline for rendering and some simple effects.  Which is actually perfect for creating a simple HD slideshow of images, as we’ll see.</p>
<h3 id="the-plan"><a href="#the-plan" class="header-link-alt">The Plan</a></h3>
<p>Blender’s interface is likely to take some getting used to for newcomers (right-click!) but we’ll be focusing on a <em>very</em> small subset of the overall program&mdash;so hopefully nobody gets lost.  The overall plan will be:</p>
<ol>
<li>Setup the environment for video sequence editing</li>
<li>Include assets (images) and how to manipulate them on the timeline</li>
<li>Add effects such as cross-fades between images</li>
<li>Setup exporting options</li>
</ol>
<p>There’s also an option of using a very helpful add-on for automatically resizing images to the correct size to maintain their aspect ratios. Luckily, Blender’s add-on system makes it trivially easy to set up.</p>
<h3 id="setup"><a href="#setup" class="header-link-alt">Setup</a></h3>
<p>On opening Blender for the first time we’re presented with the comforting view of the default cube in 3D space.  Don’t get too cozy, though.  We’re about to switch up to a different screen layout that’s already been created for us by default for Video Editing.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/main-window.jpg" alt='Blender default main window' width='960' height='540'>
<figcaption>
The main blender default view.
</figcaption>
</figure>

<p>The developers were nice enough to include various default “Screen Layout” options for different tasks, and one of them happens to be for <em>Video Editing</em>.  We can click on the screen layout option on the top menu bar and choose the one we want from the list (<em>Video Editing</em>):</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/screen-layout.jpg" alt='Blender screen layout options' width='960' height='540'>
<figcaption>
Choosing a new Screen Layout option.
</figcaption>
</figure>

<p>Our screen will then change to the new layout where the top left pane is the F-curve window, the top right is the video preview, the large center section is the sequencer, and the very bottom is a timeline.  Blender will let you arrange, combine, and collapse all the various panes into just about any layout that you might want, including changing what each of them is showing.  For our example we will <em>mostly</em> leave it all as-is with the exception of the F-curve pane, which we won’t be using and don’t need.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/video-editing-layout.jpg" alt='Blender video editing layout' width='960' height='540'>
<figcaption>
The Video Editing default layout.
</figcaption>
</figure>

<p>What we can do now is to define what the resolution and framerate of our project should be.  This is done in the <strong>Properties</strong> pane, which isn’t shown right now.  So we will change the <strong>F-Curve</strong> pane into the <strong>Properties</strong> pane by clicking on the button shown in red above to change the panel type.  We want to choose <strong>Properties</strong> from the options in the list:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/change-to-properties.jpg" alt='Blender change pane to properties' width='601' height='528'>
</figure>

<p>Which will turn the old F-Curve pane into the <strong>Properties</strong> pane:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/properties.jpg" alt='Blender properties' width='569' height='373'>
</figure>


<p>You’ll want to set the appropriate X and Y resolution for your intended output (don’t forget to change the scaling from the default 50% to 100% while you’re there), along with your intended framerate.  Common rates might be 23.976 (23.98), 25, 30, or even 60 frames per second.  If your intended target is something like YouTube or an HD television you can probably safely use 30 or 60 (just remember that a higher frame rate means a longer render time!).</p>
<p>For our example I’m going to set the output resolution to 1920&nbsp;&times;&nbsp;1080 at 30fps.</p>
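<p>For the script-minded, the same settings can be applied from Blender’s Python console using the bundled <code>bpy</code> API (a sketch only; this fragment runs inside Blender, not standalone):</p>

```python
import bpy  # Blender's built-in Python API; only available inside Blender

scene = bpy.context.scene
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 100  # the default is 50%
scene.render.fps = 30
```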
<h4 id="one-extra-thing"><a href="#one-extra-thing" class="header-link-alt">One Extra Thing</a></h4>
<p>Blender does need a little bit of help when it comes to using images in the sequence editor.  It has a habit of scaling images to whatever the output resolution is set to (ignoring the original aspect ratios). This can be fixed by simply applying a transform to the images, but that normally requires us to manually compute and enter the correct scaling factors to get the images back to their original aspect ratios.</p>
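<p>The scaling factors in question are just the ratio of the image and render aspect ratios, applied to whichever axis got stretched.  Here is a plain-Python sketch of the arithmetic (this is <em>not</em> Blender API code, and the image/render sizes are example values):</p>

```python
def fit_scale(image_w, image_h, render_w, render_h):
    """Scale factors for a transform that undoes Blender's
    stretch-to-render-size, letterboxing the image back to
    its original aspect ratio."""
    image_aspect = image_w / image_h
    render_aspect = render_w / render_h
    if image_aspect >= render_aspect:
        # image is relatively wider: keep full width, shrink height
        return 1.0, render_aspect / image_aspect
    # image is relatively taller: keep full height, shrink width
    return image_aspect / render_aspect, 1.0

# a 4:3 photo on a 16:9 canvas keeps full height and gets 75% width
scale_x, scale_y = fit_scale(4000, 3000, 1920, 1080)
```

<p>The add-on described next does this bookkeeping for you, which is exactly why it’s worth installing.</p>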
<p>I did find a nice small add-on <a href="http://blenderartists.org/forum/showthread.php?280731-VSE-Transform-tool">on this thread</a> at <a href="http://blenderartists.org">blenderartists.org</a> that binds some handy shortcuts onto the VSE for us. The author kgeogeo has the add-on <a href="https://github.com/kgeogeo/VSE_Transform_Tools">hosted on Github</a>, and you can download the <a href="http://www.python.org">Python</a> file directly from here: <a href="https://raw.githubusercontent.com/kgeogeo/VSE_Transform_Tools/master/VSE_Transform_Tool.py">VSE Transform Tool</a> (you can <strong>Right-Click</strong> and save the link).  Save the .py file somewhere easy to find.</p>
<p>To load the add-on manually we’re going to change the <strong>Properties</strong> panel to <strong>User Preferences</strong>:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/change-to-pref.jpg" alt='Blender change to preferences' width='568' height='538'>
</figure>

<p>Click on the <strong>Add-ons</strong> tab to open that window and at the bottom of the panel is an option to “Install from File…”.  Click that and navigate to the <code>VSE_Transform_Tool.py</code> file that you downloaded previously.</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-ons.jpg" alt='Blender add-ons' width='570' height='423'>
</figure>

<p>Once loaded, you’ll still need to <em>Activate</em> the plugin by clicking on the box:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-addon.jpg" alt='Blender adding add-ons' width='570' height='398'>
</figure>

<p>That’s it!  You’re now all set up to begin adding images and creating a slideshow.  You can set the <strong>User Preferences</strong> pane back to <strong>Properties</strong> if you want to.</p>
<h3 id="adding-images"><a href="#adding-images" class="header-link-alt">Adding Images</a></h3>
<p>Let’s have a look at adding images onto the sequencer.</p>
<p>You can add images either by choosing <strong>Add &rarr; Image</strong> from the VSE menu, navigating to your images’ location, and choosing them:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-image.jpg" alt='Blender VSE add image' width='585' height='276'>
</figure>

<p>Or by drag-and-dropping your images onto the sequencer timeline from Nautilus, Finder, Explorer, etc…</p>
<p>When you do, you’ll find that a strip now appears on the VSE window (purple in my case) that represents your image.  You should also see a preview of your video in the top-right preview window (sorry for the subject).</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-first-image.jpg" alt='Blender VSE add image' width='960' height='540'>
</figure>

<p>At this point we can use the handy add-on we installed previously by <strong>Right-Clicking</strong> on the purple strip to make sure it’s activated and then hitting the “T” key on the keyboard.  This will automatically add a transform to the image that scales it to the correct aspect ratio for you.  A small green <em>Transform</em> strip will appear above your purple image strip now:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-transform.jpg" alt='Blender VSE add transform strip' width='327' height='276'>
</figure>

<p>Your image should now also be scaled to fit at the correct aspect ratio.</p>
<h4 id="adjusting-the-image"><a href="#adjusting-the-image" class="header-link-alt">Adjusting the Image</a></h4>
<p>If you scroll your mouse wheel in the VSE window, you will zoom in and out along the time axis (the x-axis in the sequencer window): the visible time range compresses or expands as you scroll.</p>
<p>The middle-mouse button will let you pan around the sequencer.</p>
<p>The right-mouse button will select things.  You can try this now by extending how long your image is displayed in the video. <strong>Right-Click</strong> on the small arrow on the end of the purple strip to activate it.  A small number will appear above it indicating which frame it is currently on (26 in my example):</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/select-right.jpg" alt='Blender VSE' width='468' height='203'>
</figure>

<p>With the right handle active you can now either press “G” on the keyboard and drag the mouse to re-position the end of the strip, or <strong>Right-Click</strong> and drag to do the same thing. The timeline in seconds is shown along the bottom of the window for reference.  If we wanted to let the image be visible for 5 seconds total, we could drag the end to the 5+00 mark on the sequencer window.</p>
<p>Since I set the framerate to 30 frames per second, I can also drag the end to frame 150 (30fps * 5s = 150 frames).</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/five-seconds.jpg" alt='Blender VSE five seconds' width='582' height='170'>
</figure>
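<p>The seconds-and-frames arithmetic is worth keeping straight as your sequence grows.  A tiny helper (plain Python, purely for illustration) makes the conversion explicit:</p>

```python
def seconds_to_frames(seconds, fps=30):
    # duration in frames, rounded to the nearest whole frame
    return round(seconds * fps)

def frames_to_seconds(frames, fps=30):
    # the inverse: how long a strip of that many frames plays for
    return frames / fps

# 5 seconds at 30fps is the 150 frames used above
length = seconds_to_frames(5)
```
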

<p>When you drag the image strip, the transform strip will automatically adjust to fit (so you don’t have to worry about it).</p>
<p>If you had selected the center of the image strip instead of the handle on one end and tried to move it, you would find that you can move the entire strip around instead of one end.  This is how you can re-position image strips, which you may want to do when you add a second image to your sequencer.</p>
<p>Add a new image to your sequencer now following the same steps as above.</p>
<p>When I do, it adds a new strip back at the beginning of the timeline (basically where the current time is set):</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/second-image.jpg" alt='Blender VSE second image' width='624' height='211'>
</figure>

<p>I want to move this new strip so that it overlaps my first image by about half a second (or 15 frames).  Then I will pull the right handle to resize the display time to about 5 seconds also.</p>
<p>Click on the new strip (center, not the ends), and press the “G” key to move it.  Drag it right until the left side overlaps the previous image strip by a little bit:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/second-image-drag.jpg" alt='Blender VSE drag strip' width='560' height='196'>
</figure>

<p>When you click on the strip’s right handle to modify its length, notice the window on the far right of the VSE.  The <strong>Edit Strip</strong> window should also show the strip “Length” parameter in case you want to change it by manually inputting a value (like 150):</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/second-image-edit.jpg" alt='Blender VSE adjust strip' width='600' height='250'>
</figure>

<p>I forgot to use the add-on to automatically fix the aspect ratio.  With the strip selected I can press “T” at any time to invoke the add-on and fix the aspect ratio.</p>
<h3 id="adding-a-transition-effect"><a href="#adding-a-transition-effect" class="header-link-alt">Adding a Transition Effect</a></h3>
<p>With the two image strips slightly overlapping, we now want to define a simple cross fade between the two images as a transition effect.  This is actually something already built into the Blender VSE for us, and is easy to add.  We <em>do</em> need to be careful to select the right things to get the transition working correctly, though.</p>
<p>Once you’ve added a transform effect to a strip, you’ll need to make sure that subsequent operations use the <em>transform</em> strip as opposed to the original image strip.</p>
<p>For instance, to add a cross fade transition between these two images, click the first image strip transform (green), then <strong>Shift-Click</strong> on the second image transform strip (green). Now they are both selected, so add a <em>Gamma Cross</em> by using the <strong>Add</strong> menu in the VSE (Add &rarr; Effect Strip… &rarr; Gamma Cross):</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/add-gamma-cross.jpg" alt='Blender VSE add gamma cross' width='600' height='531'>
</figure>

<p>This will add a <em>Gamma Cross</em> effect as a new strip that is locked to the overlap of the two images.  It will do a cross-fade between the two images for the duration of the overlap.  You can <strong>Left-Click</strong> now and scrub over the cross-fade strip to see it rendered in the preview window if you’d like:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/gamma-cross-applied.jpg" alt='Blender Gamma Cross' width='500' height='442'>
</figure>

<p>At any time you can also use the hotkey “Alt-A” to view a render preview.  This may run slowly if your machine is not super-fast, but it should run well enough to give you a general sense of what you’ll get.</p>
<p>If you want to modify the transition effect by changing its length, you can just increase the overlap between the strips as desired (using the original image strip &mdash; if you try to drag the transform strip you’ll find it locked to the original image strip and won’t move).</p>
<h4 id="repeat-repeat"><a href="#repeat-repeat" class="header-link-alt">Repeat Repeat</a></h4>
<p>You can basically follow these same steps for as many images as you’d like to include.</p>
<h3 id="exporting"><a href="#exporting" class="header-link-alt">Exporting</a></h3>
<p>To generate your output you’ll still need to change a couple of things to get what you want…</p>
<h4 id="render-length"><a href="#render-length" class="header-link-alt">Render Length</a></h4>
<p>You may notice on the VSE that there are vertical lines outside of which things will appear slightly grayed out.  This is a visual indicator of the total start/end of the output.  This is controlled via the <strong>Start</strong> and <strong>End</strong> frame settings on the timeline (bottom pane):</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/start-end.jpg" alt='Blender VSE start and end' width='640' height='201'>
</figure>

<p>You’ll need to set the <strong>End</strong> value to match your last output frame from your video sequence.  You can find this value by selecting the last strip in your sequence and pressing the “G” key: the start/end frame numbers of that last strip will be visible (you’ll want the last frame value, of course).</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/last-frame.jpg" alt='Blender VSE end frame' width='509' height='299'>
<figcaption>
Current last frame of my video is 284
</figcaption>
</figure>

<p>In my example above, my anticipated last frame should be 284, but the last render frame is currently set to 250.  I would need to update that <strong>End</strong> frame to match my video to get output as expected.</p>
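<p>The bookkeeping here is simple enough to do in your head, but stated as code it’s a one-liner (the strip start frames and lengths below are made-up illustration numbers, not values Blender gives you):</p>

```python
def last_frame(strips):
    # strips given as (start_frame, length_in_frames) pairs;
    # a strip starting at frame s with length n occupies frames s .. s+n-1
    return max(start + length - 1 for start, length in strips)

# e.g. two 150-frame images, the second starting at frame 135:
end = last_frame([(1, 150), (135, 150)])  # the End setting should be 284
```
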
<h4 id="render-format"><a href="#render-format" class="header-link-alt">Render Format</a></h4>
<p>Back on the <strong>Properties</strong> panel (assuming you set the top-left panel back to <strong>Properties</strong> earlier&mdash;if not do so now), if we scroll down a bit we should see a section dedicated to <em>Output</em>.</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/output-options.jpg" alt='Blender Properties Output Options' width='570' height='374'>
</figure>

<p>You can change the various output options here to do frame-by-frame dumps or to encode everything into a video container of some sort. You can set the output directory to be something different if you don’t want it rendered into /tmp here.</p>
<p>For my example I will encode the video with <a href="https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC">H.264</a>:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/output-h264.jpg" alt='Blender output h264' width='585' height='347'>
</figure>

<p>By choosing this option, Blender will then expose a new section of the <strong>Properties</strong> panel for setting the <em>Encoding</em> options:</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/encoding-panel.jpg" alt='Blender output encoding options' width='570' height='347'>
</figure>

<p>I will often use the H.264 preset and enable the <em>Lossless Output</em> checkbox option. If I don’t have the disk space to spare, I can instead set different options to shrink the resulting file size down further.  The <em>Bitrate</em> option will have the largest effect on final file size and image quality.</p>
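<p>A rough rule of thumb for sizing: the video stream occupies about bitrate &times; duration on disk (ignoring audio and container overhead).  As a quick sketch, with made-up example numbers:</p>

```python
def video_size_mb(bitrate_kbps, duration_s):
    # kilobits/s -> bits, divide by 8 for bytes, by 1e6 for megabytes
    return bitrate_kbps * 1000 * duration_s / 8 / 1e6

# a 60-second slideshow encoded at 6000 kbps is roughly 45 MB
size = video_size_mb(6000, 60)
```
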
<p>When everything is ready (or you just want to test it out), you can render your output by scrolling back to the top of the <strong>Properties</strong> window and pressing the <em>Animation</em> button, or by hitting <strong>Ctrl-F12</strong>.</p>
<figure>
<img src="https://pixls.us/blog/2016/07/hd-photo-slideshow-with-blender/render-button.jpg" alt='Blender Render Button' width='570' height='374'>
</figure>


<h3 id="the-results"><a href="#the-results" class="header-link-alt">The Results</a></h3>
<p>After adding portraits of the entire GIMP team from LGM London and inserting gamma cross-fade transitions between them, here are my results:</p>
<div class='big-vid'>
<iframe width="853" height="480" src="https://www.youtube-nocookie.com/embed/i56iRHp9mkk?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p><br></p>
<h2 id="in-summary"><a href="#in-summary" class="header-link-alt">In Summary</a></h2>
<p>This may seem overly complicated, but in reality much of what I covered here is the setup to get started and the settings for output.  Once you’ve done this successfully it becomes pretty quick to use.  One thing you can do is set up the environment the way you like it and then save the .blend file to use as a template for further work like this in the future.  The next time you need to generate a slideshow you’ll have everything all ready to go and will only need to start adding images to the editor.</p>
<p>While looking for information on some VSE shortcuts I <em>did</em> run across a really interesting-looking set of functions that I want to try out: <a href="http://blendervelvets.org/">the Blender Velvets</a>. I’m going to give it a good look when I get a chance, as there are quite a few interesting additions available.</p>
<p>For Blender users: did I miss anything?</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Texas Linux Fest 2016 ]]></title>
            <link>https://pixls.us/blog/2016/07/texas-linux-fest-2016/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/07/texas-linux-fest-2016/</guid>
            <pubDate>Mon, 04 Jul 2016 11:48:16 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/07/texas-linux-fest-2016/txlf-lede.png" /><br/>
                 <h1>Texas Linux Fest 2016</h1>  
                 <h2>Everything's Bigger in Texas!</h2>   
<p>While in London this past April I got a chance to hang out a bit with <a href="https://lwn.net/">LWN.net</a> editor and fellow countryman, <a href="https://plus.google.com/110044519468273778141">Nathan Willis</a>.  (It sounds like the setup for a bad joke: <em>“An Alabamian and a Texan meet in a London pub…”</em>). Which was awesome because even though we were both at LGM2014, we never got a chance to sit down and chat.</p>
<!-- more -->
<p>So it was super-exciting for me to hear from Nate about possibly doing a photowalk and Free Software photo workshop at the <a href="http://2016.texaslinuxfest.org/">2016 Texas Linux Fest</a>, and as soon as I cleared it with my boss, I agreed!</p>
<figure>
<img src="https://pixls.us/blog/2016/07/texas-linux-fest-2016/dot-eyes-open.jpg" alt='Dot at LGM 2014'>
<figcaption>
My Boss</figcaption>
</figure>

<p><em><strong>So…</strong> mosey on down</em> to Austin, Texas on July 8-9 for <a href="http://2016.texaslinuxfest.org/">Texas Linux Fest</a> and join <a href="http://www.shallowsky.com/">Akkana Peck</a> and myself for a photowalk first thing in the morning on Friday (July 8), to be immediately followed by workshops from both of us.  I’ll be talking about Free Software photography workflows and projects, and Akkana will be focusing on a GIMP workshop.</p>
<p>This is part of a larger “Open Graphics” track on the entire first day that also includes <a href="http://gould.cx/ted/">Ted Gould</a> creating technical diagrams using <a href="https://inkscape.org/">Inkscape</a>, <a href="http://2016.texaslinuxfest.org/node/103">Brian Beck</a> doing a <a href="http://www.blender.org">Blender</a> tutorial, and <a href="http://2016.texaslinuxfest.org/node/55">Jonathon Thomas</a> showing off <a href="http://www.openshot.org/">OpenShot 2.0</a>.  You can find the <a href="http://2016.texaslinuxfest.org/content/schedule">full schedule on their website</a>.</p>
<p>I hope to see some of you there!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Color Manipulation with the Colour Checker LUT Module ]]></title>
            <link>https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/</guid>
            <pubDate>Wed, 29 Jun 2016 13:44:08 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/darktable-lut-lede.jpg" /><br/>
                 <h1>Color Manipulation with the Colour Checker LUT Module</h1>  
                 <h2>hanatos tinkering in darktable again...</h2>   
                <p>I was lucky to get to spend some time in London with the darktable crew.
Being the wonderful nerds they are, they were constantly working on <em>something</em> while we were there.
One of the things that Johannes was working on was the colour checker module for darktable.</p>
<p>Having recently acquired a Fuji camera, he was working on matching color styles from the built-in rendering on the camera.
Here he presents some of the results of what he was working on.</p>
<p><em>This was originally published on the <a href="http://www.darktable.org/2016/05/colour-manipulation-with-the-colour-checker-lut-module/">darktable blog</a>, and is being republished here with permission.</em> &mdash;Pat</p>
<!-- more -->
<hr>
<h2 id="motivation"><a href="#motivation" class="header-link-alt">motivation</a></h2>
<p>for raw photography there exist great presets for nice colour rendition:</p>
<ul>
<li>in-camera colour processing such as canon picture styles</li>
<li>fuji film-emulation-like presets (provia velvia astia classic-chrome)</li>
<li><a title="pat david's film emulation luts" href="http://gmic.eu/film_emulation/">pat david’s film emulation luts</a></li>
</ul>
<p>unfortunately these are eat-it-or-die canned styles or icc lut profiles. you
have to apply them and be happy or tweak them with other tools. but can we
extract meaning from these presets? can we have understandable and tweakable
styles like these?</p>
<p>in a first attempt, i used a non-linear optimiser to control the parameters of
the modules in darktable’s processing pipeline and try to match the output of
such styles. while this worked reasonably well for some of pat’s film luts, it
failed completely on canon’s picture styles. it was very hard to reproduce
generic colour-mapping styles in darktable without parametric blending.</p>
<p>that is, we require a generic colour to colour mapping function. this should be
equally powerful as colour look up tables, but enable us to inspect it and
change small aspects of it (for instance only the way blue tones are treated).</p>
<h2 id="overview"><a href="#overview" class="header-link-alt">overview</a></h2>
<p>in git master, there is a new module to implement generic colour mappings: the
colour checker lut module (lut: look up table). the following will be a
description how it works internally, how you can use it, and what this is good
for.</p>
<p>in short, it is a colour lut that remains understandable and editable. that is,
it is not a black-box look up table, but you get to see what it actually does
and change the bits that you don’t like about it.</p>
<p>the main use cases are precise control over source colour to target colour
mapping, as well as matching in-camera styles that process raws to jpg in a
certain way to achieve a particular look. an example of this are the fuji film
emulation modes. to this end, we will fit a colour checker lut to achieve their
colour rendition, as well as a tone curve to achieve the tonal contrast.</p>
<figure>
<img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/target.jpg" alt="target" width="560" height="416" />
</figure>

<p>to create the colour lut, it is currently necessary to take a picture of an
<a title="wolf faust's it8 charts" href="http://targets.coloraid.de">it8 target</a> (well, technically we support any similar target, but
didn’t try them yet so i won’t really comment on it). this gives us a raw
picture with colour values for a few colour patches, as well as an in-camera jpg
reference (in the raw thumbnail..), and measured reference values (what we know
it <strong>should</strong> look like).</p>
<p>to map all the other colours (that fell in between the patches on the chart) to
meaningful output colours, too, we will need to interpolate this measured
mapping.</p>
<h2 id="theory"><a href="#theory" class="header-link-alt">theory</a></h2>
<p>we want to express a smooth mapping from input colours \(\mathbf{s}\) to target
colours \(\mathbf{t}\), defined by a couple of sample points (which will in our
case be the 288 patches of an it8 chart).</p>
<p>the following is a quick summary of what we implemented and much better
described in JP’s siggraph course <a href="#ref0">[0]</a>.</p>
<h3 id="radial-basis-functions"><a href="#radial-basis-functions" class="header-link-alt">radial basis functions</a></h3>
<p>radial basis functions are a means of interpolating between sample points
via</p>
<p>$$f(x) = \sum_i c_i\cdot\phi(| x - s_i|),$$</p>
<p>with some appropriate kernel \(\phi(r)\) (we’ll get to that later) and a set of
coefficients \(c_i\) chosen to make the mapping \(f(x)\) behave like we want it at
and in between the source colour positions \(s_i\). now to make
sure the function actually passes through the target colours, i.e. \(f(s_i) =
t_i\), we need to solve a linear system. because we want the function to take
on a simple form for simple problems, we also add a polynomial part to it. this
makes sure that black and white profiles turn out to be black and white and
don’t oscillate around zero saturation colours wildly. the system is</p>
<p>$$ \left(\begin{array}{cc}A &amp;P\\P^t &amp; 0\end{array}\right) \cdot \left(\begin{array}{c}\mathbf{c}\\\mathbf{d}\end{array}\right) = \left(\begin{array}{c}\mathbf{t}\\0\end{array}\right)$$</p>
<p>where</p>
<p>$$ A=\left(\begin{array}{ccc}
\phi(r_{00})&amp; \phi(r_{10})&amp; \cdots \\
\phi(r_{01})&amp; \phi(r_{11})&amp; \cdots \\
\phi(r_{02})&amp; \phi(r_{12})&amp; \cdots \\
\cdots &amp; &amp; \cdots
\end{array}\right),$$</p>
<p>and \(r_{ij} = | s_i - s_j |\) is the distance (CIE 76 \(\Delta\)E,
\(\sqrt{(L_s - L_t)^2 + (a_s - a_t)^2 + (b_s - b_t)^2}\) ) between
source colours \(s_i\) and \(s_j\), in our case</p>
<p>$$P=\left(\begin{array}{cccc}
L_{s_0}&amp; a_{s_0}&amp; b_{s_0}&amp; 1\\
L_{s_1}&amp; a_{s_1}&amp; b_{s_1}&amp; 1\\
\cdots
\end{array}\right)$$</p>
<p>is the polynomial part, and \(\mathbf{d}\) are the coefficients to the polynomial
part. these are here so we can for instance easily reproduce \(t = s\) by setting
\(\mathbf{d} = (1, 1, 1, 0)\) in the respective row. we will need to solve this
system for the coefficients \(\mathbf{c}=(c_0,c_1,\cdots)^t\) and \(\mathbf{d}\).</p>
<p>many options will do the trick and solve the system here. we use singular value
decomposition in our implementation. one advantage is that it is robust against
singular matrices as input (accidentally map the same source colour to
different target colours for instance).</p>
<h3 id="thin-plate-splines"><a href="#thin-plate-splines" class="header-link-alt">thin plate splines</a></h3>
<p>we didn’t yet define the radial basis function kernel. it turns out so-called
thin plate splines have very good behaviour in terms of low oscillation/low curvature
of the resulting function. the associated kernel is</p>
<p>$$\phi(r) = r^2 \log r.$$</p>
<p>note that there is a similar functionality in gimp as a gegl colour mapping
operation (which i believe is using a shepard-interpolation-like scheme).</p>
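<p>to make the above concrete, here is a deliberately tiny, self-contained sketch in plain python: it builds and solves the block system above for a 1-d mapping (instead of 3-d Lab colours) with a linear polynomial part, using gaussian elimination in place of the svd, and with made-up sample values:</p>

```python
from math import log

def phi(r):
    # thin plate spline kernel; phi(0) = 0 by continuity
    return r * r * log(r) if r > 0.0 else 0.0

def solve(M, b):
    # naive gaussian elimination with partial pivoting
    # (stands in for the svd used in the real implementation)
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_tps(src, tgt):
    # assemble [[A, P], [P^t, 0]] . (c, d) = (t, 0), with P rows (s_i, 1)
    n = len(src)
    rows = [[phi(abs(src[i] - src[j])) for j in range(n)] + [src[i], 1.0]
            for i in range(n)]
    rows.append(list(src) + [0.0, 0.0])   # P^t, first row
    rows.append([1.0] * n + [0.0, 0.0])   # P^t, second row
    coeff = solve(rows, list(tgt) + [0.0, 0.0])
    c, d = coeff[:n], coeff[n:]
    return lambda x: (sum(ci * phi(abs(x - si)) for ci, si in zip(c, src))
                      + d[0] * x + d[1])

# identity data comes back as identity: c ~ 0, d ~ (1, 0)
f = fit_tps([0.0, 25.0, 50.0, 100.0], [0.0, 25.0, 50.0, 100.0])
```

<p>in the real module the samples are 3-d Lab colours, the polynomial rows are \((L, a, b, 1)\), and the distances are CIE 76 \(\Delta\)E values, but the structure of the system is the same.</p>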
<h3 id="creating-a-sparse-solution"><a href="#creating-a-sparse-solution" class="header-link-alt">creating a sparse solution</a></h3>
<p>we will feed this system with 288 patches of an it8 colour chart. that means,
with the added four polynomial coefficients, we have a total of 292
source/target colour pairs to manage here. apart from performance issues when
executing the interpolation, we didn’t want that to show up in the gui like
this, so we were looking to reduce this number without introducing large error.</p>
<p>indeed this is possible, and literature provides a nice algorithm to do so, which
is called <strong>orthogonal matching pursuit</strong> <a href="#ref1">[1]</a>.</p>
<p>this algorithm will select the most important handful of coefficients \(\in
\mathbf{c},\mathbf{d}\), to keep the overall error low. in practice we run it up
to a predefined number of patches (\(24=6\times 4\) or \(49=7\times 7\)), to make
best use of gui real estate.</p>
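<p>a toy version of the greedy selection can be sketched in plain python (a made-up dictionary matrix, not the actual darktable code): each round picks the column most correlated with the residual, then re-fits all chosen coefficients by least squares:</p>

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def column(D, j):
    return [row[j] for row in D]

def solve(M, b):
    # tiny gaussian elimination for the least-squares normal equations
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def omp(D, y, k):
    """orthogonal matching pursuit: pick k columns of D that best
    approximate y, re-fitting the coefficients at every step."""
    selected, residual, coeffs = [], y[:], []
    for _ in range(k):
        # most correlated unused atom
        j = max((j for j in range(len(D[0])) if j not in selected),
                key=lambda j: abs(dot(column(D, j), residual)))
        selected.append(j)
        # least squares on the active set via normal equations
        cols = [column(D, j) for j in selected]
        G = [[dot(a, b) for b in cols] for a in cols]
        coeffs = solve(G, [dot(a, y) for a in cols])
        residual = [y[i] - sum(c * col[i] for c, col in zip(coeffs, cols))
                    for i in range(len(y))]
    return selected, coeffs
```

<p>the real implementation applies the same idea to the rbf/polynomial system above, keeping only the most expressive patches.</p>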
<h2 id="the-colour-checker-lut-module"><a href="#the-colour-checker-lut-module" class="header-link-alt">the colour checker lut module</a></h2>
<figure>
<img  src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/clut-iop.png" alt="clut-iop" width="522" height="592"  />
</figure>


<h3 id="gui-elements"><a href="#gui-elements" class="header-link-alt">gui elements</a></h3>
<p>when you select the module in darkroom mode, it should look something like the
image above (configurations with more than 24 patches are shown in a 7\(\times\)7 grid
instead). by default, it will load the 24 patches of a colour checker classic
and initialise the mapping to identity (no change to the image).</p>
<ul>
<li>the grid shows a list of coloured patches. the colours of the patches are
the source points \(\mathbf{s}\).</li>
<li>the target colour \(t_i\) of the selected patch \(i\) is shown as
offset controlled by sliders in the ui under the grid of patches.</li>
<li>an outline is drawn around patches that have been altered, i.e. the source
and target colours differ.</li>
<li>the selected patch is marked with a white square, and the number shows
in the combo box below.</li>
</ul>
<h3 id="interaction"><a href="#interaction" class="header-link-alt">interaction</a></h3>
<p>to interact with the colour mapping, you can change both source and target
colours. the main use case is to change the target colours however, and start
with an appropriate <strong>palette</strong> (see the presets menu, or download a style
somewhere).</p>
<ul>
<li>you can change lightness (L), green-red (a), blue-yellow (b), or saturation
(C) of the target colour via sliders.</li>
<li>select a patch by left clicking on it, or using the combo box, or using the
colour picker</li>
<li>to change source colour, select a new colour from your image by using the
colour picker, and shift-left-click on the patch you want to replace.</li>
<li>to reset a patch, double-click it.</li>
<li>right-click a patch to delete it.</li>
<li>shift-left-click on empty space to add a new patch (with the currently
picked colour as source colour).</li>
</ul>
<hr>
<h2 id="example-use-cases"><a href="#example-use-cases" class="header-link-alt">example use cases</a></h2>
<h3 id="example-1-dodging-and-burning-with-the-skin-tones-preset"><a href="#example-1-dodging-and-burning-with-the-skin-tones-preset" class="header-link-alt">example 1: dodging and burning with the skin tones preset</a></h3>
<p>to process the following image i took of pat in the overground, i started with
the <strong>skin tones</strong> preset in the colour checker module (right click on nothing in
the gui or click on the icon with the three horizontal lines in the header and
select the preset).</p>
<p>then, i used the colour picker (little icon to the right of the patch# combo
box) to select two skin tones: very bright highlights and dark shadow tones.
the former i dragged the brightness down a bit, the latter i brightened up a
bit via the lightness (L) slider. this is the result:</p>
<figure>
<img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/pat_crop_02.png" alt="original" width='250' height='375' style='width:250px; display: inline; margin-right: 0.5rem;' />
<img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/pat_crop_03_flat.png" alt="dialed down contrast in skin tones"  width='250' height='375' style='width:250px; display: inline;' />
</figure>



<h3 id="example-2-skin-tones-and-eyes"><a href="#example-2-skin-tones-and-eyes" class="header-link-alt">example 2: skin tones and eyes</a></h3>
<p>in this image, i started with the fuji classic chrome-like style (see below for
a download link), to achieve the subdued look in the skin tones. then, i
picked the iris colour and saturated this tone via the saturation slider.</p>
<p>as a side note, the flash didn’t fire in this image (iso 800) so i needed to
stop it up by 2.5ev and the rest is all natural lighting..</p>
<figure>
<a href='mairi_crop_01.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/mairi_crop_01.jpg" alt="original" width="300" height="449" style='width: 300px;' /></a>
</figure>


<figure>
<a href='mairi_crop_02.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/mairi_crop_02.jpg" alt="+2.5ev classic chrome" width="300" height="449" style='width:300px; display:inline;' /></a>
<a href='mairi_crop_03.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/mairi_crop_03.jpg" alt="saturated eyes" width="300" height="449" style='width:300px; display:inline;'/></a>
</figure>



<h2 id="use-darktable-chart-to-create-a-style"><a href="#use-darktable-chart-to-create-a-style" class="header-link-alt">use <code>darktable-chart</code> to create a style</a></h2>
<p>as a starting point, i matched a colour checker lut interpolation function to
the in-camera processing of fuji cameras. these have the names of old film and
generally do a good job at creating pleasant colours. this was done using the
<code>darktable-chart</code> utility, by matching raw colours to the jpg output (both in Lab space in the darktable pipeline).</p>
<p>here is the <a href="https://jo.dreggn.org/blog/darktable-fuji-styles.tar.xz" title="fuji-like styles">link to the fuji styles</a>, and <a href="https://www.darktable.org/usermanual/ch02s03s08.html.php" title="darktable user manual on styles">how to use them</a>.
i should be doing pat’s film emulation presets with this, too, and maybe
styles from other cameras (canon picture styles?). <code>darktable-chart</code> will
output a dtstyle file, with the mapping split into tone curve and colour
checker module. this allows us to tweak the contrast (tone curve) in isolation
from the colours (lut module).</p>
<p>these styles were created with the X100T model, and reportedly they work only so-so
with different camera models. the idea is that a Lab-space mapping should carry
over well between cameras, but apparently there may be sufficient
differences between the outputs of different cameras even after applying their colour
matrices (after all these matrices are just an approximation of the real camera
to XYZ mapping).</p>
<p>so if you’re really after maximum precision, you may have to create the styles
yourself for your camera model. here’s how:</p>
<h3 id="step-by-step-tutorial-to-match-the-in-camera-jpg-engine"><a href="#step-by-step-tutorial-to-match-the-in-camera-jpg-engine" class="header-link-alt">step-by-step tutorial to match the in-camera jpg engine</a></h3>
<p>note that this is essentially similar to <a href="https://github.com/pmjdebruijn/colormatch">pascal’s colormatch script</a>, but will result in an editable style for darktable instead of a fixed icc lut.</p>
<ul>
<li><p>need an it8 target (sorry; that requirement could maybe be lifted later, similar to what we do for <a title="fit basecurves for darktable" href="http://www.darktable.org/2013/10/about-basecurves/">basecurve fitting</a>)</p>
</li>
<li><p>shoot the chart with your camera:</p>
<ul>
<li>shoot raw + jpg</li>
<li>avoid glare, shadows, and extreme angles; potentially avoid the rims of your image altogether</li>
<li>shoot a lot of exposures, try to match L=92 for G00 (or look that up in
  your it8 description)</li>
</ul>
</li>
<li><p>develop the images in darktable:</p>
<ul>
<li>lens and vignetting correction must be applied to both of raw + jpg, or to neither</li>
<li>(i calibrated for vignetting, see <a title="calibrate vignetting for lensfun" href="http://wilson.bronger.org/lens_calibration_tutorial/#id3">lensfun</a>)</li>
<li>output colour space to Lab (set the secret option in <code>darktablerc</code>:
<code>allow_lab_output=true</code>)</li>
<li>standard input matrix and camera white balance for the raw, srgb for jpg.</li>
<li>no gamut clipping, no basecurve, no anything else.</li>
<li>maybe do <a title="perspective correction in darktable" href="http://www.darktable.org/2016/03/a-new-module-for-automatic-perspective-correction/">perspective correction</a> and crop the chart</li>
<li>export as float pfm</li>
</ul>
</li>
<li><p><code>darktable-chart</code></p>
<ul>
<li>load the pfm for the raw image and the jpg target in the second tab</li>
<li>drag the corners to make the mask match the patches in the image</li>
<li>maybe adjust the security margin using the slider in the top right, to
avoid stray colours being blurred into the patch readout</li>
<li>you need to select the gray ramp in the combo box (not auto-detected)</li>
<li>export csv</li>
</ul>
</li>
</ul>
<figure>
<a href='darktable-lut-tool-crop-01.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/darktable-lut-tool-crop-01.jpg" alt="darktable-lut-tool-crop-01" width='640' height='655' /></a>
<a href='darktable-lut-tool-crop-02.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/darktable-lut-tool-crop-02.jpg" alt="darktable-lut-tool-crop-02" width='640' height='655' /></a>
<a href='darktable-lut-tool-crop-03.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/darktable-lut-tool-crop-03.jpg" alt="darktable-lut-tool-crop-03" width='640' height='655' /></a>
<a href='darktable-lut-tool-crop-04.jpg'><img src="https://pixls.us/blog/2016/06/color-manipulation-with-the-colour-checker-lut-module/darktable-lut-tool-crop-04.jpg" alt="darktable-lut-tool-crop-04" width="640" height="655"   /></a>
</figure>

<p>edit the csv in a text editor and manually add two fixed fake patches <code>HDR00</code>
and <code>HDR01</code>:</p>
<pre><code>name;fuji classic chrome-like
description;fuji classic chrome-like colorchecker
num_gray;24
patch;L_source;a_source;b_source;L_reference;a_reference;b_reference
A01;22.22;13.18;0.61;21.65;17.48;3.62
A02;23.00;24.16;4.18;26.92;32.39;11.96
...
HDR00;100;0;0;100;0;0
HDR01;200;0;0;200;0;0
...
</code></pre><p>this is to make sure we can process high-dynamic-range images without destroying
the bright spots with the lut. this is needed since the it8 does not deliver
any information outside the reflective gamut or for very bright input. to fix
wide-gamut input, it may be necessary to enable gamut clipping in the input colour
profile module when applying the resulting style to an image with highly
saturated colours. <code>darktable-chart</code> does that automatically in the style it
writes.</p>
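<p>appending the two fake patches can be scripted instead of editing by hand. a small python sketch (the helper name and the truncated demo rows are made up for illustration; the two hdr rows are exactly the ones shown above):</p>

```python
# append the two fixed fake patches HDR00/HDR01 to a darktable-chart
# csv export. fields are separated by ";", one patch per line.
HDR_PATCHES = [
    "HDR00;100;0;0;100;0;0",
    "HDR01;200;0;0;200;0;0",
]

def append_hdr_patches(csv_text):
    """return the csv text with the hdr rows appended, skipping any
    that are already present (so the script is safe to re-run)."""
    lines = csv_text.rstrip("\n").split("\n")
    present = {line.split(";")[0] for line in lines}
    for patch in HDR_PATCHES:
        if patch.split(";")[0] not in present:
            lines.append(patch)
    return "\n".join(lines) + "\n"

# example on a truncated chart export:
demo = ("name;fuji classic chrome-like\n"
        "A01;22.22;13.18;0.61;21.65;17.48;3.62\n")
print(append_hdr_patches(demo))
```

<p>run it over the exported csv before feeding the file back to <code>darktable-chart --csv</code>.</p>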
<ul>
<li>fix up style description in csv if you want</li>
<li>run <code>darktable-chart --csv</code></li>
<li>outputs a <code>.dtstyle</code> with everything properly switched off, and two modules on: colour checker + tonecurve in Lab</li>
</ul>
<h3 id="fitting-error"><a href="#fitting-error" class="header-link-alt">fitting error</a></h3>
<p>when processing the list of colour pairs into a set of coefficients for the
thin plate spline, the program will output the approximation error, indicated
by average and maximum CIE 76 \(\Delta E\) for the input patches (the it8 in the
examples here). of course we don’t know anything about colours which aren’t
represented by the patches. the hope would be that the sampling is dense enough
for all intents and purposes (but nothing is holding us back from using a
target with even more patches).</p>
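<p>the CIE 76 \(\Delta E\) reported here is just the euclidean distance in Lab space. a minimal sketch of how the average and maximum errors are computed (the patch values below are made up):</p>

```python
import math

def delta_e_76(lab1, lab2):
    """CIE 76 colour difference: plain euclidean distance in Lab."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def fit_error(pairs):
    """average and maximum dE over (fitted, reference) Lab patch pairs."""
    errors = [delta_e_76(fit, ref) for fit, ref in pairs]
    return sum(errors) / len(errors), max(errors)

# made-up patch pairs, (L, a, b) as fitted by the spline vs. the target:
pairs = [((22.0, 13.5, 0.8), (21.65, 17.48, 3.62)),
         ((24.1, 25.0, 4.9), (26.92, 32.39, 11.96))]
avg, mx = fit_error(pairs)
```
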
<p>for the fuji styles, these errors are typically in the range of mean \(\Delta
E\approx 2\) and max \(\Delta E \approx 10\) for 24 patches and a bit less for 49.
unfortunately the error does not decrease very fast with the number of patches
(and will of course drop to zero when using all the patches of the input chart).</p>
<pre><code>provia 24:rank 28/24 avg DE 2.42189 max DE 7.57084
provia 49:rank 53/49 avg DE 1.44376 max DE 5.39751

astia-24:rank 27/24 avg DE 2.12006 max DE 10.0213
astia-49:rank 52/49 avg DE 1.34278 max DE 7.05165

velvia-24:rank 27/24 avg DE 2.87005 max DE 16.7967
velvia-49:rank 53/49 avg DE 1.62934 max DE 6.84697

classic chrome-24:rank 28/24 avg DE 1.99688 max DE 8.76036
classic chrome-49:rank 53/49 avg DE 1.13703 max DE 6.3298

mono-24:rank 27/24 avg DE 0.547846 max DE 3.42563
mono-49:rank 52/49 avg DE 0.339011 max DE 2.08548
</code></pre><h3 id="future-work"><a href="#future-work" class="header-link-alt">future work</a></h3>
<p>it is possible to match the reference values of the it8 instead of a reference
jpg output, to calibrate the camera more precisely than the colour matrix
would.</p>
<ul>
<li>there is a button for this in the <code>darktable-chart</code> tool</li>
<li>needs careful shooting, to match the brightness of the reference values closely.</li>
<li>at this point it’s not clear to me how white balance should best be handled here.</li>
<li>need reference reflectances of the it8 (wolf faust ships some for a few illuminants).</li>
</ul>
<p>another next step we would like to take with this is to match real film footage
(portra etc.). both reference and film matching will require some global exposure
calibration though.</p>
<h2 id="references"><a href="#references" class="header-link-alt">references</a></h2>
<ul>
<li><a name="ref0"></a>[0] Ken Anjyo and J. P. Lewis and Frédéric Pighin, “Scattered data interpolation for computer graphics” in Proceedings of SIGGRAPH 2014 Courses, Article No. 27, 2014. <a href="http://scribblethink.org/Courses/ScatteredInterpolation/scatteredinterpcoursenotes.pdf">pdf</a></li>
<li><a name="ref1"></a>[1] J. A. Tropp and A. C. Gilbert, “Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit”, in IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.</li>
</ul>
<h2 id="links"><a href="#links" class="header-link-alt">links</a></h2>
<ul>
<li><a title="pat david's film emulation luts" href="http://gmic.eu/film_emulation/">pat david’s film emulation luts</a></li>
<li><a title="fuji-like styles" href="darktable-fuji-styles.tar.xz">download fuji styles</a></li>
<li><a title="darktable user manual on styles" href="https://www.darktable.org/usermanual/ch02s03s08.html.php">darktable’s user manual on styles</a></li>
<li><a title="wolf faust's it8 charts" href="http://targets.coloraid.de">it8 target</a></li>
<li><a title="colormatch" href="https://github.com/pmjdebruijn/colormatch">pascal’s colormatch</a></li>
<li><a title="calibrate vignetting for lensfun" href="http://wilson.bronger.org/lens_calibration_tutorial/#id3">lensfun calibration</a></li>
<li><a title="perspective correction in darktable" href="http://www.darktable.org/2016/03/a-new-module-for-automatic-perspective-correction/">perspective correction in darktable</a></li>
<li><a title="fit basecurves for darktable" href="http://www.darktable.org/2013/10/about-basecurves/">fit basecurves for darktable</a></li>
</ul>
<script type='text/javascript' src='https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=default&ver=1.2.1'></script>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Sharing is Caring ]]></title>
            <link>https://pixls.us/blog/2016/06/sharing-is-caring/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/06/sharing-is-caring/</guid>
            <pubDate>Wed, 22 Jun 2016 15:10:14 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/06/sharing-is-caring/SHARING.jpg" /><br/>
                 <h1>Sharing is Caring</h1>  
                 <h2>Letting it all hang out</h2>   
                <p>It was always my intention to make the entire PIXLS.US website available under a permissive license.  The content is already all licensed <a href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons, By Attribution, Share-Alike</a> (unless otherwise noted).  I just hadn’t gotten around to actually posting the site source.</p>
<p>Until now (<em>ish</em>).  I say “<em>ish</em>” because I apparently released the code back in April and am just now getting around to talking about it.</p>
<p>Also, we finally have a category specifically for all those <a href="http://www.darktable.org">darktable</a> weenies on <a href="https://discuss.pixls.us">discuss</a>!</p>
<!-- more -->
<h2 id="don-t-laugh"><a href="#don-t-laugh" class="header-link-alt">Don’t Laugh</a></h2>
<p>I finally got around to pushing my code for this site up to <a href="https://github.com/pixlsus/">Github</a> on April 27 (I’m basing this off git logs because my memory is likely suspect).  It took a while, but better late than never?  I think part of the delay was a bit of minor embarrassment on my part for being so sloppy with the site code.  In fact, I’m still embarrassed - so don’t laugh at me too hard (and if you do, at least don’t point while laughing too).</p>
<figure>
<img src="https://pixls.us/blog/2016/06/sharing-is-caring/carrie-laugh-at-u.jpg" alt='Carrie White'>
<figcaption>
Brian De Palma’s <a href="http://www.imdb.com/title/tt0074285/">interpretation of my fears…</a></figcaption>
</figure>

<p>So really this post is just a reminder to anyone who was interested that this site is available on Github:</p>
<p><a href="https://github.com/pixlsus/">https://github.com/pixlsus/</a></p>
<p>In fact, we’ve got a couple of other repositories under the <a href="https://github.com/pixlsus">Github Organization PIXLS.US</a> including this website, presentation assets, lighting diagram SVGs, and more. If you’ve got a Github account or want to join in with hacking at things, by all means send me a note and we’ll get you added to the organization asap.</p>
<p><em>Note</em>: you don’t need to do anything special if you just want to grab the site code.  You can do this quickly and easily with:</p>
<p><code>git clone https://github.com/pixlsus/website.git</code></p>
<p>You actually don’t even need a Github account to clone the repo, but you will need one if you want to fork it on Github itself or send pull requests.  Feel free to simply email/post patches to us as well:</p>
<p><code>git format-patch testing --stdout &gt; your_awesome_work.patch</code></p>
<p>Being on Github means that we also now have <a href="https://github.com/pixlsus/website/issues">an issue tracker</a> to report any bugs or enhancements you’d like to see for the site.</p>
<p>So no more excuses - if you’d like to lend a hand just dive right in!  We’re all here to help! :)</p>
<h3 id="speaking-of-helping"><a href="#speaking-of-helping" class="header-link-alt">Speaking of Helping</a></h3>
<p>Speaking of which, I wanted to give a special shout-out to community member <a href="https://discuss.pixls.us/users/paperdigits/activity">@paperdigits</a> (<a href="http://silentumbrella.com/">Mica</a>), who has been active in sharing presentation materials in the <a href="https://github.com/pixlsus/Presentations">Presentations repo</a> and has been actively hacking at the website. Mica’s recommendations and pull requests are helping to make the site code cleaner and better for everyone, and I really appreciate all the help (even if I <em>am</em> scared of change).</p>
<p><em>Thank you, Mica!</em>  You <strong>rock</strong>!</p>
<h2 id="those-stinky-darktable-people"><a href="#those-stinky-darktable-people" class="header-link-alt">Those Stinky darktable People</a></h2>
<p>Yes, after member Claes <a href="https://discuss.pixls.us/t/why-no-darktable-section/1575">asked the question on discuss</a> about why we didn’t have a <a href="http://www.darktable.org">darktable</a> category on the forums, I relented and <a href="https://discuss.pixls.us/c/software/darktable">created one</a>.  Normally I want to make sure that any category is going to have active people to maintain and monitor the topics there.  I feel like having an empty forum can sometimes be detrimental to the perception of a project/community.</p>
<figure>
<img src='https://discuss.pixls.us/uploads/default/original/2X/b/b2076a2e18c4126bf25c6a852424ce3a3333b480.png' alt='darktable logo'>
</figure>

<p>In this case, any topics in the <a href="https://discuss.pixls.us/c/software/darktable">darktable category</a> will <em>also</em> show up in the more general <a href="https://discuss.pixls.us/c/software/">Software</a> category as well.  This way the visibility and interactions are still there, but with the added benefit that we can now choose to see <em>only</em> darktable posts, ignore them, or let all those <a href="https://discuss.pixls.us/t/why-no-darktable-section/1575/4">stinky users</a> do what they want in there.</p>
<p>Besides, now we can say that we’ve sufficiently appeased <a href="https://discuss.pixls.us/users/morgan_hardwood/activity">Morgan Hardwood</a>’s organizational needs…</p>
<p>So, come on by and say hello in the brand new <a href="https://discuss.pixls.us/c/software/darktable"><strong>darktable category</strong></a>!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Sharing Galore ]]></title>
            <link>https://pixls.us/blog/2016/06/sharing-galore/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/06/sharing-galore/</guid>
            <pubDate>Tue, 21 Jun 2016 18:30:29 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/06/sharing-galore/2016-06-16_oak.jpg" /><br/>
                 <h1>Sharing Galore</h1>  
                 <h2>or, Why This Community is Awesome</h2>   
<p>Community member and <a href="http://www.rawtherapee.com">RawTherapee</a> hacker Morgan Hardwood brings us a great tutorial + assets from one of his strolls near the <a href="https://en.wikipedia.org/wiki/S%C3%B6der%C3%A5sen_National_Park">Söderåsen National Park</a> (Sweden!). <a href="https://discuss.pixls.us/users/ofnuts/">Ofnuts</a> is apparently trying to get me to burn the forum down by sharing his raw file of a questionable subject.  And after some bugging from me, <a href="http://opensource.graphics/">David Tschumperlé</a> managed to find a neat solution to generating a median (pixel) blend of a large number of images without making your computer throw itself out a window.</p>
<p>So much neat content being shared for everyone to play with and learn from!  Come see what everyone is doing!</p>
<!-- more -->
<h2 id="old-oak-a-tutorial"><a href="#old-oak-a-tutorial" class="header-link-alt">Old Oak - A Tutorial</a></h2>
<p>Sometimes you’re just hanging out minding your own business and talking photography with friends and other Free Software nuts when someone comes running by and drops a great tutorial in your lap.  Just as Morgan Hardwood <a href="https://discuss.pixls.us/t/old-oak-a-tutorial/1627">did on the forums</a> a few days ago!</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/06/sharing-galore/2016-06-16_oak.jpg" alt='Old Oak by Morgan Hardwood'>
<figcaption>
<em>Old Oak by Morgan Hardwood</em> <a href='https://creativecommons.org/licenses/by-sa/4.0/' class='cc'>cbsa</a>
</figcaption>
</figure>

<p>He introduces the image and post:</p>
<blockquote>
<p>There is an old oak by the southern entrance to the <a href="https://en.wikipedia.org/wiki/S%C3%B6der%C3%A5sen_National_Park">Söderåsen National Park</a>. Rumor has it that this is the oak under which Gandalf sat as he smoked his pipe and penned the famous saga about J.R.R. Tolkien. I don’t know about that, but the valley <a href="http://lotr.wikia.com/wiki/Rhosgobel_Rabbits">rabbits</a> sure love it.</p>
</blockquote>
<p>The image itself is a treat.  I personally love images where the lighting does interesting things and there are some gorgeous things going on in this image.  The diffused light flooding in under the canopy on the right with the edge highlights from the light filtering down make this a pleasure to look at.</p>
<p>Of course, Morgan doesn’t stop there.  You should absolutely <a href="https://discuss.pixls.us/t/old-oak-a-tutorial/1627">go read his entire post</a>.  He walks through his entire thought process and workflow, from his rationale for lens selection (50mm f/2.8) all the way through his corrections and post-processing choices. To top it all off, he has graciously shared his assets for anyone to follow along! He provides the raw file, the <a href="http://50.87.144.65/~rt/w/index.php?title=Flat_Field">flat-field</a>, a shot of his color target + <a href="http://www.ludd.ltu.se/~torger/dcamprof.html">DCP</a>, and finally his RawTherapee .PP3 file with all of his settings!  Whew!</p>
<p>If you’re interested I urge you to go check out (and participate!) in his topic on the forums: <a href="https://discuss.pixls.us/t/old-oak-a-tutorial/1627"><strong>Old Oak - A Tutorial</strong></a>.</p>
<h2 id="i-will-burn-this-place-to-the-ground"><a href="#i-will-burn-this-place-to-the-ground" class="header-link-alt">I Will Burn This Place to the Ground</a></h2>
<p>Speaking of sharing material, <a href="https://discuss.pixls.us/users/ofnuts/">Ofnuts</a> has decided that he apparently wants me to burn the forums to the ground, put the ashes in a spaceship, fly the spaceship into the sun, and to detonate the entire solar system into a singularity.  Why do I say this?</p>
<figure>
<img src='https://discuss.pixls.us/uploads/default/optimized/2X/4/436f016f25eb0a0f857c2cb182bb1ae55ca623ca_1_690x620.jpg' alt='Kill It With Fire!'>
<figcaption>
Kill it with fire!
</figcaption>
</figure>

<p>Because he started a topic appropriately entitled: <a href="https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644"><em>“NSFPAOA (Not Suitable for Pat and Other Arachnophobes)”</em></a>, in which he shares his raw .CR2 file for everyone to try their hand at processing that cute little spider above. There have already been quite a few awesome interpretations from folks in the community like:</p>
<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/3'><img src='https://discuss.pixls.us/uploads/default/optimized/2X/6/6001e6f45f51c2933f7bdbdcc67e39a740bc94d4_1_690x488.jpg' alt='CarVac Version'></a>
<figcaption>
A version by CarVac
</figcaption>
</figure>

<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/4'><img src='https://discuss.pixls.us/uploads/default/optimized/2X/d/d1aa2d2f753a9f318e1ff417f97d2e94f2ba7fc4_1_690x492.jpg' alt='MLC Morgin Version'></a>
<figcaption>
By MLC/Morgin
</figcaption>
</figure>

<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/9'><img src='https://discuss.pixls.us/uploads/default/original/2X/8/80a4c80facb6d7c677d8bf9a721eb93282c6c1c0.jpg' alt='By Jonas Wagner'></a>
<figcaption>
By Jonas Wagner
</figcaption>
</figure>

<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/18'><img src='https://discuss.pixls.us/uploads/default/optimized/2X/3/3ae66bbae7d97c36c153437782225feae10b1411_1_690x565.jpg' alt='iarga'></a>
<figcaption>
By iarga
</figcaption>
</figure>

<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/19'><img src='https://discuss.pixls.us/uploads/default/optimized/2X/6/6d27b1ec6e8cb5a8acc64d41039ef3e90a5d2f7b_1_690x460.jpg' alt='by PkmX'></a>
<figcaption>
By PkmX
</figcaption>
</figure>

<figure>
<a href='https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644/22'><img src='https://discuss.pixls.us/uploads/default/optimized/2X/9/93942c9bb786532c39a4bd47e0832dbb72c5fbbd_1_690x388.jpg' alt='by Kees Guequierre'></a>
<figcaption>
By Kees Guequierre
</figcaption>
</figure>

<p>Of course, I had a chance to try processing it as well.  Here’s what I ended up with:</p>
<figure>
<img src="https://pixls.us/blog/2016/06/sharing-galore/640px-Bonfire_Flames.JPG" alt='Flames'></figure>

<p>Ahhhh, just writing this post is a giant bag of <strong>NOPE</strong><sup>*</sup>. If you’d like to join in on the fun(?) and share your processing as well - go <a href="https://discuss.pixls.us/t/nsfpaoa-not-suitable-for-pat-and-other-arachnophobes/1644">check out the topic</a>! </p>
<p>Now let’s move on to something more cute and fuzzy, like an ALOT…</p>
<p><small><sup>*</sup> I kid, I’m not really an arachnophobe (<em>within reason</em>), but I can totally see why someone would be.</small></p>
<h2 id="median-blending-alot-of-images-with-g-mic"><a href="#median-blending-alot-of-images-with-g-mic" class="header-link-alt">Median Blending ALOT of Images with G’MIC</a></h2>
<figure>
<a href='http://hyperboleandahalf.blogspot.com/2010/04/alot-is-better-than-you-at-everything.html'><img src="https://pixls.us/blog/2016/06/sharing-galore/ALOT.png" alt='Hyperbole and a Half ALOT'></a>
<figcaption>
The ALOT. Borrowed from <a href='http://hyperboleandahalf.blogspot.com/2010/04/alot-is-better-than-you-at-everything.html'>Allie Brosh</a> and here because I really wanted an excuse to include it.
</figcaption>
</figure>

<p>I count myself lucky to have so many smart friends that I can lean on to figure out or help me do things (more on that in the next post).  One of those friends is <a href="http://gmic.eu">G’MIC</a> creator and community member <a href="http://opensource.graphics">David Tschumperlé</a>.</p>
<p>A few years back he helped me with some artwork I was generating with <a href="http://www.imagemagick.org">imagemagick</a> at the time.  I was averaging images together to see what an amalgamation would look like.  For instance, here is what all of the <a href="http://www.si.com/sports-illustrated/photo/2016/02/13/every-cover-si-swimsuit-edition">Sports Illustrated swimsuit edition</a> <small>(NSFW)</small> covers (through 2000) look like, all at once:</p>
<p><a href="https://www.flickr.com/photos/patdavid/9018489869/in/album-72157630890087884/" title="Sport Illustrated Swimsuit Covers Through 2000"><img src="https://c6.staticflickr.com/4/3767/9018489869_77875a6cc1_c.jpg" width="605" height="800" alt="Sport Illustrated Swimsuit Covers Through 2000"></a></p>
<p>A natural progression of this idea was to consider doing a median blend vs. mean.  The problem is that a mean average is very easy and fast to calculate as you advance through the image stack, but the median is not.  This is relevant because I began to look at these for videos (in particular music videos), where the image stack was 5,000+ images for a video easily (that is ALOT of frames!).</p>
<p>It’s relatively easy to generate a running average for a series of numbers, but generating the median value requires that the entire stack of numbers be loaded and sorted.  This makes it prohibitive to do on a huge number of images, particularly at HD resolutions.</p>
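The asymmetry can be sketched with numpy: a running mean needs just one accumulator regardless of stack depth, while an exact per-pixel median needs the whole stack, so splitting the frame into regions (row strips, here) and revisiting the frames per region is what makes it tractable. This is only an illustration of the idea, not David's actual G'MIC implementation (the function names and strip size are made up):

```python
import numpy as np

def stream_mean(frames):
    """Running mean: one accumulator, O(1) memory in the stack depth."""
    acc, n = None, 0
    for f in frames:
        acc = f.astype(np.float64) if acc is None else acc + f
        n += 1
    return acc / n

def strip_median(frames, rows_per_strip=64):
    """Exact per-pixel median, computed one horizontal strip at a time.
    This toy version keeps all frames in a list; for a real video you
    would re-decode the frames for each strip, so that only one strip
    of the stack has to exist in memory at a time."""
    frames = list(frames)
    height = frames[0].shape[0]
    out = np.empty(frames[0].shape, dtype=np.float64)
    for y in range(0, height, rows_per_strip):
        stack = np.stack([f[y:y + rows_per_strip] for f in frames])
        out[y:y + rows_per_strip] = np.median(stack, axis=0)
    return out
```
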
<p>So it’s awesome that, yet again, David has found a solution to the problem!  He explains it in greater detail on his topic:</p>
<p><a href="https://discuss.pixls.us/t/a-guide-about-computing-the-temporal-average-median-of-video-frames-with-gmic/1566">A guide about computing the temporal average/median of video frames with G’MIC</a></p>
<p>He basically chops up the image frame into regions, then computes the pixel-median value for those regions.  Here’s an example of his result:</p>
<figure>
<img src='https://discuss.pixls.us/uploads/default/original/2X/e/e5116c80eecb0554b5616f4b73443c40618d198c.jpg' alt='P!nk Try Mean/Median'>
<figcaption>
Mean/Median samples from P!nk - Try music video.
</figcaption>
</figure>

<p>Now I can start utilizing median blends more often in my experiments, and I’m quite sure folks will find other interesting uses for this type of blending!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Display Color Profiling on Linux ]]></title>
            <link>https://pixls.us/articles/display-color-profiling-on-linux/</link>
            <guid isPermaLink="true">https://pixls.us/articles/display-color-profiling-on-linux/</guid>
            <pubDate>Thu, 09 Jun 2016 22:50:08 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/display-color-profiling-on-linux/pixels.jpg" /><br/>
                 <h1>Display Color Profiling on Linux</h1>  
                 <h2>A work in progress</h2>   
                <p><small style='color:#aaa;'><em>This article by <a href="https://encrypted.pcode.nl/">Pascal de Bruijn</a> was originally <a href="https://encrypted.pcode.nl/blog/2013/11/24/display-color-profiling-on-linux/">published on his site</a> and is reproduced here with permission. &nbsp;&mdash;Pat</em></small></p>
<hr>
<p><strong>Attention:</strong> This article is a work in progress, based on my own practical experience up until the time of writing, so you may want to check back periodically to see if it has been updated.</p>
<p>This article outlines how you can calibrate and profile your display on Linux, assuming you have the right <a href="http://argyllcms.com/doc/instruments.html">equipment</a> (either a colorimeter like for example the i1 Display Pro or a spectrophotometer like for example the ColorMunki Photo). For a general overview of what color management is and details about some of its parlance you may want to read <a href="https://encrypted.pcode.nl/blog/2012/01/29/color-management-on-linux/">this</a> before continuing.</p>
<!-- more -->
<h2 id="a-fresh-start">A Fresh Start<a href="#a-fresh-start" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>First you may want to check if any kind of color management is already active on your machine; if you see the following then you’re fine:</p>
<pre><code>$ xprop -display :0.0 -len 14 -root _ICC_PROFILE
_ICC_PROFILE: no such atom on any window.
</code></pre><p>However if you see something like this, then there is already another color management system active:</p>
<pre><code>$ xprop -display :0.0 -len 14 -root _ICC_PROFILE
_ICC_PROFILE(CARDINAL) = 0, 0, 72, 212, 108, 99, 109, 115, 2, 32, 0, 0, 109, 110
</code></pre><p>If this is the case you need to figure out what and why… For GNOME/Unity based desktops this is fairly typical, since they extract a simple profile from the display hardware itself via <a href="https://encrypted.pcode.nl/blog/2013/04/14/display-profiles-generated-from-edid/">EDID</a> and use that by default. I’m guessing KDE users may want to look into <a href="http://dantti.wordpress.com/2013/05/01/colord-kde-0-3-0-released/">this</a> before proceeding. I can’t give much advice about other desktop environments though, as I’m not particularly familiar with them. That said, I tested most of the examples in this article with XFCE 4.10 on <a href="http://xubuntu.org/">Xubuntu</a> 14.04 “Trusty”.</p>
<h2 id="display-types">Display Types<a href="#display-types" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Modern flat panel displays are composed of two major components for the purposes of our discussion: the backlight and the panel itself. There are various types of backlights: White <a href="https://en.wikipedia.org/wiki/Light-emitting_diode">LED</a> (most common nowadays), <a href="https://en.wikipedia.org/wiki/Cold_cathode">CCFL</a> (most common a few years ago), RGB LED, and Wide Gamut CCFL, the latter two of which you’d typically find on higher-end displays. The backlight primarily defines a display’s <a href="https://en.wikipedia.org/wiki/Gamut">gamut</a> and maximum brightness. The panel, on the other hand, primarily defines the maximum contrast and acceptable viewing angles. The most common types are variants of <a href="https://en.wikipedia.org/wiki/Liquid-crystal_display#In-plane_switching_.28IPS.29">IPS</a> (usually good contrast and viewing angles) and <a href="https://en.wikipedia.org/wiki/Liquid-crystal_display#Twisted_nematic_.28TN.29">TN</a> (typically mediocre contrast and poor viewing angles).</p>
<h2 id="display-setup">Display Setup<a href="#display-setup" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are two main cases: laptop displays, which usually allow for little configuration, and regular desktop displays. For regular displays there are a few steps to prepare the display to be profiled. First, reset your display to its factory defaults, and leave the contrast at its default value. If your display has a feature called dynamic contrast you need to disable it; this is <em>critical</em>, and if you’re unlucky enough to have a display for which it cannot be disabled, then there is no use in proceeding any further. Then set the color temperature setting to custom and set the R/G/B values to equal values (often 100/100/100 or 255/255/255). As for the brightness, set it to a level which is comfortable for prolonged viewing. Typically this means reducing the brightness from its default setting, often to somewhere around 25&ndash;50 on a 0&ndash;100 scale. Laptops are a different story: you’ll often be fighting different lighting conditions, so you may want to consider profiling your laptop at its full brightness. We’ll get back to the brightness setting later on.</p>
<p>Before continuing any further, let the display settle for at least half an hour (as its color rendition may change while the backlight is warming up) and make sure the display doesn’t go into power saving mode during this time.</p>
<p>Another point worth considering is cleaning the display before starting the calibration and profiling process. Do keep in mind that displays often have relatively fragile coatings, which may be damaged by traditional cleaning products or easily scratched by regular cleaning cloths. There are specialist products <a href="https://www.klearscreen.com/iKlear.aspx">available</a> for safely cleaning computer displays.</p>
<p>You may also want to consider dimming the ambient lighting while running the calibration and profiling procedure to prevent (potential) glare from being an issue.</p>
<h2 id="software">Software<a href="#software" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>If you’re in a GNOME or Unity environment it’s highly recommended to use <a href="https://projects.gnome.org/gnome-color-manager/">GNOME Color Manager</a> (with <a href="http://www.freedesktop.org/software/colord/">colord</a> and <a href="http://argyllcms.com/">argyll</a>). If you have recent versions (3.8.3, 1.0.5 and 1.6.2 respectively), you can profile and set up your display entirely graphically via the Color applet in System Settings. It’s fully wizard driven and couldn’t be much easier in most cases. This is what I personally use and recommend. The rest of this article focuses on the case where you are not using it.</p>
<p>Xubuntu users in particular can get experimental packages for the latest <a href="http://argyllcms.com/">argyll</a> and optionally <a href="https://github.com/agalakhov/xiccd">xiccd</a> from my <a href="https://launchpad.net/~pmjdebruijn/+archive/xiccd-testing">xiccd-testing</a> PPAs. If you’re using a different distribution you’ll need to source help from its respective community.</p>
<h2 id="report-on-the-uncalibrated-display">Report On The Uncalibrated Display<a href="#report-on-the-uncalibrated-display" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>To get an idea of the display’s uncalibrated capabilities we use argyll’s <a href="http://www.argyllcms.com/doc/dispcal.html">dispcal</a>:</p>
<pre><code>$ dispcal -H -y l -R
Uncalibrated response:
Black level = 0.4179 cd/m^2
50%   level = 42.93 cd/m^2
White level = 189.08 cd/m^2
Aprox. gamma = 2.14
Contrast ratio = 452:1
White     Visual Daylight Temperature = 7465K, DE 2K to locus =  3.2
</code></pre><p>Here we see the display has a fairly high uncalibrated native whitepoint at almost 7500<a href="https://en.wikipedia.org/wiki/Color_temperature#Categorizing_different_lighting">K</a>, which means the display is bluer than it should be. When we’re done you’ll notice the display becoming more yellow. If your display’s uncalibrated native whitepoint is below <a href="https://en.wikipedia.org/wiki/Illuminant_D65">6500K</a>, you’ll notice it becoming more blue when loading the profile.</p>
<p>Another point to note is the fairly high white level (brightness) of almost 190 <a href="https://en.wikipedia.org/wiki/Candela_per_square_metre">cd/m<sup>2</sup></a>. It’s fairly typical to target 120 <a href="https://en.wikipedia.org/wiki/Candela_per_square_metre">cd/m<sup>2</sup></a> for the final calibration, keeping in mind that we’ll lose 10 <a href="https://en.wikipedia.org/wiki/Candela_per_square_metre">cd/m<sup>2</sup></a> or so to the calibration itself. So if your display reports a brightness significantly higher than 130 <a href="https://en.wikipedia.org/wiki/Candela_per_square_metre">cd/m<sup>2</sup></a>, you may want to consider turning down the brightness another notch.</p>
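<p>If the display’s own brightness control can’t get you close enough, dispcal can also aim for a specific white level during calibration via its -b option. A sketch only: the 120 cd/m<sup>2</sup> target below is an illustrative value, not a measurement from this article, and this requires the measurement instrument attached:</p>
<pre><code>$ dispcal -v -m -H -y l -q l -t 6500 -b 120 -g 2.2 asus_eee_pc_1215p
</code></pre>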
<h2 id="calibrating-and-profiling-your-display">Calibrating And Profiling Your Display<a href="#calibrating-and-profiling-your-display" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>First we’ll use argyll’s <a href="http://argyllcms.com/doc/dispcal.html">dispcal</a> to measure and adjust (calibrate) the display, compensating for the display’s <a href="https://en.wikipedia.org/wiki/White_point">whitepoint</a> (targeting <a href="https://en.wikipedia.org/wiki/CIE_Standard_Illuminant_D65">6500K</a>) and <a href="https://en.wikipedia.org/wiki/Gamma_correction">gamma</a> (targeting the industry standard 2.2; more info on gamma <a href="http://argyllcms.com/doc/gamma.html">here</a>):</p>
<pre><code>$ dispcal -v -m -H -y l -q l -t 6500 -g 2.2 asus_eee_pc_1215p
</code></pre><p>Next we’ll use argyll’s <a href="http://argyllcms.com/doc/targen.html">targen</a> to generate measurement patches to determine its <a href="https://en.wikipedia.org/wiki/Gamut">gamut</a>:</p>
<pre><code>$ targen -v -d 3 -G -f 128 asus_eee_pc_1215p
</code></pre><p>Then we’ll use argyll’s <a href="http://argyllcms.com/doc/dispread.html">dispread</a> to apply the calibration file generated by <a href="http://argyllcms.com/doc/dispcal.html">dispcal</a>, and measure (profile) the display’s gamut using the patches generated by <a href="http://argyllcms.com/doc/targen.html">targen</a>:</p>
<pre><code>$ dispread -v -N -H -y l -k asus_eee_pc_1215p.cal asus_eee_pc_1215p
</code></pre><p>Finally we’ll use argyll’s <a href="http://argyllcms.com/doc/colprof.html">colprof</a> to generate a standardized ICC (version 2) color profile:</p>
<pre><code>$ colprof -v -D &quot;Asus Eee PC 1215P&quot; -C &quot;Copyright 2013 Pascal de Bruijn&quot; \
          -q m -a G -n c asus_eee_pc_1215p
Profile check complete, peak err = 9.771535, avg err = 3.383640, RMS = 4.094142
</code></pre><p>The parameters used to generate the ICC color profile are fairly conservative and should be robust; they will likely provide good results for most use-cases. If you’re after better accuracy you may want to try replacing -a G with -a S or even -a s, but I very strongly recommend starting out with -a G.</p>
<p>You can inspect the contents of a standardized ICC (version 2 only) color profile using argyll’s <a href="http://argyllcms.com/doc/iccdump.html">iccdump</a>:</p>
<pre><code>$ iccdump -v 3 asus_eee_pc_1215p.icc
</code></pre><p>To try the color profile we just generated we can quickly load it using argyll’s <a href="http://argyllcms.com/doc/dispwin.html">dispwin</a>:</p>
<pre><code>$ dispwin -I asus_eee_pc_1215p.icc
</code></pre><p>Now you’ll likely see a color shift toward the yellow side. On some (possibly aged) displays you may notice it shifting toward the blue side instead.</p>
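<p>To compare against the uncalibrated state, dispwin can also clear the video card’s calibration curves or reload the installed profile’s calibration; these are standard dispwin options and need no measurement hardware:</p>
<pre><code>$ dispwin -c   # clear the video LUT calibration (back to linear)
$ dispwin -L   # reload the installed profile calibration
</code></pre>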
<p>If you’ve used a colorimeter (as opposed to a spectrophotometer) to profile your display and if you feel the profile might be off, you may want to consider reading <a href="http://argyllcms.com/doc/WideGamutColmters.html">this</a> and <a href="http://argyllcms.com/doc/CrushedDisplyBlacks.html">this</a>.</p>
<h2 id="report-on-the-calibrated-display">Report On The Calibrated Display<a href="#report-on-the-calibrated-display" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Next we can use argyll’s <a href="http://www.argyllcms.com/doc/dispcal.html">dispcal</a> again to check our newly calibrated display:</p>
<pre><code>$ dispcal -H -y l -r
Current calibration response:
Black level = 0.3432 cd/m^2
50%   level = 40.44 cd/m^2
White level = 179.63 cd/m^2
Aprox. gamma = 2.15
Contrast ratio = 523:1
White     Visual Daylight Temperature = 6420K, DE 2K to locus =  1.9
</code></pre><p>Here we see the calibrated display’s whitepoint is nicely around 6500K, as it should be.</p>
<h2 id="loading-the-profile-in-your-user-session">Loading The Profile In Your User Session<a href="#loading-the-profile-in-your-user-session" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>If your desktop environment is XDG <a href="http://standards.freedesktop.org/autostart-spec/autostart-spec-latest.html">autostart</a> compliant, you may want to consider creating a .desktop file which will load the ICC color profile at session login for all users (note that the example below references the profile from /usr/share/color/icc, so copy it there first):</p>
<pre><code>$ cat /etc/xdg/autostart/dispwin.desktop
[Desktop Entry]
Encoding=UTF-8
Name=Argyll dispwin load color profile
Exec=dispwin -I /usr/share/color/icc/asus_eee_pc_1215p.icc
Terminal=false
Type=Application
Categories=
</code></pre><p>Alternatively you could use <a href="http://www.freedesktop.org/software/colord/">colord</a> and <a href="https://github.com/agalakhov/xiccd">xiccd</a> for a more sophisticated setup. If you do, make sure you have recent versions of both, particularly of <a href="https://github.com/agalakhov/xiccd">xiccd</a>, as it’s still a fairly young project.</p>
<p>First we’ll need to start <a href="https://github.com/agalakhov/xiccd">xiccd</a> (in the background), which detects your connected displays and adds them to <a href="http://www.freedesktop.org/software/colord/">colord</a>‘s device inventory:</p>
<pre><code>$ nohup xiccd &amp;
</code></pre><p>Then we can query <a href="http://www.freedesktop.org/software/colord/">colord</a> for its list of available devices:</p>
<pre><code>$ colormgr get-devices
</code></pre><p>Next we need to query <a href="http://www.freedesktop.org/software/colord/">colord</a> for its list of available profiles (or alternatively search by a profile’s full filename):</p>
<pre><code>$ colormgr get-profiles
$ colormgr find-profile-by-filename /usr/share/color/icc/asus_eee_pc_1215p.icc
</code></pre><p>Next we’ll need to assign our profile’s object path to our display’s object path:</p>
<pre><code>$ colormgr device-add-profile \
   /org/freedesktop/ColorManager/devices/xrandr_HSD121PHW1_70842_pmjdebruijn_1000 \
   /org/freedesktop/ColorManager/profiles/icc_e7fc40cb41ddd25c8d79f1c8d453ec3f
</code></pre><p>You should notice your display’s color shift within a second or so (<a href="https://github.com/agalakhov/xiccd">xiccd</a> applies it asynchronously), assuming you haven’t already applied it via <a href="http://www.argyllcms.com/doc/dispwin.html">dispwin</a> earlier (in which case you’ll notice no change).</p>
<p>If you suspect <a href="https://github.com/agalakhov/xiccd">xiccd</a> isn’t properly working, you may be able to debug the issue by stopping all <a href="https://github.com/agalakhov/xiccd">xiccd</a> background processes, and starting it in debug mode in the foreground:</p>
<pre><code>$ killall xiccd
$ G_MESSAGES_DEBUG=all xiccd
</code></pre><p>In <a href="https://github.com/agalakhov/xiccd">xiccd</a>‘s case you’ll also need to create a .desktop file to load <a href="https://github.com/agalakhov/xiccd">xiccd</a> at session login for all users:</p>
<pre><code>$ cat /etc/xdg/autostart/xiccd.desktop
[Desktop Entry]
Encoding=UTF-8
Name=xiccd
GenericName=X11 ICC Daemon
Comment=Applies color management profiles to your session
Exec=xiccd
Terminal=false
Type=Application
Categories=
OnlyShowIn=XFCE;
</code></pre><p>You’ll note that <a href="https://github.com/agalakhov/xiccd">xiccd</a> does not need any parameters, since it queries <a href="http://www.freedesktop.org/software/colord/">colord</a>‘s database for the profile to load.</p>
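<p>To double-check which profile colord will hand out for a given display, colormgr can query the device’s default profile (using the device object path obtained from get-devices earlier; the path shown is the one from the example above):</p>
<pre><code>$ colormgr device-get-default-profile \
   /org/freedesktop/ColorManager/devices/xrandr_HSD121PHW1_70842_pmjdebruijn_1000
</code></pre>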
<p>If your desktop environment is not XDG autostart compliant, you’ll need to ask its developers or community how to start custom commands (<a href="http://www.argyllcms.com/doc/dispwin.html">dispwin</a> or <a href="https://github.com/agalakhov/xiccd">xiccd</a> respectively) at session login.</p>
<h2 id="dual-screen-caveats">Dual Screen Caveats<a href="#dual-screen-caveats" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Currently having a dual screen color managed setup is complicated at best. Most programs use the <a href="http://www.burtonini.com/computing/x-icc-profiles-spec-0.1.html">_ICC_PROFILE</a> atom to get the system display profile, and there’s only one such atom. To resolve this issue <a href="http://www.oyranos.org/wiki/index.php?title=ICC_Profiles_in_X_Specification_0.4">new atoms</a> were defined to support multiple displays, but not all applications actually honor them. So with a dual screen setup there is always a risk of applications applying the profile for your first display to your second display or vice versa.</p>
<p>So practically speaking, if you need a <em>reliable</em> color managed setup, you should probably avoid dual screen setups altogether.</p>
<p>That said, most of argyll’s commands support a -d parameter for selecting which display to work with during calibration and profiling, but I have no personal experience with it whatsoever, since I purposefully don’t have a dual screen setup.</p>
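<p>For what it’s worth, a sketch of what a second-display run might look like (untested by me, as noted above; second_display is a placeholder profile name):</p>
<pre><code>$ dispcal -d 2 -v -m -H -y l -q l -t 6500 -g 2.2 second_display
# ... then targen/dispread/colprof as before, adding -d 2 to dispread ...
$ dispwin -d 2 -I second_display.icc
</code></pre>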
<h2 id="application-support-caveats">Application Support Caveats<a href="#application-support-caveats" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>As my other <a href="https://encrypted.pcode.nl/blog/2012/01/29/color-management-on-linux/">article</a> explains, display color profiles consist of two parts. One part (whitepoint &amp; gamma correction) is applied via X11 and thus benefits all applications; the second part (gamut correction) needs to be applied by the application itself. Application support for both input and display color management varies wildly, and many consumer grade applications have no color management awareness whatsoever.</p>
<p>Firefox can do color management, but it’s only half-enabled by default; read <a href="https://encrypted.pcode.nl/blog/2013/12/17/firefox-and-color-management/">this</a> to configure Firefox properly.</p>
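<p>For quick reference, the relevant knobs live in Firefox’s about:config; the preference names below are Firefox’s actual gfx color management settings, but see the linked article for the reasoning behind the values:</p>
<pre><code>gfx.color_management.mode = 1        (color manage all content, treating untagged content as sRGB)
gfx.color_management.enablev4 = true (enable ICC v4 profile support)
</code></pre>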
<p>GIMP, for example, has display color management disabled by default; you need to enable it via its preferences.</p>
<p>Eye of GNOME has display color management enabled by default, but it has nasty corner-case behaviors; for example, when a file has no metadata, no color management is done at all (instead of assuming sRGB input). Some of these issues seem to have been resolved on Ubuntu Trusty (<a href="https://bugs.launchpad.net/ubuntu/+source/eog/+bug/272584">LP #272584</a>).</p>
<p>Darktable has display color management enabled by default and is one of the few applications which directly support <a href="http://www.freedesktop.org/software/colord/">colord</a> and the display specific atoms as well as the generic _ICC_PROFILE atom as fallback. There are however a few caveats for darktable as well, documented <a href="http://www.darktable.org/2013/05/display-color-management-in-darktable/">here</a>.</p>
<hr>
<p><small style='color:#aaa;'><em>This article by <a href="https://encrypted.pcode.nl/">Pascal de Bruijn</a> was originally <a href="https://encrypted.pcode.nl/blog/2013/11/24/display-color-profiling-on-linux/">published on his site</a> and is reproduced here with permission.</em></small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ New Rapid Photo Downloader ]]></title>
            <link>https://pixls.us/blog/2016/05/new-rapid-photo-downloader/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/05/new-rapid-photo-downloader/</guid>
            <pubDate>Sun, 22 May 2016 00:00:00 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/05/new-rapid-photo-downloader/about.jpg" /><br/>
                 <h1>New Rapid Photo Downloader</h1>  
                 <h2>Damon Lynch brings us a new release!</h2>   
<p>Community member <a href="http://www.damonlynch.net">Damon Lynch</a> happens to make an awesome program called <a href="http://www.damonlynch.net/rapid/">Rapid Photo Downloader</a> in his “spare” time.  In fact you may have heard it mentioned as part of <a href="http://www.rileybrandt.com/">Riley Brandt’s</a> <a href="http://www.rileybrandt.com/lessons/"><em>“The Open Source Photography Course”</em></a><sup>*</sup>.  It is a program that specializes in downloading photos and videos from media as efficiently as possible, while extending the process with extra functionality.</p>
<p><small><sup>*</sup> Riley donates a portion of the proceeds from his course to various projects, and Rapid Photo Downloader is one of them!</small></p>
<!-- more -->
<h2 id="work-smart-not-dumb"><a href="#work-smart-not-dumb" class="header-link-alt">Work Smart, not Dumb</a></h2>
<p>The main features of Rapid Photo Downloader are listed on the website:</p>
<ol>
<li>Generates meaningful, user configurable <a href="http://www.damonlynch.net/rapid/features.html#generate">file and folder names</a></li>
<li>Downloads photos and videos from multiple devices <a href="http://www.damonlynch.net/rapid/features.html#download">simultaneously</a></li>
<li><a href="http://www.damonlynch.net/rapid/features.html#backup">Backs up</a> photos and videos as they are downloaded</li>
<li>Is carefully optimized to download and back up at <a href="http://www.damonlynch.net/rapid/features.html#download">high speed</a></li>
<li><a href="http://www.damonlynch.net/rapid/features.html#easy">Easy</a> to configure and use</li>
<li><a href="http://www.damonlynch.net/rapid/features.html#gnomekde">Runs</a> under Unity, Gnome, KDE and other Linux desktops</li>
<li>Available in <a href="http://www.damonlynch.net/rapid/features.html#languages">thirty</a> languages</li>
<li>Program configuration and use is <a href="http://www.damonlynch.net/rapid/documentation">fully documented</a></li>
</ol>
<p>Damon <a href="https://discuss.pixls.us/t/rapid-photo-downloader-0-9-0a1-is-now-released/1416">announced his 0.9.0a1 release on the forums</a>, and Riley Brandt even recorded a short overview of the new features:</p>
<div class="fluid-vid">
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/7D0Fz6H3R34?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p>(Shortly after announcing the 0.9.0a1 release, he <a href="https://discuss.pixls.us/t/rapid-photo-downloader-0-9-0a2-is-released/1424">followed it up with a 0.9.0a2 release</a> with some bug fixes).</p>
<p>Some of the neat new features include being able to preview the download subfolder and storage space of devices <em>before</em> you download:</p>
<figure>
<img src="https://pixls.us/blog/2016/05/new-rapid-photo-downloader/mainwindow.png" alt='Rapid Photo Downloader Main Window'>
</figure>

<p>Also being able to download from multiple devices in parallel, including from all cameras supported by <a href="http://gphoto.sourceforge.net/">gphoto2</a>:</p>
<figure>
<img src="https://pixls.us/blog/2016/05/new-rapid-photo-downloader/downloading.png" alt='Rapid Photo Downloader Downloading'>
</figure>

<p>There is much, much more in this release.  Damon goes into much further detail on <a href="https://discuss.pixls.us/t/rapid-photo-downloader-0-9-0a1-is-now-released/1416">his post in the forum</a>, copied here:</p>
<hr>
<p>How about its <strong>Timeline</strong>, which groups photos and videos based on how much time elapsed between consecutive shots. Use it to identify photos and videos taken at different periods in a single day or
over consecutive days.</p>
<p>You can adjust the time elapsed between consecutive shots that is used to build the Timeline to match your shooting sessions.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/new-rapid-photo-downloader/timeline.png" alt='Rapid Photo Downloader timeline'>
</figure>

<p>How about a modern look?</p>
<figure>
<img src="https://pixls.us/blog/2016/05/new-rapid-photo-downloader/about.png" alt='Rapid Photo Downloader about'>
</figure>

<p>Download instructions: <a href="http://damonlynch.net/rapid/download.html">http://damonlynch.net/rapid/download.html</a></p>
<p>For those who’ve used the older version, I’m copying and pasting from the ChangeLog, which covers most but not all changes:</p>
<ul>
<li><p>New features compared to the previous release, version 0.4.11:</p>
<ul>
<li><p>Every aspect of the user interface has been revised and modernized.</p>
</li>
<li><p>Files can be downloaded from all cameras supported by gPhoto2,
including smartphones. Unfortunately the previous version could download
from only some cameras.</p>
</li>
<li><p>Files that have already been downloaded are remembered. You can still select
previously downloaded files to download again, but they are unchecked by
default, and their thumbnails are dimmed so you can differentiate them
from files that are yet to be downloaded.</p>
</li>
<li><p>The thumbnails for previously downloaded files can be hidden.</p>
</li>
<li><p>Unique to Rapid Photo Downloader is its Timeline, which groups photos and
videos based on how much time elapsed between consecutive shots. Use it
to identify photos and videos taken at different periods in a single day
or over consecutive days. A slider adjusts the time elapsed between
consecutive shots that is used to build the Timeline. Time periods can be
selected to filter which thumbnails are displayed.</p>
</li>
<li><p>Thumbnails are bigger, and different file types are easier to
distinguish.</p>
</li>
<li><p>Thumbnails can be sorted using a variety of criteria, including by device
and file type.</p>
</li>
<li><p>Destination folders are previewed before a download starts, showing which
subfolders photos and videos will be downloaded to. Newly created folders
have their names italicized.</p>
</li>
<li><p>The storage space used by photos, videos, and other files on the devices
being downloaded from is displayed for each device. The projected storage
space on the computer to be used by photos and videos about to be
downloaded is also displayed.</p>
</li>
<li><p>Downloading is disabled when the projected storage space required is more
than the capacity of the download destination.</p>
</li>
<li><p>When downloading from more than one device, thumbnails for a particular
device are briefly highlighted when the mouse is moved over the device.</p>
</li>
<li><p>The order in which thumbnails are generated prioritizes representative
samples, based on time, which is useful for those who download very large
numbers of files at a time.</p>
</li>
<li><p>Thumbnails are generated asynchronously and in parallel, using a load
balancer to assign work to processes utilizing up to 4 CPU cores.
Thumbnail generation is faster than the 0.4 series of program
releases, especially when reading from fast memory cards or SSDs.
(Unfortunately generating thumbnails for a smartphone’s photos is painfully
slow. Unlike photos produced by cameras, smartphone photos do not contain
embedded preview images, which means the entire photo must be downloaded
and cached for its thumbnail to be generated. Although Rapid Photo Downloader
does this for you, nothing can be done to speed it up).</p>
</li>
<li><p>Thumbnails generated when a device is scanned are cached, making thumbnail
generation quicker on subsequent scans.</p>
</li>
<li><p>Libraw is used to render RAW images from which a preview cannot be extracted,
which is the case with Android DNG files, for instance.</p>
</li>
<li><p><a href="https://www.freedesktop.org/wiki/">Freedesktop.org</a> thumbnails for RAW and TIFF photos are generated once they
have been downloaded, which means they will have thumbnails in programs like
Gnome Files, Nemo, Caja, Thunar, PCManFM and Dolphin. If the path files are being
downloaded to contains symbolic links, a thumbnail will be created for the
path with and without the links. While generating these thumbnails does slow the
download process a little, it’s a worthwhile tradeoff because Linux desktops
typically do not generate thumbnails for RAW images, and thumbnails only for
small TIFFs.</p>
</li>
<li><p>The program can now handle hundreds of thousands of files at a time.</p>
</li>
<li><p>Tooltips display information about the file including name, modification
time, shot taken time, and file size.</p>
</li>
<li><p>Right click on thumbnails to open the file in a file browser or copy the
path.</p>
</li>
<li><p>When downloading from a camera with dual memory cards, an emblem beneath the
thumbnail indicates which memory cards the photo or video is on.</p>
</li>
<li><p>Audio files that accompany photos on professional cameras like the Canon
EOS-1D series of cameras are now also downloaded. XMP files associated with
a photo or video on any device are also downloaded.</p>
</li>
<li><p>Comprehensive log files are generated that allow easier diagnosis of
program problems in bug reports. Messages optionally logged to a
terminal window are displayed in color.</p>
</li>
<li><p>When running under <a href="http://www.ubuntu.com/">Ubuntu</a>‘s Unity desktop, a progress bar and count of files
available for download is displayed on the program’s launcher.</p>
</li>
<li><p>Status bar messages have been significantly revamped.</p>
</li>
<li><p>Determining a video’s correct creation date and time has been improved, using a
combination of the tools <a href="https://mediaarea.net/en/MediaInfo">MediaInfo</a> and <a href="http://www.sno.phy.queensu.ca/~phil/exiftool/">ExifTool</a>. Getting the right date and time
is trickier than it might appear. Depending on the video file and the camera that
produced it, neither MediaInfo nor ExifTool always give the correct result.
Moreover some cameras always use the UTC time zone when recording the creation
date and time in the video’s metadata, whereas other cameras use the time zone
the video was created in, while others ignore time zones altogether.</p>
</li>
<li><p>The time remaining until a download is complete (which is shown in the status
bar) is more stable and more accurate. The algorithm is modelled on that
used by Mozilla Firefox.</p>
</li>
<li><p>The installer has been totally rewritten to take advantage of <a href="https://www.python.org/">Python</a>‘s
tool pip, which installs Python packages. Rapid Photo Downloader can now
be easily installed and uninstalled. On <a href="http://www.ubuntu.com/">Ubuntu</a>, <a href="https://www.debian.org/">Debian</a> and <a href="https://getfedora.org/">Fedora</a>-like
Linux distributions, the installation of all dependencies is automated.
On other Linux distributions, dependency installation is partially
automated.</p>
</li>
<li><p>When choosing a Job Code, whether to remember the choice or not can be
specified.</p>
</li>
</ul>
</li>
<li><p>Removed feature:</p>
<ul>
<li>Rotate Jpeg images - to apply lossless rotation, this feature requires the
program jpegtran. Some users reported jpegtran corrupted their jpegs’ 
metadata – which is bad under any circumstances, but terrible when applied
to the only copy of a file. To preserve file integrity under all circumstances,
the rotate JPEG option has therefore been removed.</li>
</ul>
</li>
<li><p>Under the hood, the code now uses:</p>
<ul>
<li><p>PyQt 5.4 +</p>
</li>
<li><p>gPhoto2 to download from cameras</p>
</li>
<li><p>Python 3.4 +</p>
</li>
<li><p>ZeroMQ for interprocess communication</p>
</li>
<li><p>GExiv2 for photo metadata</p>
</li>
<li><p>Exiftool for video metadata</p>
</li>
<li><p>Gstreamer for video thumbnail generation</p>
</li>
</ul>
</li>
<li><p>Please note if you use a system monitor that displays network activity,
don’t be alarmed if it shows increased local network activity while the
program is running. The program uses ZeroMQ over TCP/IP for its
interprocess messaging. Rapid Photo Downloader’s network traffic is
strictly between its own processes, all running solely on your computer.</p>
</li>
<li><p>Missing features, which will be implemented in future releases:</p>
<ul>
<li><p>Components of the user interface that are used to configure file
renaming, download subfolder generation, backups, and miscellaneous
other program preferences. While they can be configured by manually
editing the program’s configuration file, that’s far from easy and is
error prone. Meanwhile, some options can be configured using the command
line.</p>
</li>
<li><p>There are no full size photo and video previews.</p>
</li>
<li><p>There is no error log window.</p>
</li>
<li><p>Some main menu items do nothing.</p>
</li>
<li><p>Files can only be copied, not moved.</p>
</li>
</ul>
</li>
</ul>
<hr>
<p>Of course, Damon doesn’t sit still.  He quickly followed up the 0.9.0a1 announcement by <a href="https://discuss.pixls.us/t/rapid-photo-downloader-0-9-0a2-is-released/1424">announcing 0.9.0a2</a> which included a few bug fixes from the previous release:</p>
<ul>
<li><p>Added command line option to import preferences from an old program
version (0.4.11 or earlier).</p>
</li>
<li><p>Implemented auto unmount using GIO (which is used on most Linux desktops) and
UDisks2 (all those desktops that don’t use GIO, e.g. KDE). </p>
</li>
<li><p>Fixed bug while logging processes being forcefully terminated.</p>
</li>
<li><p>Fixed bug where stored sequence number was not being correctly used when
renaming files.</p>
</li>
<li><p>Fixed bug where download would crash on Python 3.4 systems due to use of Python
3.5 only math.inf</p>
</li>
</ul>
<hr>
<p>If you’ve been considering optimizing your workflow for photo import and initial sorting, now is as good a time as any, particularly with all of the great new features that have been packed into this release!  Head on over to the <a href="http://www.damonlynch.net/rapid/">Rapid Photo Downloader</a> website to have a look and see the instructions for getting a copy:</p>
<p><a href="http://damonlynch.net/rapid/download.html">http://damonlynch.net/rapid/download.html</a></p>
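<p>For those curious what the pip-based installation mentioned in the ChangeLog boils down to, the general shape is something like the following; the package name and exact steps are assumptions on my part, so treat the download page above as the authoritative instructions:</p>
<pre><code>$ python3 -m pip install --user rapid-photo-downloader
$ rapid-photo-downloader
</code></pre>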
<p>Remember, this is <em>Alpha</em> software still (though most of the functionality is all in place).  If you do run into any problems, please drop in and let Damon know in <a href="https://discuss.pixls.us/t/rapid-photo-downloader-0-9-0a2-is-released/1424">the forums</a>!</p>
<style>
ol { max-width: 32rem; margin:0 auto; }
</style>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ G'MIC 1.7.1 ]]></title>
            <link>https://pixls.us/blog/2016/05/g-mic-1-7-1/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/05/g-mic-1-7-1/</guid>
            <pubDate>Wed, 18 May 2016 00:00:00 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/then_we_shall_all_burn_together.jpg" /><br/>
                 <h1>G'MIC 1.7.1</h1>  
                 <h2>When the flowers are blooming, image filters abound!</h2>   
                <p>A new version <strong>1.7.1</strong> &ldquo;<em>Spring 2016</em>&rdquo; of <a href="http://gmic.eu"><em>G’MIC</em></a> (<em>GREYC’s Magic for Image Computing</em>),
the open-source framework for image processing, has been released recently (<em>26 April 2016</em>).
This is a great opportunity to summarize some of the latest advances and features over the last 5 months.</p>
<!-- more -->
<h2 id="g-mic-a-brief-overview"><a href="#g-mic-a-brief-overview" class="header-link-alt">G’MIC: A brief overview</a></h2>
<p><a href="http://gmic.eu"><em>G’MIC</em></a> is an open-source project started in <em>August 2008</em>. It has been developed in the
<a href="https://www.greyc.fr/image"><em>IMAGE</em> team</a> of the <a href="https://www.greyc.fr/fr/node/6"><em>GREYC</em></a> laboratory
from the <a href="http://www.cnrs.fr"><em>CNRS</em></a> (one of the major French public research institutes).
This team is made up of researchers and teachers specializing in the algorithms and mathematics of image processing.
<em>G’MIC</em> is released under the free software licence <a href="http://www.cecill.info/licences/Licence_CeCILL_V2.1-en.html"><em>CeCILL</em></a>
(<em>GPL</em>-compatible) for various platforms (<em>Linux, Mac and Windows</em>). It provides a set of various user interfaces
for the manipulation of <em>generic</em> image data, that is, images or image sequences of
<a href="https://en.wikipedia.org/wiki/Hyperspectral_imaging">multispectral data</a> in <em>2D</em> or <em>3D</em>, with high-bit precision
(up to 32-bit floats per channel). Of course, it handles “classical” color images as well.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/logo_gmic.png" alt='logo_gmic' width='639' height='211'>
<figcaption>
Logo and (new) mascot of the G’MIC project, the open-source framework for image processing.
</figcaption>
</figure>

<p>Note that the project’s mascot <em>Gmicky</em> just got a redesign, drawn by <a href="http://www.davidrevoy.com/static6/about-me"><em>David Revoy</em></a>, a French illustrator well known to free graphics lovers as the author of the great libre webcomic <a href="http://www.peppercarrot.com/"><em>Pepper&amp;Carrot</em></a>.</p>
<p><em>G’MIC</em> is probably best known for its <a href="http://www.gimp.org"><em>GIMP</em></a> <a href="http://gmic.eu/gimp.shtml">plug-in</a>,
first released in <em>2009</em>. Today, this popular <em>GIMP</em> extension offers more than <em>460</em> customizable filters and effects
to apply to your images.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_gimp171_s.png" alt='gmic_gimp171_s' width='640' height='377'>
<figcaption>
Overview of the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p>But <em>G’MIC</em> is not just a plug-in for GIMP. It also offers a <a href="http://gmic.eu/reference.shtml">command-line interface</a>, which can
be used alongside the <em>CLI</em> tools from <a href="http://www.imagemagick.org/"><em>ImageMagick</em></a> or
<a href="http://www.graphicsmagick.org"><em>GraphicsMagick</em></a>
(this is undoubtedly the most powerful and flexible interface of the framework).
<em>G’MIC</em> also has a web service <a href="https://gmicol.greyc.fr/"><em>G’MIC Online</em></a> to apply effects on your images
directly from a web browser. Other <em>G’MIC</em>-based interfaces also exist (<a href="https://www.youtube.com/watch?v=k1l3RdvwHeM"><em>ZArt</em></a>,
a plug-in for <a href="http://www.krita.org"><em>Krita</em></a>, filters for <a href="http://photoflowblog.blogspot.fr/"><em>Photoflow</em></a>…).
All these interfaces are based on the generic <em>C++</em> libraries <a href="http://cimg.eu"><em>CImg</em></a> and
<a href="http://gmic.eu/libgmic.shtml"><em>libgmic</em></a> which are portable, thread-safe and multi-threaded
(through the use of <a href="http://openmp.org/"><em>OpenMP</em></a>).
Today, <em>G’MIC</em> has more than <a href="http://gmic.eu/reference.shtml"><em>900</em> functions</a> to process images, all
fully configurable, for a library of only approximately <em>150 kloc</em> of source code.
Its features cover a wide spectrum of the image processing field, with algorithms for
geometric and color manipulations, image filtering (denoising/sharpening with spectral, variational or
patch-based approaches…), motion estimation and registration, drawing of graphic primitives (up to 3d vector objects),
edge detection, object segmentation, artistic rendering, etc.
It is a <em>versatile</em> tool, useful for visualizing and exploring complex image data,
as well as for building custom image processing pipelines (see these
<a href="http://issuu.com/dtschump/docs/gmic_slides">slides</a> for more information about
the motivations and goals of the <em>G’MIC</em> project).</p>
<h2 id="a-selection-of-some-new-filters-and-effects"><a href="#a-selection-of-some-new-filters-and-effects" class="header-link-alt">A selection of some new filters and effects</a></h2>
<p>Here we describe some of the most significant recently added filters. We illustrate their usage
with the <em>G’MIC</em> plug-in for <em>GIMP</em>. All of these filters are of course available from the other interfaces as well
(in particular within the <em>CLI</em> tool <a href="http://gmic.eu/reference.shtml"><code>gmic</code></a>).</p>
<h3 id="painterly-rendering-of-photographs"><a href="#painterly-rendering-of-photographs" class="header-link-alt">Painterly rendering of photographs</a></h3>
<p>The filter <strong>Artistic / Brushify</strong> tries to transform an image into a <em>painting</em>.
Here, the idea is to simulate the process of painting with brushes on a white canvas. One provides a template image,
and the algorithm first analyzes the image geometry (local contrasts and orientations of the contours), then
attempts to reproduce the image with a single <em>brush</em> that is locally rotated and scaled according to the
contour geometry.
By simulating enough brushstrokes, one gets a “painted” version of the template image, more or less close to the original,
depending on the brush shape, its size, the number of allowed orientations, etc.
All these settings are customizable by the user as parameters of the algorithm,
so this filter can render a wide variety of painting effects.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_brushify.jpg" alt='gmic_brushify' width='640' height='399'>
<figcaption>
Overview of the filter “Brushify” in the G’MIC plug-in for GIMP. The brush that will be used by the algorithm is visible on the top left.
</figcaption>
</figure>
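<p>The geometry-analysis step described above is easy to sketch. Below is a toy Python illustration (hypothetical code with made-up helper names, not <em>G’MIC</em>’s actual implementation): it estimates the local image gradient with finite differences and orients the brush perpendicular to it, i.e. along the contour.</p>

```python
# Toy sketch of Brushify's orientation analysis (hypothetical, not G'MIC code).
import math

def gradient(img, x, y):
    # central finite differences on a grayscale image stored as a list of rows
    w, h = len(img[0]), len(img)
    gx = (img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
    gy = (img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
    return gx, gy

def brush_angle(img, x, y):
    # the brush follows the contour, i.e. is perpendicular to the gradient
    gx, gy = gradient(img, x, y)
    return math.atan2(gy, gx) + math.pi / 2

# a vertical edge: the gradient points along +x, so the brush stays vertical
edge = [[0.0 if x < 2 else 1.0 for x in range(5)] for _ in range(5)]
```

The real filter would then stamp a rotated and scaled copy of the brush at each sample point, with the scale typically derived from the local contrast in a similar fashion.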

<p>The animation below illustrates the diversity of results one can get with this filter, applied to the same
input picture of a lion. Various brush shapes and geometries have been supplied to the algorithm.
<em>Brushify</em> is computationally expensive, so its implementation is parallelized (each core draws several brushstrokes simultaneously).</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/brushify2.gif" alt='brushify2' width='640' height='512'>
<figcaption>
A few examples of renderings obtained with “Brushify” from the same template image, but with different brushes and parameters.
</figcaption>
</figure>

<p>Note that it’s particularly fun to invoke this filter from the command-line interface (using the option <code>-brushify</code>
available in <code>gmic</code>) to process a sequence of video frames
(<a href="https://www.youtube.com/watch?v=tf_fMzS3UyQ&amp;feature=youtu.be">see this example of a “brushified” video</a>):</p>
<div class='fluid-vid'>
<iframe width="640" height="480" src="https://www.youtube-nocookie.com/embed/tf_fMzS3UyQ?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p><br></p>
<h3 id="reconstructing-missing-data-from-sparse-samples"><a href="#reconstructing-missing-data-from-sparse-samples" class="header-link-alt">Reconstructing missing data from sparse samples</a></h3>
<p><em>G’MIC</em> has gained a new algorithm to reconstruct missing data in images. This is a classical problem in image processing,
often named “<a href="https://en.wikipedia.org/wiki/Inpainting">Image Inpainting</a>”, and <em>G’MIC</em> already had a lot of
useful filters to solve this problem.
Here, the newly added interpolation method assumes that only a sparse set of image data is known, for instance a few scattered pixels
over the image (instead of continuous chunks of image data). The analysis and the reconstruction of the global
image geometry are then particularly tough.</p>
<p>The new option <code>-solidify</code> in <em>G’MIC</em> allows the reconstruction of dense image data from such a sparse sampling,
based on a multi-scale <a href="https://en.wikipedia.org/wiki/Diffusion_equation">diffusion PDE</a>-based technique.
The figure below illustrates the abilities of the algorithm with an example of image reconstruction. We start from
an input <a href="https://www.flickr.com/photos/jfrogg/5810936597/in/photolist-9Ruz12-oHDr6x-8VW83C-iM2cR1-oXCyji-nTGYXY-oavqFt-5emqwQ-8Qx6Nx-pkREpT-nYhS8D-najxb9-a3XHVZ-jUq3Aw-qGTeCo-r2yj33-pvci15-p7WnqP-ajPFM1-7SquY5-6busU-7B5iLy-9Av8Kr-4jZ6zq-b2anbD-c2LF73-aiQ5Ta-cdTWpb-ob7FJx-aohzY1-razwT3-p5rXdc-fCvsV3-4N8vKM-4Nhy4z-4HVUCr-eMUCnQ-bqJnaX-6CuzQd-qCYpsk-NzLkj-hYUtqE-oVbqnh-4H1DkM-r4ArWu-drpZHp-pHbCDL-8Zr8K1-xxf3Q9-e8dK5N">image of a waterdrop</a>,
and we keep only 2.7% of the image data (a very small amount of data!). The algorithm is able to reconstruct
a whole image that looks like the input, even if all the small details have not been
fully recovered (of course!). The more samples we have, the finer the details we can recover.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/waterdrop2.gif" alt='waterdrop2' width='640' height='346'>
<figcaption>
Reconstruction of an image from a sparse sampling.
</figcaption>
</figure>
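<p>To give an idea of how such a reconstruction behaves, here is a minimal toy sketch in Python (hypothetical code, far simpler than <em>G’MIC</em>’s multi-scale implementation): the known samples are held fixed while repeated neighborhood averaging (a discrete heat/diffusion equation) fills in the unknown pixels.</p>

```python
# Toy diffusion-based "solidify" (hypothetical sketch, not G'MIC's algorithm).
def solidify(width, height, known, iterations=500):
    """known: dict mapping (x, y) -> value; returns a dense grid of values."""
    grid = [[0.0] * width for _ in range(height)]
    for (x, y), v in known.items():
        grid[y][x] = v
    for _ in range(iterations):
        nxt = [row[:] for row in grid]
        for y in range(height):
            for x in range(width):
                if (x, y) in known:
                    continue  # known samples act as hard constraints
                # diffusion step: replace by the average of the 4-neighborhood
                acc, n = 0.0, 0
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if 0 <= x + dx < width and 0 <= y + dy < height:
                        acc += grid[y + dy][x + dx]
                        n += 1
                nxt[y][x] = acc / n
        grid = nxt
    return grid

# two known pixels on a 5x1 strip: diffusion converges to a linear ramp
field = solidify(5, 1, {(0, 0): 0.0, (4, 0): 100.0})
```

On this tiny example the steady state is the linear interpolation 0, 25, 50, 75, 100; on a 2D image with scattered samples, the same kind of process produces the smooth reconstruction seen in the waterdrop example.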

<p>As this reconstruction technique is quite generic, several new <em>G’MIC</em> filters take advantage of it:</p>
<ul>
<li>Filter <strong>Repair / Solidify</strong> applies the algorithm in a direct manner, by reconstructing transparent areas
from the interpolation of opaque regions.
The animation below shows how this filter can be used to create an artistic blur on the image borders.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_sol.gif" alt='gmic_sol' width='640' height='410'>
<figcaption>
Overview of the “Solidify” filter, in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p>From an artistic point of view, this filter offers many possibilities.
For instance, it becomes really easy to generate color gradients with complex shapes, as shown with the two examples below
(and in <a href="https://www.youtube.com/watch?v=rgLQayllv-g">this video</a>, which details the whole process).</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_solidify2.jpg" alt='gmic_solidify2' width='636' height='636'>
<figcaption>
Using the “Solidify” filter of G’MIC to easily create color gradients with complex shapes (input images on the left, filter results on the right).
</figcaption>
</figure>

<ul>
<li>Filter <strong>Artistic / Smooth abstract</strong> uses the same idea as the waterdrop example above:
it purposely sub-samples the image in a sparse way, choosing keypoints mainly on the image edges, then uses the reconstruction
algorithm to get the image back. With a low number of samples, the filter can only render a piecewise smooth image,
i.e. a smooth abstraction of the input image.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/smooth_abstract.jpg" alt='smooth_abstract' width='640' height='456'>
<figcaption>
Overview of the “Smooth abstract” filter in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<ul>
<li>Filter <strong>Rendering / Gradient [random]</strong> is able to synthesize random colored backgrounds. Here again, the filter initializes
a set of color keypoints randomly chosen over the image, then interpolates them with the new reconstruction algorithm.
We end up with a psychedelic background composed of randomly oriented color gradients.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gradient_random.jpg" alt='gradient_random' width='640' height='387'>
<figcaption>
Overview of the “Gradient [random]” filter in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<ul>
<li><strong>Simulation of analog films</strong>: the new reconstruction algorithm also allowed a major improvement
to all the analog film emulation filters that have been present in <em>G’MIC</em> for years.
The section <strong>Film emulation/</strong> offers a wide variety of filters for this purpose. Their goal is to apply color transformations
that simulate the look of a picture shot with an analog camera and a certain kind of film.
Below, you can see for instance a few of the <em>300+</em> colorimetric transformations available in <em>G’MIC</em>.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_clut1.jpg" alt='gmic_clut1' width='481' height='725'>
<figcaption>
A few of the 300+ color transformations available in G’MIC.
</figcaption>
</figure>

<p>From an algorithmic point of view, such a color mapping is extremely simple to implement:
for each of the <em>300+</em> presets, <em>G’MIC</em> actually has a <a href="http://www.quelsolaar.com/technology/clut.html"><em>HaldCLUT</em></a>, that is,
a function that defines, for each possible color <em>(R,G,B)</em> of the original image, the new color <em>(R’,G’,B’)</em> to set
instead. As this function is not necessarily analytic, a <em>HaldCLUT</em> is stored in a discrete manner, as a lookup table that gives
the result of the mapping <em>for all</em> possible colors of the <em>RGB</em> cube (that is, <em>2^24 = 16777216</em> values
if we work with <em>8-bit</em> precision per color component). This <em>HaldCLUT</em>-based color mapping is illustrated below for all values of the <em>RGB</em> color cube.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_clut0.jpg" alt='gmic_clut0' width='322' height='445'>
<figcaption>
Principle of an HaldCLUT-based colorimetric transformation.
</figcaption>
</figure>
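<p>The table-lookup principle can be sketched with a toy Python example (a hypothetical miniature: a real 8-bit <em>HaldCLUT</em> stores one entry per color of the full <em>RGB</em> cube, while here we keep only a few quantization levels per channel):</p>

```python
# Toy HaldCLUT-style lookup (hypothetical sketch): a table that stores, for
# every quantized (R,G,B), the (R',G',B') color to use instead.
def make_invert_clut(levels=4):
    # a tiny LUT that inverts colors, with 'levels' nodes per channel
    step = 255 // (levels - 1)
    return {
        (r, g, b): (255 - r, 255 - g, 255 - b)
        for r in range(0, 256, step)
        for g in range(0, 256, step)
        for b in range(0, 256, step)
    }

def apply_clut(clut, color, levels=4):
    # quantize the input color to the nearest LUT node, then look it up
    step = 255 // (levels - 1)
    key = tuple(min(round(c / step) * step, 255) for c in color)
    return clut[key]

clut = make_invert_clut()
```

A real implementation would also interpolate between the nearest LUT nodes rather than snapping to a single one, to avoid banding.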

<p>This is a large amount of data: even after subsampling the <em>RGB</em> space (e.g. with <em>6 bits</em> per component) and compressing the corresponding <em>HaldCLUT</em> file,
you end up with between approximately <em>200</em> and <em>300</em> kB for each mapping file.
Multiply this number by <em>300+</em> (the number of available mappings in <em>G’MIC</em>) and you get a total of roughly <em>85MB</em> of data to store all these color transformations.
Definitely not convenient to distribute and package!</p>
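<p>A quick back-of-the-envelope check of these figures (plain Python arithmetic, for illustration only):</p>

```python
# Storage cost of HaldCLUT color mappings (illustrative arithmetic).
full_entries = 2 ** 24           # one (R',G',B') entry per 8-bit RGB color
full_bytes = full_entries * 3    # 3 bytes per stored color: ~48 MiB per LUT
sub_entries = (2 ** 6) ** 3      # subsampled to 6 bits per component
sub_bytes = sub_entries * 3      # ~768 KiB per LUT before compression
total = 300 * 250 * 1024         # ~300 compressed files at ~250 kB each
```

which lands in the tens-of-megabytes range quoted above.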
<p>The idea was then to develop a new lossy compression technique focused on <em>HaldCLUT</em> files, that is, on volumetric discretized vector-valued functions which are piecewise smooth by nature.
And that is what has been done in <em>G’MIC</em>, thanks to the new sparse reconstruction algorithm. Indeed, the reconstruction technique also works with <em>3D</em> image data (such as a <em>HaldCLUT</em>!), so
one simply has to extract a sufficient number of significant keypoints in the <em>RGB</em> cube and interpolate them afterwards to reconstruct the whole <em>HaldCLUT</em>
(taking care to keep the reconstruction error small enough that
the color mapping obtained with the compressed <em>HaldCLUT</em> is indistinguishable from the non-compressed one).</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_clut2.jpg" alt='gmic_clut2' width='640' height='320'>
<figcaption>
How the decompression of an HaldCLUT now works in G’MIC, from a set of colored keypoints located in the RGB cube.
</figcaption>
</figure>

<p>Thus, <em>G’MIC</em> doesn’t need to store all the color data from a <em>HaldCLUT</em>, but only a sparse sampling of it (i.e. a sequence of <code>{ rgb_keypoint, new_rgb_color }</code>).
Depending on the geometric complexity of the <em>HaldCLUTs</em> to encode, more or fewer keypoints are necessary (roughly from <em>30</em> to <em>2000</em>).
As a result, the storage of the <em>300+</em> <em>HaldCLUTs</em> in <em>G’MIC</em> now requires only <em>850 KiB</em> of data (instead of <em>85 MiB</em>), that is, a compression gain of <em>99%</em>!
That makes the whole <em>HaldCLUT</em> data storable in a single file that is easy to ship with the <em>G’MIC</em> package. A user can now apply all the <em>G’MIC</em> color transformations
while being offline (previously, each <em>HaldCLUT</em> had to be downloaded separately from the <em>G’MIC</em> server when requested).</p>
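<p>The keypoint idea can be illustrated with a toy Python sketch. Note that the inverse-distance weighting below is a deliberate simplification of my own; <em>G’MIC</em> actually interpolates the keypoints with its <em>PDE</em>-based reconstruction algorithm.</p>

```python
# Hypothetical sketch of decompressing a color mapping from sparse keypoints.
def decompress_color(keypoints, color):
    """keypoints: list of ((r, g, b), (r2, g2, b2)) pairs; returns the
    interpolated output color for 'color' (inverse-distance weighting)."""
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for src, dst in keypoints:
        d2 = sum((a - b) ** 2 for a, b in zip(src, color))
        if d2 == 0:
            return dst  # exact keypoint: return its stored color
        w = 1.0 / d2
        den += w
        for i in range(3):
            num[i] += w * dst[i]
    return tuple(round(v / den) for v in num)

# two keypoints: deep black brightens to (10,10,10), white darkens a little
keys = [((0, 0, 0), (10, 10, 10)), ((255, 255, 255), (200, 200, 200))]
```

With only these two keypoints, any gray in between is reconstructed as a blend of the two stored output colors.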
<p>This new reconstruction algorithm from sparse samples is really useful, and no doubt it will be used in other filters in the future.</p>
<h3 id="make-textures-tileable"><a href="#make-textures-tileable" class="header-link-alt">Make textures tileable</a></h3>
<p>Filter <strong>Arrays &amp; tiles / Make seamless [patch-based]</strong> tries to transform an input texture to make it <em>tileable</em>, so that it can be duplicated as <em>tiles</em> along the horizontal and vertical axes
without visible seams on the borders of adjacent tiles.
Note that this can be extremely hard to achieve if the input texture has little self-similarity or shows glaring spatial changes in luminosity.
That is the case, for instance, with the “Salmon” texture shown below as four adjacent tiles (a <em>2x2</em> configuration) with lighting that goes from dark (on the left) to bright (on the right).
Here, the algorithm modifies the texture so that the tiling shows no seams, while preserving the aspect of the original texture as much as possible
(only the texture borders are modified).</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/seamless1.gif" alt='seamless1' width='640' height='532'>
<figcaption>
Overview of the “Make Seamless” filter in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p>We can imagine some great uses of this filter, for instance in video games, where texture tiling is common to render large virtual worlds.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/seamless2.gif" alt='seamless2' width='640' height='427'>
<figcaption>
Result of the “Make seamless” filter of G’MIC to make a texture tileable.
</figcaption>
</figure>


<h3 id="image-decomposition-into-several-levels-of-details"><a href="#image-decomposition-into-several-levels-of-details" class="header-link-alt">Image decomposition into several levels of details</a></h3>
<p>A “new” filter <strong>Details / Split details [wavelets]</strong> has been added to decompose an image into several levels of details.
It is based on the so-called <a href="https://en.wikipedia.org/wiki/Stationary_wavelet_transform">“à trous” wavelet decomposition</a>.
For those who already know the popular <a href="http://registry.gimp.org/node/11742"><em>Wavelet Decompose</em></a> plug-in for <em>GIMP</em>, there won’t be much novelty here, as it is mainly the same kind of
decomposition technique that has been implemented.
Having it directly in <em>G’MIC</em> is still great news: it now offers a preview of the different scales that will be computed, and the implementation is parallelized to take advantage of multiple cores.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_wavelets.jpg" alt='gmic_wavelets' width='640' height='448'>
<figcaption>
Overview of the wavelet-based image decomposition filter, in the G’MIC plug-in for GIMP.
</figcaption>
</figure>
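<p>The decomposition principle is easy to sketch in one dimension (hypothetical toy Python, not the plug-in’s code): each detail layer is the difference between two successively smoothed versions of the signal, with the smoothing kernel’s “holes” doubling at each scale, so that all the layers sum exactly back to the original.</p>

```python
# 1-D "à trous" wavelet split (hypothetical toy sketch).
def smooth(signal, spacing):
    # [1, 2, 1]/4 kernel with holes ("trous") of the given spacing,
    # clamping indices at the borders
    n = len(signal)
    out = []
    for i in range(n):
        left = signal[max(i - spacing, 0)]
        right = signal[min(i + spacing, n - 1)]
        out.append((left + 2 * signal[i] + right) / 4.0)
    return out

def split_details(signal, scales=3):
    layers, current = [], list(signal)
    for s in range(scales):
        smoothed = smooth(current, 2 ** s)  # double the hole spacing per scale
        layers.append([a - b for a, b in zip(current, smoothed)])
        current = smoothed
    layers.append(current)  # residual: the coarsest scale
    return layers

layers = split_details([0.0, 1.0, 0.0, 5.0, 0.0, 1.0, 0.0, 2.0])
recon = [sum(vals) for vals in zip(*layers)]  # telescoping sum == original
```

Editing a single layer before summing changes only the details at that scale, which is exactly what the skin retouching workflow described below relies on.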

<p>The filter outputs several layers, so that each layer contains the details of the image at a given scale. All those layers blended together give the original image back.</p>
<p>Thus, one can work on those output layers separately and modify the image details only at a given scale. There are a lot of applications for this kind of image decomposition,
one of the most spectacular being the ability to retouch skin in portraits: skin flaws are indeed often present in the middle-sized scales, while
the natural skin texture (the pores) lives in the fine details. By selectively removing the flaws while keeping the pores, the skin keeps a natural look after retouching
(see <a href="http://blog.patdavid.net/2011/12/getting-around-in-gimp-skin-retouching.html">this wonderful link</a> for a detailed tutorial about skin retouching techniques with <em>GIMP</em>).</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/skin.gif" alt='skin' width='480' height='480'>
<figcaption>
Using the wavelet decomposition filter in G’MIC for removing visible skin flaws on a portrait.
</figcaption>
</figure>


<h3 id="image-denoising-based-on-patch-pca-"><a href="#image-denoising-based-on-patch-pca-" class="header-link-alt">Image denoising based on “Patch-PCA”</a></h3>
<p><em>G’MIC</em> is also well known for offering a wide range of algorithms for image <em>denoising</em> and <em>smoothing</em> (currently more than a dozen). And it just got one more!
Filter <strong>Repair / Smooth [patch-pca]</strong> provides a new image denoising algorithm that is both effective and computationally intensive (despite its multi-threaded implementation, you
should probably avoid it on a machine with fewer than 8 cores…).
In return, it sometimes does magic at suppressing noise while preserving small details.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/patchpca.jpg" alt='patchpca' width='640' height='291'>
<figcaption>
Result of the new patch-based denoising algorithm added to G’MIC.
</figcaption>
</figure>


<h3 id="the-droste-effect"><a href="#the-droste-effect" class="header-link-alt">The “Droste” effect</a></h3>
<p><a href="https://en.wikipedia.org/wiki/Droste_effect">The Droste effect</a> (also known as “<em>mise en abyme</em>” in art) is the effect of a picture appearing within itself recursively.
To achieve this, a new filter <strong>Deformations / Continuous droste</strong> has been added to <em>G’MIC</em>. It’s actually a complete rewrite of <em>Mathmap</em>’s popular
<a href="https://www.flickr.com/groups/88221799@N00/discuss/72157601071820707/">Droste filter</a>, which has existed for years.
<em>Mathmap</em> was a very popular plug-in for <em>GIMP</em>, but it seems to be unmaintained now. The Droste effect was one of its most iconic and complex filters.
<em>Martin “Souphead”</em>, a former user of <em>Mathmap</em>, then took the bull by the horns and ported the complex code of this filter to a <em>G’MIC</em> script,
resulting in a parallelized implementation of the filter.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/droste0.jpg" alt='droste0' width='640' height='373'>
<figcaption>
Overview of the converted “Droste” filter, in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p>This filter allows all kinds of artistic flights of fancy. For instance, it becomes trivial to create the result below in a few steps: create a selection around the clock, move it onto a transparent background, run the <em>Droste</em> filter,
<em>et voilà!</em></p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/droste2.jpg" alt='droste2' width='488' height='736'>
<figcaption>
A simple example of what the G’MIC “Droste” filter can do.
</figcaption>
</figure>


<h3 id="equirectangular-to-nadir-zenith-transformation"><a href="#equirectangular-to-nadir-zenith-transformation" class="header-link-alt">Equirectangular to nadir-zenith transformation</a></h3>
<p>The filter <strong>Deformations / Equirectangular to nadir-zenith</strong> is another filter converted from <em>Mathmap</em> to <em>G’MIC</em>.
It is specifically used for the processing of panoramas: it reconstructs both the
<a href="https://en.wikipedia.org/wiki/Zenith"><em>Zenith</em></a> and the
<a href="https://en.wikipedia.org/wiki/Nadir"><em>Nadir</em></a> regions of a panorama so that they can be easily modified
(for instance to reconstruct missing parts), before being reprojected back into the input panorama.</p>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/zenith1.jpg" alt='zenith1' width='640' height='318'>
<figcaption>
Overview of the “Deformations / Equirectangular to nadir-zenith” filter in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<p><a href="https://plus.google.com/u/0/b/117441237982283011318/115320419935722486008/posts"><em>Morgan Hardwood</em></a> has written quite a detailed tutorial,
<a href="https://discuss.pixls.us/t/panography-patching-the-zenith-and-nadir/585">here on pixls.us</a>,
about the reconstruction of missing parts in the Zenith/Nadir of an equirectangular panorama. Check it out!</p>
<h2 id="other-various-improvements"><a href="#other-various-improvements" class="header-link-alt">Other various improvements</a></h2>
<p>Finally, here are other highlights about the <em>G’MIC</em> project:</p>
<ul>
<li>Filter <strong>Rendering / Kitaoka Spin Illusion</strong> is another <em>Mathmap</em> filter converted to <em>G’MIC</em> by <em>Martin “Souphead”</em>. It generates a certain kind of
<a href="http://www.ritsumei.ac.jp/~akitaoka/index-e.html">optical illusion</a> as shown below (close your eyes if you are epileptic!)</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/spin2.jpg" alt='spin2' width='422' height='422'>
<figcaption>
Result of the “Kitaoka Spin Illusion” filter.
</figcaption>
</figure>

<ul>
<li>Filter <strong>Colors / Color blindness</strong> transforms the colors of an image to simulate different types of <a href="https://en.wikipedia.org/wiki/Color_blindness">color blindness</a>.
This can be very helpful to check the accessibility of a web site or a graphical document for colorblind people.
The color transformations used here are the same as defined on <a href="http://www.color-blindness.com/coblis-color-blindness-simulator/"><em>Coblis</em></a>,
a website that applies this kind of simulation online. The <em>G’MIC</em> filter gives strictly identical results, but it eases
the batch processing of several images at once.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_cb.jpg" alt='gmic_cb' width='640' height='397'>
<figcaption>
Overview of the colorblindness simulation filter, in the G’MIC plug-in for GIMP.
</figcaption>
</figure>

<ul>
<li>For a few years now, <em>G’MIC</em> has had its own mathematical expression parser, a really convenient module for performing complex calculations when applying image filters.
This core feature gains new functionality: the ability to manage variables that can be complex, vector or matrix-valued, as well as the creation of
user-defined mathematical functions. For instance, the classical rendering of the <a href="https://en.wikipedia.org/wiki/Mandelbrot_set"><em>Mandelbrot</em> fractal set</a>
(done by estimating the divergence of a sequence of complex numbers) can be implemented like this, directly on the command line:<pre><code class="lang-sh">$ gmic 512,512,1,1,&quot;c = 2.4*[x/w,y/h] - [1.8,1.2]; z = [0,0]; for (iter = 0, cabs(z)&lt;=2 &amp;&amp; ++iter&lt;256, z = z**z + c); 6*iter&quot; -map 7,2
</code></pre>
</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_mand.jpg" alt='gmic_mand' width='512' height='512'>
<figcaption>
Using the G’MIC math evaluator to implement the rendering of the Mandelbrot set, directly from the command line!
</figcaption>
</figure>
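<p>For comparison, here is the same divergence-counting loop in plain Python (an illustrative re-implementation of the one-liner above, with the same mapping from pixels to the complex plane):</p>

```python
# Mandelbrot divergence count, mirroring the gmic one-liner (illustrative).
def mandelbrot_iters(c, max_iter=256):
    z = 0j
    n = 0
    while abs(z) <= 2 and n < max_iter:
        z = z * z + c  # the 'z = z**z + c' step of the gmic expression
        n += 1
    return n

def render(w=64, h=64, max_iter=256):
    # map pixel (x, y) to c in [-1.8, 0.6] x [-1.2, 1.2], matching the
    # '2.4*[x/w,y/h] - [1.8,1.2]' part of the gmic expression
    return [[mandelbrot_iters(complex(2.4 * x / w - 1.8, 2.4 * y / h - 1.2),
                              max_iter)
             for x in range(w)] for y in range(h)]

img = render()
```

Mapping the iteration counts through a color palette (the <code>-map</code> step of the one-liner) then yields the picture above.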

<p>This clearly expands the math evaluator’s abilities, as you are not limited to scalar variables anymore. You can now create complex filters which are able to
solve linear systems or compute eigenvalues/eigenvectors, and this for each pixel of an input image.
It’s a bit like having a micro-(micro!)-<a href="https://www.gnu.org/software/octave/"><em>Octave</em></a> inside <em>G’MIC</em>.
Note that the <em>Brushify</em> filter described earlier uses these new features extensively.
It’s also interesting to know that the <em>G’MIC</em> math expression evaluator has its own <a href="https://en.wikipedia.org/wiki/Just-in-time_compilation"><em>JIT</em> compiler</a>
to achieve fast evaluation of expressions applied to thousands of image values simultaneously.</p>
<ul>
<li>Another great contribution comes from <a href="https://plus.google.com/+TobiasFleischer/posts"><em>Tobias Fleischer</em></a>, who created a new <em>C</em>
<a href="https://en.wikipedia.org/wiki/Application_programming_interface"><em>API</em></a> to invoke the functions of the <a href="http://gmic.eu/libgmic.shtml"><em>libgmic</em></a> library
(the library containing all the <em>G’MIC</em> features, initially available through a <em>C++</em> <em>API</em> only).
As the <em>C</em> <a href="https://fr.wikipedia.org/wiki/Application_binary_interface"><em>ABI</em></a> is standardized (unlike <em>C++</em>),
this basically means <em>G’MIC</em> can be interfaced more easily with languages other than <em>C++</em>.
In the future, we can imagine the development of <em>G’MIC</em> <em>APIs</em> for languages such as <em>Python</em> for instance.
<em>Tobias</em> is currently using this new <em>C</em> <em>API</em> to develop <em>G’MIC</em>-based plug-ins compatible with the <a href="https://en.wikipedia.org/wiki/OpenFX_%28API%29"><em>OpenFX</em></a> standard.
Those plug-ins should be usable interchangeably in video editing software such as <a href="https://fr.wikipedia.org/wiki/Adobe_After_Effects">After Effects</a>, <a href="https://fr.wikipedia.org/wiki/Sony_Vegas_Pro">Sony Vegas Pro</a>
or <a href="http://www.natron.fr/">Natron</a>. This is still a work in progress, though.</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_natron.jpg" alt='gmic_natron' width='640' height='391'>
<figcaption>
Overview of some G’MIC-based OpenFX plug-ins, running under Natron.
</figcaption>
</figure>

<ul>
<li>Another contributor, <a href="https://github.com/Starfall-Robles"><em>Robin “Starfall Robles”</em></a>, started to develop a <a href="https://github.com/Starfall-Robles/Blender-2-G-MIC">Python script</a>
to provide some of the <em>G’MIC</em> filters directly in the <a href="http://www.blendernation.com/2016/04/27/creative-imagery-blender-2-gmic/"><em>Blender</em> video sequence editor</a>.
This work is still at an early stage, but you can already apply different <em>G’MIC</em> effects on image sequences (see <a href="https://www.youtube.com/watch?v=TSzoEXAV1zs">this video</a> for a demonstration).</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_blender2.jpg" alt='gmic_blender2' width='640' height='325'>
<figcaption>
Overview of a dedicated G’MIC script running within the Blender VSE.
</figcaption>
</figure>

<ul>
<li><em>G’MIC</em> filters can also be found in the open-source nonlinear video editor <a href="https://github.com/jliljebl/flowblade"><em>Flowblade</em></a>, thanks to the hard work of
<a href="https://plus.google.com/u/0/b/117441237982283011318/102624418925189345577/posts"><em>Janne Liljeblad</em></a> (<em>Flowblade</em> project leader).
Here again, the goal is to allow the application of <em>G’MIC</em> effects and filters directly on image sequences, mainly for artistic purposes
(as shown in <a href="https://vimeo.com/157364651">this video</a> or <a href="https://vimeo.com/164331676">this one</a>).</li>
</ul>
<figure>
<img src="https://pixls.us/blog/2016/05/g-mic-1-7-1/gmic_flowblade.jpg" alt='gmic_flowblade' width='640' height='530'>
<figcaption>
Overview of a G’MIC filter applied under Flowblade, a nonlinear video editor.
</figcaption>
</figure>



<h2 id="what-s-next-"><a href="#what-s-next-" class="header-link-alt">What’s next ?</a></h2>
<p>As you can see, the <em>G’MIC</em> project is doing well, with active development and cool new features added month after month.
You can find and use interfaces to <em>G’MIC</em> in more and more open-source software, such as
<a href="http://www.gimp.org"><em>GIMP</em></a>,
<a href="https://krita.org/"><em>Krita</em></a>,
<a href="https://www.blender.org/"><em>Blender</em></a>,
<a href="https://aferrero2707.github.io/PhotoFlow/"><em>Photoflow</em></a>,
<a href="https://github.com/jliljebl/flowblade"><em>Flowblade</em></a>,
<a href="http://veejayhq.net/">Veejay</a>,
<a href="http://ekd.tuxfamily.org/"><em>EKD</em></a> and
<a href="http://natron.fr/"><em>Natron</em></a> in the near future (at least we hope so!).</p>
<p>At the same time, we can see more and more external resources available for <em>G’MIC</em>: tutorials, blog articles
(<a href="https://discuss.pixls.us/t/fourier-transform-for-fixing-regular-pattern-noise/586">here</a>,
<a href="https://paulsphotopalace.wordpress.com/the-color-mixers-3/">here</a>,
<a href="http://lapizybits.blogspot.com/2015/12/efecto-esbozo.html">here</a>,…),
or demonstration videos
(<a href="https://www.youtube.com/watch?v=YjqMT7Mn2ac">here</a>,
<a href="https://www.youtube.com/watch?v=VPG1dkPlyvo">here</a>,
<a href="https://www.youtube.com/watch?v=N3KqWTmkgB8">here</a>,
<a href="https://www.youtube.com/watch?v=w6Sr1nO5gFo">here</a>,…).
This shows that the project is becoming more and more useful to users of open-source software for graphics and photography.</p>
<p>The development of version <em>1.7.2</em> has already hit the ground running, so stay tuned and visit the official <em>G’MIC</em> <a href="https://discuss.pixls.us/c/software/gmic">forum on pixls.us</a>
to get more info about the project development and get answers to your questions.
Meanwhile, feel the power of <em>free software</em> for image processing!</p>
<h2 id="links-"><a href="#links-" class="header-link-alt">Links:</a></h2>
<ul>
<li><a href="http://gmic.eu">G’MIC home page</a></li>
<li><a href="http://gmic.eu/gimp.shtml">G’MIC plug-in for GIMP</a></li>
<li><a href="http://gmic.eu/tutorial/basics.shtml">Introduction to the CLI interface of G’MIC</a></li>
<li><a href="http://gmic.eu/reference.shtml">Technical reference documentation</a></li>
<li><a href="https://linuxfr.org/news/g-mic-1-7-1-quand-les-fleurs-bourgeonnent-les-filtres-d-images-foisonnent">G’MIC 1.7.1 release article on linuxfr.org</a></li>
</ul>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Post Libre Graphics Meeting ]]></title>
            <link>https://pixls.us/blog/2016/04/post-libre-graphics-meeting/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/04/post-libre-graphics-meeting/</guid>
            <pubDate>Fri, 29 Apr 2016 22:12:46 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/Mairi-Finsbury.jpg" /><br/>
                 <h1>Post Libre Graphics Meeting</h1>  
                 <h2>What a trip!</h2>   
                <p>What a blast!</p>
<p>This trip report is long overdue, but I wanted to process some of my images to share with everyone before I posted.</p>
<p>It had been a couple of years since I had an opportunity to travel and meet with the <a href="https://www.gimp.org">GIMP</a> team again (<a href="https://www.flickr.com/photos/patdavid/albums/72157643712169045">Leipzig</a> was awesome), so I was really looking forward to this trip.  I missed the opportunity to head up to the great white North for last year’s meeting in Toronto.</p>
<!-- more -->
<h2 id="london-calling"><a href="#london-calling" class="header-link-alt">London Calling</a></h2>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/to_LGM.jpg" alt='Passport to LGM'>
<figcaption>
Passport? Check! Magazine? Check! Ready to head to London!
</figcaption>
</figure>

<p>I was going to attend the pre-LGM photowalk again this year so this time I decided to pack some bigger off-camera lighting modifiers for everyone to play with.  Here’s a neat travelling photographer pro-tip: most airlines will let you carry on an umbrella as a “freebie” item.  They just don’t specify that it <em>has</em> to be an umbrella to keep the rain off you.  So I carried on my big Photek Softlighter II (luckily my light stands fit in my checked luggage).  Just be sure not to leave it behind somewhere (which I was paranoid about for most of my trip).  Luckily I was only changing planes in Atlanta.</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/ATL.jpg" alt='Atlanta Airport International Terminal'>
<figcaption>
The new ‘futuristic’ looking Atlanta airport international terminal.
</figcaption>
</figure>

<p>A couple of (<em>bad</em>) movies and hours later I was in Heathrow.  I figured it wouldn’t be much trouble getting through border control.  </p>
<p>I may have been a little optimistic about that.  </p>
<p>The <strong>Border Force</strong> agent was quite nice and <em>super</em> inquisitive.  So much so that I actually began to worry at some point (I think I must have spent almost 20 minutes talking to her) that she might not let me in!</p>
<p>She kept asking what I was coming to London for and I kept trying to explain to her what a “<em>Libre Graphics Meeting</em>” was.  This was almost a tragic comedy.  The idea of Free Software did not seem to compute for her, and I was sorry I had even made the passing mention.  Her attention then turned to my umbrella and photography.  What was I there to photograph?  Who?  Why?  (Come to think of it, I should start asking myself those same questions more often… It was an existential visit to border control.)</p>
<p>In the end I think she got bored with my answers and figured that I was far too awkward to be a threat to anything.  Which pretty much sums up my entire college dating life.</p>
<h2 id="photowalk"><a href="#photowalk" class="header-link-alt">Photowalk</a></h2>
<p>In what I hope will become a tradition we had our photowalk the day before LGM officially kicked off and we could not have asked for a better day of weather!  It was partly cloudy and just gorgeous (pretty much the complete <em>opposite</em> to what I was expecting for London weather). </p>
<h3 id="furtherfield-commons"><a href="#furtherfield-commons" class="header-link-alt">Furtherfield Commons</a></h3>
<p><a href='http://www.furtherfield.org/'>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/furtherfield_header.png" alt='Furtherfield Logo' style='background-color: #D3DBD5;'>
</a></p>
<p>I want to thank <a href="http://ruthcatlow.net">Ruth Catlow</a> for allowing us to use the awesome space at <a href="http://www.furtherfield.org">Furtherfield Commons</a> in Finsbury Park as a base for our photowalk!  They were amazingly accommodating and we had a wonderful time chatting in general about art and what they were up to at the gallery and space.</p>
<p>They have some really neat things going on at the gallery and space so be sure to check them out if you can!</p>
<h3 id="going-for-a-walk-with-friends"><a href="#going-for-a-walk-with-friends" class="header-link-alt">Going for a Walk with Friends</a></h3>
<p>This is one of my favorite things about being able to attend LGM.  I get to take a stroll and talk about photography with friends that I only usually get to interact with through an IRC window. I also feel like I can finally contribute something back to these awesome people that provide software I use every day.</p>
<figure >
<a href="https://www.flickr.com/photos/schumaml/25858162683/in/dateposted/" title="IMGP6089"><img src="https://farm2.staticflickr.com/1443/25858162683_47061b2074_z.jpg" width="640" height="426" alt="IMGP6089"></a>
<figcaption>
Mairi between Simon and myself (I’m holding a reflector for him).<br>
Photo by <a href="https://www.flickr.com/photos/schumaml/25858162683/in/dateposted/">Michael Schumacher</a> <span class='cc'><a href="https://www.flickr.com/photos/103724284@N02/26526017851">cbna</a></span>
</figcaption>
</figure>

<p>We meandered through the park and chatted a bit about various things.  Simon had brought along his external flash and wanted to play with off-camera lighting.  So we convinced Liam to stand in front of a tree for us and Simon ended up taking one of my favorite images from the entire trip.  This was Liam standing in front of the tree under the shade with me holding the flash slightly above him and to the camera right.</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/liam_by_nomis-500.jpg" alt='Liam by nomis'>
<figcaption>
Liam by Simon
</figcaption>
</figure>

<p>We even managed to run into Barrie Minney while on our way back to the Commons building.  Aryeom and I started talking a little bit while walking when we crossed paths with some locals hanging out in the park.  One man in particular was quite outgoing and let Aryeom take his photo, leading to another fun image!</p>
<p>Upon returning to the Commons building we experimented with some of the pretty window light coming into the building along with some black panels and a model (Mairi).  This was quite fun as we were experimenting with various setups for the black panels and speedlights.  Everyone had a chance to try some shots out and to direct Mairi (who was <em>super</em> patient and accommodating while we played).</p>
<figure>
<a href="https://www.flickr.com/photos/patdavid/26059429014/in/dateposted-public/" title="Mairi Natural Light"><img src="https://farm2.staticflickr.com/1456/26059429014_c00b1b6d63_c.jpg" width="598" height="800" alt="Mairi Natural Light"></a>
<figcaption>
I was having so much fun talking and trying things out with everyone that I didn’t even take that many photos of my own!  This is one of my only images of Mairi inside the Commons.<br>
<i>Mairi Natural Light</i> <span class='cc'><a href="https://creativecommons.org/licenses/by-sa/2.0/">cba</a></span>
</figcaption>
</figure>

<p>Towards the end of our day I decided to get my big Softlighter out and to try a few things in the lane outside the Commons building.  Luckily Michael Schumacher grabbed an image of us while we were testing some shots with Mairi outside.</p>
<figure>
<a data-flickr-embed="true"  href="https://www.flickr.com/photos/schumaml/26395969771/in/dateposted/" title="IMGP6108"><img src="https://farm2.staticflickr.com/1612/26395969771_b4a404b072_z.jpg" width="640" height="426" alt="IMGP6108"></a>
<figcaption>
A nice behind-the-scenes image from schumaml of the lighting setup used below.<br>
Yes, that’s <a href='http://www.darktable.org'>darktable</a> developer hanatos bracing the umbrella from the wind for me!<br>
<i>Photo by <a href="https://www.flickr.com/photos/schumaml/25858162683/in/dateposted/">Michael Schumacher</a> </i><span class='cc'><a href="https://www.flickr.com/photos/103724284@N02/26526017851">cbna</a></span>
</figcaption>
</figure>

<p>I loved the lane receding in the background and thought it might make for some fun images of Mairi.  I had two YN-560 flashes in the Softlighter, both firing at around &frac34; power.  I had to balance the ambient sky with the Softlighter, so I needed the extra power of a second flash (it also helps keep the cycle times down).</p>
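<p>As a side note on the flash arithmetic (my own illustration, not from the original shoot notes): exposure stops are the base-2 logarithm of the light ratio, so two identical flashes firing at the same power give exactly one extra stop over a single flash. A quick sketch:</p>

```python
import math

def stops_gained(power_ratio):
    """Exposure stops gained for a given ratio of total light output."""
    return math.log2(power_ratio)

# One YN-560 at 3/4 power vs. two YN-560s at 3/4 power:
single = 0.75          # relative output of one flash
double = 2 * 0.75      # two identical flashes combined
print(stops_gained(double / single))  # 1.0 -> one extra stop of light
```

<p>That extra stop is what lets you hold a small aperture against a bright ambient sky while each flash still recycles faster than it would at full power.</p>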
<figure>
<a href="https://www.flickr.com/photos/patdavid/26581376895/in/dateposted-public/" title="Mairi Finsbury"><img src="https://farm2.staticflickr.com/1565/26581376895_a716383b7e_z.jpg" width="640" height="360" alt="Mairi Finsbury"></a>
<figcaption>
Mairi waiting patiently while we set things up.<br>
<i>Mairi Finsbury</i> <span class='cc'><a href="https://creativecommons.org/licenses/by-sa/2.0/">cba</a></span><br>
50mm <i style='font-family:serif;'>f</i>/8.0 <sup style='margin-right:-0.1rem;'>1</sup>&frasl;<sub style='margin-left:-0.1rem;'>200</sub> ISO200
</figcaption>
</figure>

<figure>
<a href="https://www.flickr.com/photos/patdavid/26365329850/in/dateposted-public/" title="Mairi Finsbury Park (In the Lane)"><img src="https://farm2.staticflickr.com/1443/26365329850_3b9e044e57_z.jpg" width="640" height="640" alt="Mairi Finsbury Park (In the Lane)"></a>
<figcaption>
<i>Mairi Finsbury Park (In the Lane)</i> <span class='cc'><a href="https://creativecommons.org/licenses/by-sa/2.0/">cba</a></span>
</figcaption>
</figure>

<p>The day was awesome and I really enjoyed being able to just hang out with everyone and take some neat photos.  The evening at the pub was pretty great also (I got to hang out with Barrie and his friend and have a couple of pints - <em>thanks again Barrie</em>!).</p>
<h2 id="lgm"><a href="#lgm" class="header-link-alt">LGM</a></h2>
<p>It never fails to amaze me how every year the LGM organizers manage to put together such a great meeting for everyone.  The venue at the University of Westminster was great and the people were just fantastic.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/UoW.jpg" alt='University of Westminster'>
<figcaption>
View of the lobby and meeting rooms (on the second floor).
</figcaption>
</figure>

<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/LGM_Auditorium.jpg" alt='LGM Auditorium'>
<figcaption>
Andrea Ferrero (<a href="https://discuss.pixls.us/users/carmelo_drraw/activity">@Carmelo_DrRaw</a>) presenting <a href='http://aferrero2707.github.io/PhotoFlow/' title='PhotoFlow website'>PhotoFlow</a> in the auditorium!
</figcaption>
</figure>


<p>The opening “<em>State of the Libre Graphics</em>” presentation was given by our (the GIMP team’s) very own João Bueno, who did a fantastic job! João will also be the local organizer for the 2017 LGM in Rio.</p>
<p>Thanks to contributions from community members <a href="https://www.flickr.com/photos/andabata">Kees Guequierre</a>, <a href="https://29a.ch/">Jonas Wagner</a>, and <a href="https://www.flickr.com/photos/philipphaegi">Philipp Haegi</a> I had some great images to use for the PIXLS.US community slides for the “<em>State of the Libre Graphics</em>”.  If anyone is curious, here is what I submitted:</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/PIXLS-0.min.png" alt='PIXLS State of Libre Graphics 0'>
<figcaption>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/PIXLS-1.min.png" alt='PIXLS State of Libre Graphics 1'>
<figcaption>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/PIXLS-2.min.png" alt='PIXLS State of Libre Graphics 2'>
<figcaption>
</figcaption>
</figure>

<p>These slides can be found on our <a href="https://github.com/pixlsus/Presentations">GitHub PIXLS.US Presentations</a> page (along with all of our other presentations that relate to PIXLS.US and promoting the community).  </p>
<p>Speaking of presentations…</p>
<h3 id="presentation"><a href="#presentation" class="header-link-alt">Presentation</a></h3>
<p>I was given some time to talk about and present our community to everyone at the meeting. (See embedded slides below):</p>
<figure>
<a data-flickr-embed="true"  href="https://www.flickr.com/photos/patdavid/albums/72157668276522285" title="LGM2016 PIXLS.US Presentation"><img src="https://farm8.staticflickr.com/7116/26864395042_62177a54de_z.jpg" width="640" height="480" alt="LGM2016 PIXLS.US Presentation"></a><script async src="https://embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
</figure>

<p>I started by looking at my primary motivation for starting the site and at the state of free software photography at the time.  Mainly, the majority of high-quality online resources for photographers (and focused on high-quality results) were aimed at users of proprietary software.  Worse still, in some cases these websites locked their best tutorials and learning content away behind paywalls and subscriptions.  I finished by looking at what was done to build this site and forum as a community where everyone can learn and share with each other freely.</p>
<p>I think the presentation went well and people seemed to be interested in what we were doing!  Nate Willis even published an article about the presentation at <a href="http://lwn.net">LWN.net</a>, <a href="http://lwn.net/Articles/684279/"><em>“Refactoring the open-source photography community”</em></a>:</p>
<figure>
<a href='http://lwn.net/Articles/684279/' title='Refactoring the open-source photography community on LWN.net'>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/04-lgm-david-sm.jpg" alt='Pat David presenting on PIXLS.US at LGM 2016'>
</a>
<figcaption>
A photo of me I <i>don’t</i> hate! :)
</figcaption>
</figure>


<h3 id="exhibition"><a href="#exhibition" class="header-link-alt">Exhibition</a></h3>
<p>A nice change this year was the inclusion of an exhibition space to display works by LGM members and artists.  We even got an opportunity to hang a couple of prints (for some reason they really wanted my quad-print of pippin).  I was particularly happy that we were able to print and display the <a href="https://www.flickr.com/photos/andabata/20025243436"><em>Green Tiger Beetle</em></a> by community member <a href="https://www.flickr.com/photos/andabata">Kees Guequierre</a>:</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/hanatos-houz-lgm.jpg" alt='hanatos and houz at LGM'>
<figcaption>
Hanatos and houz inspecting the prints at the exhibition.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/lgm-exhibition.jpg" alt='View of the LGM Exhibition'>
<figcaption>
View of the Exhibition.  Well attended!
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/pippin-meta.jpg" alt='Pippin x5'>
<figcaption>
pippin x5
</figcaption>
</figure>

<h3 id="portraits"><a href="#portraits" class="header-link-alt">Portraits</a></h3>
<p>In Leipzig I thought it would be nice to offer portraits/headshots of folks that attended the meeting.  I think it’s a great opportunity to get a (hopefully) nice photograph that people can use in social media, avatars, websites, etc.  Here’s a sample of portraits from LGM2014 of the GIMP team that sat for me:</p>
<p><a data-flickr-embed="true" data-footer="true"  href="https://www.flickr.com/photos/patdavid/albums/72157644439419931" title="GIMPers"><img src="https://farm3.staticflickr.com/2900/14075907755_5224004a7c_z.jpg" width="640" height="640" alt="GIMPers"></a><script async src="https://embedr.flickr.com/assets/client-code.js" charset="utf-8"></script></p>
<p>In 2014 I was lucky that houz had brought along an umbrella and stand to use, so this time I figured it was only fair that I bring along some gear myself.  I had the Softlighter set up on the last couple of days for anyone who was interested in sitting for us.  I say us because Marek Kubica (<a href="https://discuss.pixls.us/users/leonidas/activity">@Leonidas</a>) from the community was right there to shoot with me along with the very famous <a href="https://discuss.pixls.us/users/ofnuts/activity">@Ofnuts</a> (well - famous to me - I’ve lost count of the neat things I’ve picked up from his advice)!  Marek took quite a few portraits and managed the subjects very well - he was conversational, engaged, and managed to get some great personality from them.</p>
<figure>
<a  href="https://www.flickr.com/photos/103724284@N02/26526026171/in/pool-libregfx/" title="Still don&#x27;t know your name"><img src="https://farm2.staticflickr.com/1515/26526026171_fbf23edb01_z.jpg" width="640" height="396" alt="Still don&#x27;t know your name"></a>
<figcaption>
A sample portrait by <a href="https://www.flickr.com/photos/103724284@N02/">Marek Kubica</a> <span class='cc'><a href="https://creativecommons.org/licenses/by-sa/2.0/">cba</a></span>
</figcaption>
</figure>

<figure>
<a href="https://www.flickr.com/photos/103724284@N02/26526017851/in/pool-libregfx/" title="Better with glasses"><img src="https://farm2.staticflickr.com/1562/26526017851_dc57d13f50_z.jpg" width="640" height="396" alt="Better with glasses"></a>
<figcaption>
<a href="https://www.flickr.com/photos/103724284@N02/26526017851">Better with glasses</a> by <a href="https://www.flickr.com/photos/103724284@N02/">Marek Kubica</a> <span class='cc'><a href="https://creativecommons.org/licenses/by-sa/2.0/">cba</a></span>
</figcaption>
</figure>

<p>Here are a couple of samples from the images I took as well: the local organizer, Lara, with students from the University!  I simply can’t thank them enough for their efforts and generosity in making us feel so welcome.</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/P4170268-rt.jpg" alt='Lara University of Westminster'>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/P4170276-rt.jpg" alt='Lara University of Westminster'>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/P4170267-rt.jpg" alt='Lara University of Westminster'>
</figure>

<p>I’m still working through the portraits I took, but I’ll have them uploaded to <a href="https://flickr.com/photos/patdavid">my Flickr</a> soon to share with everyone!</p>
<h2 id="gimpers"><a href="#gimpers" class="header-link-alt">GIMPers</a></h2>
<p>One of the best parts of attendance is getting to spend some time with the rest of the GIMP crew.  Here’s an action shot during the GIMP meeting over lunch with a neat, glitchy schumaml:</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/GIMP-pano.jpg" alt='GIMP Meeting Panorama'>
<figcaption>
There’s even some <a href="https://www.darktable.org">darktable</a> nerds thrown in there!
</figcaption>
</figure>

<p>It was great to see everyone at the flat on our last evening there as well…</p>
<figure>
<img src="https://pixls.us/blog/2016/04/post-libre-graphics-meeting/LGM-flat.jpg" alt='GIMP and darktable at LGM'>
<figcaption>
Everyone spending the evening together!  Mitch is missing from his seat in this shot (back there by pippin).
</figcaption>
</figure>


<h2 id="wrap-up"><a href="#wrap-up" class="header-link-alt">Wrap up</a></h2>
<p>Overall this was another incredible meeting bringing together great folks who are building and supporting Free Software and Libre Graphics.  Just my kind of crowd!</p>
<p>I even got a chance to speak a bit with the wonderful <a href="https://github.com/tusuzu">Susan Spencer</a> of the <a href="http://valentinaproject.bitbucket.org/">Valentina</a> project and we roughed out some thoughts about getting together at some point.  It turns out she lives in the same state as me (Alabama)!  This is simply too great not to take advantage of - Free Software Fashion + Photography?!  That will have to be a fun story (and photos) for another day…</p>
<p>Keep watching the blog for some more images from the trip - up next are the portraits of everyone and some more shots of the venue and exhibition!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Pre-LGM Photowalk ]]></title>
            <link>https://pixls.us/blog/2016/04/pre-lgm-photowalk/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/04/pre-lgm-photowalk/</guid>
            <pubDate>Fri, 08 Apr 2016 21:41:36 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/04/pre-lgm-photowalk/at_thomaskirche.jpg" /><br/>
                 <h1>Pre-LGM Photowalk</h1>  
                 <h2>Time to take some photos!</h2>   
                <p>It’s that time of year again!  The weather is turning mild, the days are smelling fresh, and a bunch of photography nerds are all going to get together in a new country to roam around and (<em>possibly</em>) annoy locals by taking a <em>ton</em> of photographs! It’s the Pre-<a href="http://www.libregraphicsmeeting.org/2016/"><em>Libre Graphics Meeting</em></a> photowalk of 2016!</p>
<p>Come join us the day before LGM kicks off to have a stroll through a lovely park and get a chance to shoot some photos between making new friends and having a pint. </p>
<!-- more -->
<p>Thanks to the wonderful work by the local LGM organizing team, we are able to invite everyone out to the photowalk on <strong>Thursday, April 14<sup>th</sup></strong> the day before LGM kicks off.</p>
<p><a href='http://www.furtherfield.org/gallery/about'>
<img src="https://pixls.us/blog/2016/04/pre-lgm-photowalk/furtherfield_header.png" alt='Furtherfield Logo' style='background-color: #D3DBD5;'>
</a></p>
<p>They were able to get us in touch with the kind folks at <a href="http://www.furtherfield.org/gallery/visit">Furtherfield Gallery &amp; Commons</a> in Finsbury Park.  They’ve graciously offered us the use of their facilities at the Furtherfield Commons as a base to start from.  So we will meet at the Commons building at <strong>10:00 on Thursday morning</strong>.</p>
<blockquote>
<p><strong>Pre-LGM Photowalk</strong><br>10:00 (AM), Thursday, April 14<sup>th</sup><br>Furtherfield Commons<br>Finsbury Gate - Finsbury Park<br>Finsbury Park, London, N4 2NQ</p>
</blockquote>
<div class='fluid-vid'>
<figure class='big-vid'>
<iframe width="576" height="350" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" src="https://www.openstreetmap.org/export/embed.html?bbox=-0.10637909173965454%2C51.56489127967849%2C-0.1036781072616577%2C51.566525239509325&amp;layer=mapnik&amp;marker=51.56570826693375%2C-0.10502859950065613" style="border: 1px solid black"></iframe>
<figcaption style='margin-top: 0.5rem;'>
<a href="http://www.openstreetmap.org/?mlat=51.56571&amp;mlon=-0.10503#map=19/51.56571/-0.10503">View Larger Map</a>
</figcaption>
</figure>
</div>

<p>An overview of the photowalk venue relative to the LGM venue at the University of Westminster, Harrow:</p>
<div class='fluid-vid'>
<iframe src="https://www.google.com/maps/d/embed?mid=zYKepeQNftPo.koxL6CFw1nPk" width="640" height="480"></iframe>
</div>

<p>If you would like to join us but may not make it to the Commons by 10:00, email me and let me know.  I’ll try my best to make arrangements to meet up so you can join us a little later.  I can’t imagine we’d be very far away (likely somewhere relatively nearby in the park).</p>
<p>We’ll plan on meandering through the park with frequent stops to shoot images that strike our fancy.  I will personally be bringing along my off-camera lighting equipment and a model (Mairi) to pose for us during the day, in case anyone wants to play with or learn a little about that type of photography.</p>
<p>There is no set time for finishing up.  I figured we would play it by ear through lunch and possibly all finish up at a nice pub together (hopefully taking advantage of the golden-hour light at the end of the day).</p>
<p>In the spirit of saying “Thank you!” and sharing, I have also offered the Furtherfield folks our services for headshots and architectural/environmental shots of the Commons and Gallery spaces.  I will certainly be taking these images for them, but anyone else who wants to pitch in, try, help, or assist would be very welcome!</p>
<figure>
<img src="https://pixls.us/blog/2016/04/pre-lgm-photowalk/dot-leipzig-market.jpg" alt='Dot in the Leipzig Market, 2014'>
<figcaption>
Dot in the Leipzig Market from the 2014 Pre-LGM photowalk.
</figcaption>
</figure>

<p>Speaking of which, if you plan on attending and would like to explore some particular aspect of photography please feel free to let me know.  I’ll do my best to match folks up based on interest.  I sincerely hope this will be a fun opportunity to learn some neat new things, make some new friends, and to maybe grab some great images at the same time!</p>
<p>If there are any questions, please don’t hesitate to reach out to me!<br><code>patdavid@gmail.com</code><br>patdavid on irc://irc.gimp.org/#gimp</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Happy Birthday DISCUSS.PIXLS.US ]]></title>
            <link>https://pixls.us/blog/2016/04/happy-birthday-discuss-pixls-us/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/04/happy-birthday-discuss-pixls-us/</guid>
            <pubDate>Wed, 06 Apr 2016 16:01:30 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/04/happy-birthday-discuss-pixls-us/birthday-cake_1920.jpg" /><br/>
                 <h1>Happy Birthday DISCUSS.PIXLS.US</h1>  
                 <h2>Where did the time go?!</h2>   
<p>For some reason I was checking my account on the forums earlier today and noticed that it was created in April 2015.  On further inspection, it looks like my and @darix’s accounts were created on April 2<sup>nd</sup>, 2015.</p>
<p>(Not to be confused with the main site because apparently it took me about 8 months to get a forum stood up…)</p>
<p>Which means that the forums have been around for just over a year now?!</p>
<p>So, <strong>Happy Birthday</strong> <a href="https://discuss.pixls.us">discuss</a>!</p>
<!-- more -->
<p>We’re just over a year old, with just under <em>500</em> users on the forum!</p>
<p>For fun, I looked for the oldest (public) post we had and it looks like it’s the “<a href="https://discuss.pixls.us/t/welcome-to-pixls-us-discussion/8?u=patdavid">Welcome to PIXLS.US Discussion</a>” thread.  In case anyone wanted to revisit a classic…</p>
<p><strong>THANK YOU</strong> so much to everyone who has made this an awesome place to be and to nerd out about photography, software, and more!  Since we started, we’ve migrated the official <a href="http://gmic.eu">G’MIC</a> forums here, as well as our friends at <a href="http://rawtherapee.com">RawTherapee</a>!
We’ve also been introduced to some awesome projects like <a href="http://aferrero2707.github.io/PhotoFlow/">PhotoFlow</a> and <a href="https://github.com/CarVac/filmulator-gui">Filmulator</a>.  And everyone has just been amazing, supportive, and fun to be around.</p>
<p>As I posted in the original <em>Welcome</em> thread…</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="1280" height="720" src="https://www.youtube-nocookie.com/embed/StTqXEQ2l-Y?rel=0" frameborder="0" allowfullscreen></iframe>
</div>
</div>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Lighting Diagrams ]]></title>
            <link>https://pixls.us/blog/2016/04/lighting-diagrams/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/04/lighting-diagrams/</guid>
            <pubDate>Mon, 04 Apr 2016 22:23:36 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/04/lighting-diagrams/lighting-lede.png" /><br/>
                 <h1>Lighting Diagrams</h1>  
                 <h2>Help Us Build Some Assets!</h2>   
<p>Community member <a href="http://www.ericsbinaryworld.com/">Eric Mesa</a> asked on <a href="https://discuss.pixls.us/t/is-there-a-good-lighting-setup-template-for-gimp/1179/">the forums</a> the other day if there might be some Free resources for photographers who want to build a lighting diagram of their setup.  These are the diagrams that show how a shot might be set up, with the locations of lights, what types of modifiers might be used, and where the camera/photographer might be positioned with respect to the subject.  These diagrams usually also include lighting power details and notes to help the production.</p>
<p>It turns out there wasn’t really anything openly available and permissively licensed.  So we need to fix that…</p>
<!-- more -->
<p>These diagrams are particularly handy for planning a shoot conceptually or explaining what the lighting setup was to someone after the fact.  For instance, here’s a look at the lighting setup for <a href="https://www.flickr.com/photos/patdavid/14297966412">Sarah (Glance)</a>:</p>
<figure>
<img src="https://pixls.us/blog/2016/04/lighting-diagrams/sarah-glance.jpg" alt='Sarah (Glance) by Pat David'>
<figcaption>
Sarah (Glance)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2016/04/lighting-diagrams/sarah-glance.png" alt='Sarah (Glance) Lighting Diagram'>
<figcaption>
YN560 full power into a 60” Photek Softlighter, about 20” from subject.<br>
She was actually a bit further from the rear wall…
</figcaption>
</figure>

<p>There are a few different commercial or restrictive-licensed options for photographers to create a lighting diagram, but nothing truly <a href="http://www.gnu.org/philosophy/free-sw.en.html">Free</a>.</p>
<p>So thanks to the prodding by Eric, I thought it was something we should work on as a community!</p>
<p>I already had a couple of simple, basic shapes created in <a href="https://inkscape.org">Inkscape</a> for another tutorial so I figured I could at least get those files published for everyone to use.</p>
<p>I don’t have much to start with but that shouldn’t be a problem!  I already had a backdrop, person, camera, octabox (+grid), and a softbox (+grid):</p>
<figure>
<img src="https://pixls.us/blog/2016/04/lighting-diagrams/lighting-assets.png" alt='Lighting Diagram Assets'>
</figure>

<h2 id="pixls-us-github-organization"><a href="#pixls-us-github-organization" class="header-link-alt">PIXLS.US Github Organization</a></h2>
<p>I have already set up a <a href="https://github.com/pixlsus">GitHub organization</a> just for PIXLS.US; you can find the lighting-diagram assets there:</p>
<p><a href="https://github.com/pixlsus/pixls-lighting-diagram">https://github.com/pixlsus/pixls-lighting-diagram</a></p>
<p>Feel free to join the organization!</p>
<p>Even better: join the organization and fork the repo to add your own additions and to help us flesh out the available diagram assets for all to use!
From the README.md on that repo, I compiled a list of things I thought might be helpful to create:</p>
<ul>
<li>Cameras<ul>
<li>DSLR</li>
<li>Mirrorless</li>
<li>MF</li>
</ul>
</li>
<li>Strobes<ul>
<li>Speedlight</li>
<li>Monoblock</li>
</ul>
</li>
<li>Lighting Modifiers<ul>
<li>Softbox (+ grid?)</li>
<li>Umbrella (+ grid?)</li>
<li>Octabox (+ grid?)</li>
<li>Brolly</li>
</ul>
</li>
<li>Reflectors</li>
<li>Flags</li>
<li>Barn Doors / Gobo</li>
<li>Light stands? (C-Stands?)</li>
<li>Environmental<ul>
<li>Chairs</li>
<li>Stools</li>
<li>Boxes</li>
<li>Backgrounds (+ stands)</li>
</ul>
</li>
<li>Models</li>
</ul>
<p>If you don’t want to create something from scratch, perhaps grab the files and tweak the existing assets to make them better in some way?</p>
<p>Hopefully we can fill out the list fairly quickly (as it’s a relatively limited set of required shapes).  Even better would be if someone picked up the momentum to create a nice lighting-diagram application of some sort!</p>
<p>The files that are there now are all licensed <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons By-Attribution, Share-Alike 4.0</a>.</p>
<style>
li {
    margin-bottom: initial;
}
</style>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ PlayRaw (Again) ]]></title>
            <link>https://pixls.us/blog/2016/03/playraw-again/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/03/playraw-again/</guid>
            <pubDate>Mon, 21 Mar 2016 22:00:45 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/03/playraw-again/mairi-troisieme-lede.jpg" /><br/>
                 <h1>PlayRaw (Again)</h1>  
                 <h2>The Resurrectioning</h2>   
                <p>On the old <a href="http://rawtherapee.com/">RawTherapee</a> forums they used to have a contest sharing a single raw file amongst the members to see how everyone would approach processing from the same starting point.  They called it <strong>PlayRaw</strong>.  This seemed to really bring out some great work from the community so I thought it might be fun to start doing something similar again here.</p>
<p>I took a (<em>relatively</em>) recent image of <a href="https://www.flickr.com/photos/patdavid/albums/72157632799856846" title="Mairi Album on Flickr">Mairi</a> and decided to see how it would be received (I’d say fairly well given the responses).  This was my result from the raw file that I called <a href="https://www.flickr.com/photos/patdavid/16259030889/in/album-72157632799856846/" title="Mairi Troisieme on Flickr"><em>Mairi Troisième</em></a>:</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2016/03/playraw-again/Mairi Troisieme.jpg" alt='Mairi Troisieme' width='640' height='800'>
</figure>

<p>I made the raw file available under a <a href="https://creativecommons.org/licenses/by-nc-sa/3.0/" title="Creative Commons BY-SA-NC">Creative Commons, By-Attribution, Non-Commercial, Share-Alike license</a> so that anyone could freely download and process the file as they wanted to.</p>
<p>The only things I asked for were to see the results and possibly the processing steps through either an XMP or PP3 sidecar file (<a href="http://www.darktable.org/">darktable</a> and <a href="http://rawtherapee.com/">RawTherapee</a> respectively).</p>
<p>Here’s a montage of the results from everyone:</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2016/03/playraw-again/Mairi-combined.jpg" width='960' height='1896'>
</figure>

<p>I loved being able to see what everyone’s approaches looked like.  It’s neat to get a feel for all the different visions out there among the users and there were some truly beautiful results!</p>
<p>If you haven’t given it a try yourself yet, head on over to the <a href="https://discuss.pixls.us/t/playraw-mairi-troisieme">[PlayRaw] Mairi Troisieme</a> thread to get the raw file and try it out yourself!  Just don’t forget to show us <em>your</em> results in the topic.</p>
<p>I’ll be soliciting options for a new image to kick off another round of processing again soon.</p>
<h2 id="speaking-of-mairi"><a href="#speaking-of-mairi" class="header-link-alt">Speaking of Mairi</a></h2>
<p>Don’t forget that we still have a <a href="https://pledgie.com/campaigns/30905">Pledgie Campaign</a> going on to help us offset the costs of getting everyone together at the <a href="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/">2016 Libre Graphics Meeting in London</a> this April!</p>
<p><a href='https://pledgie.com/campaigns/30905'><img alt='Click here to lend your support to: PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com !' src='https://pledgie.com/campaigns/30905.png?skin_name=chrome' border='0' ></a></p>
<p>Donations go to help cover the costs of bringing the various projects together to meet, photograph, discuss, and hack at things.  Please consider donating, as every little bit helps us immensely!  If you can’t donate, then please consider helping us raise awareness of what we’re trying to do!  Either link the Pledgie campaign to others or let them know we’re here to help and share!</p>
<p>Even better is if you’re in the vicinity of London this April 15&ndash;18! Come out and join us as well as many other awesome Free Software projects all focused on the graphics community!  We (PIXLS) will be conducting photowalks and meet-ups the Thursday before LGM kicks off as well!</p>
<p>Oh, and I finally did convince Mairi to join us through the weekend to model for us as needed.  She’s super awesome and worth raising a glass to/with!  Even more reason to come out and join us!</p>
<figure>
<img src="https://pixls.us/blog/2016/03/playraw-again/Mairi Hedcut.jpg" alt='Mairi Deux'>
</figure>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Shimming an Adapter to be Parallel ]]></title>
            <link>https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/</guid>
            <pubDate>Fri, 11 Mar 2016 19:03:48 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/carvac-lede.jpg" /><br/>
                 <h1>Shimming an Adapter to be Parallel</h1>  
                 <h2>Achieving perfect infinity focus</h2>   
<p>Some of you may know I exclusively use Contax manual focus lenses on my Canon cameras. I have had one reliable adapter from the start that just happened to be perfect in every way: it’s perfectly parallel, it lets my lenses focus <em>exactly</em> to infinity, and none of my lenses hit the mirror on my 5D.</p>
<p>However, swapping adapters between cameras gets mighty tedious, so recently I have been trying a variety of different adapters for my cameras, spanning several quality tiers from the cheapest ($15) up to the most expensive ($70).</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/39cc6bc295d7b8fb61f7f30bddb439236c3c07ba.jpg" alt='39cc6bc295d7b8fb61f7f30bddb439236c3c07ba.jpg'>
</figure>

<p>However, I wasn’t satisfied with any of them. To ensure that adapted lenses can focus to infinity even with manufacturing tolerances, adapters are made thinner than necessary. This means that they focus <em>past</em> infinity, and with some lenses the mirror of my 5D would hit the back of the lens, requiring me to wiggle it to free the mirror after taking a photo.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/e2d3556dfa31bafeebe55be3503cd31d320ca418.jpg" alt='e2d3556dfa31bafeebe55be3503cd31d320ca418.jpg'>
</figure>

<p>I measured my fancier Fotodiox Pro adapter and found that not only was it too thin, it was unevenly thick: the top was 8 thousandths of an inch too thin, the bottom right was 2 thousandths of an inch too thin, and the bottom left was exactly the right thickness.</p>
<p>I decided I could do something about it.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/c8f2904056b5956c424217eac2e5ff8c071bcd35.jpg" alt='c8f2904056b5956c424217eac2e5ff8c071bcd35.jpg'>
</figure>

<p>I bought some shim stock from McMaster-Carr, plastic and 2 thousandths of an inch thick, figuring I might be able to fold it to build up thickness if necessary. (Spoiler: it does fold.) It comes as a giant five-by-twenty-inch sheet, but you’ll only need the tiniest amount of it.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/9e62a1fa5ec3df578b5068e04c06bf70826cea6c.jpg" alt='9e62a1fa5ec3df578b5068e04c06bf70826cea6c.jpg'>
</figure>

<p>Then I went about removing the screws that hold the two sides together.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/23fcb9581ed7ba5b4b1ab8dc8f6d6abbd1b1edd5.jpg" alt='23fcb9581ed7ba5b4b1ab8dc8f6d6abbd1b1edd5.jpg'>
</figure>

<p>The screws are incredibly small.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/3328b75d620272e42a07e6d923012e762f244736.jpg" alt='3328b75d620272e42a07e6d923012e762f244736.jpg'>
</figure>

<p>Here you can see that there are only three points on the ring that actually control the thickness; I point to one with the scissors. I had to be careful to measure the thickness only between the screws, which was challenging because the EF mount diameter is larger than the C/Y mount diameter, leaving only the slightest overlap between the outside of the C/Y registration surface and the inside of the EF mount.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/630d554c266458e194fa65c77c21d00b2426cfe7.jpg" alt='630d554c266458e194fa65c77c21d00b2426cfe7.jpg'>
</figure>

<p>Next I just cut a narrow strip out of this piece of shim stock using scissors, and put slits in it so it could fold more easily.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/bc177c29ec559927f3f1b8df373a53dea4d2270a.jpg" alt='bc177c29ec559927f3f1b8df373a53dea4d2270a.jpg'>
</figure>

<p>The right hand shim is folded in the shape of a W, and the left hand shim is only one layer.</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/b7b673db42db682c8681e11363500892230d11f6.jpg" alt='b7b673db42db682c8681e11363500892230d11f6.jpg'>
</figure>

<p>The thicker shim went on the top, and the thinner shim went on the bottom-right.</p>
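The layer counts follow directly from the measurements: each fold adds one 2-thou layer. Here is a minimal sketch of that arithmetic (my own illustration; the function name and rounding are assumptions, not from the post):

```python
def shim_layers(deficit_thou: float, stock_thou: float = 2.0) -> int:
    """Layers of shim stock needed to make up a measured thickness
    deficit; both values are in thousandths of an inch."""
    return max(1, round(deficit_thou / stock_thou))

# Top of the adapter was 8 thou too thin: a W-fold, i.e. 4 layers.
print(shim_layers(8))
# Bottom-right was 2 thou too thin: a single layer suffices.
print(shim_layers(2))
```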
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/2c15b643aabc97d65f6fce6547d80e769391d70c.jpg" alt='2c15b643aabc97d65f6fce6547d80e769391d70c.jpg'>
</figure>

<p>Put the ring back on, and then…</p>
<figure>
<img src="https://pixls.us/blog/2016/03/shimming-an-adapter-to-be-parallel/201a553a455b9780fc4120632b4db51bb2bf3a6c.jpg" alt='201a553a455b9780fc4120632b4db51bb2bf3a6c.jpg'>
</figure>

<p>Reinstall the screws.</p>
<p>Test your lenses for infinity focus and, if applicable, mirror slap, and rejoice if they’re good!</p>
<hr>
<p>If you don’t have a perfect adapter as a reference for the proper thickness, you can first adjust the adapter to be perfectly even thickness all the way around, and then you can add thickness uniformly until your lenses just barely focus to infinity. It might be time consuming, but it’s very rewarding being able to trust the infinity stop on your lenses.</p>
<p>This method isn’t only applicable to the two-part SLR-&gt;SLR Fotodiox adapters; it should also work for SLR or rangefinder to mirrorless adapters as well.</p>
<p>I’ve seen it written that you can’t be sure whether or not your adapters are even thickness all the way around, but with this technique, you can <em>make</em> sure that your adapters are perfect.</p>
<hr>
<p><em>Carlo originally posted this as a thread on the forums but I thought it would be useful as a post.  He has graciously allowed us to re-publish it here. <strong>–Pat</strong></em></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ jpeg2RAW Guest Spot ]]></title>
            <link>https://pixls.us/blog/2016/02/jpeg2raw-guest-spot/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/02/jpeg2raw-guest-spot/</guid>
            <pubDate>Sat, 20 Feb 2016 19:54:54 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/02/jpeg2raw-guest-spot/andabata-tiger-beetle.jpg" /><br/>
                 <h1>jpeg2RAW Guest Spot</h1>  
                 <h2>An interview! LGM update! And Github?</h2>   
<p><a href="http://www.jpeg2raw.com/your-jpeg2raw-host/">Mike Howard</a>, the host and creator of the <a href="http://www.jpeg2raw.com/">jpeg2RAW podcast</a>, reached out to me last week to see if I might be able to come on the show to talk about Free Software photography and what we’ve been up to here.
One of the primary reasons for creating this site was to raise awareness of the Free Software community among a wider audience.</p>
<p><em>So this is a great opportunity for us to expose ourselves!</em></p>
<!-- more -->
<h2 id="exposing-ourselves"><a href="#exposing-ourselves" class="header-link-alt">Exposing Ourselves</a></h2>
<p>The podcast airs <strong>live</strong> this Tuesday, February 23<sup>rd</sup> at 8PM Eastern (-0500). You can join us at the <a href="http://www.jpeg2raw.com/live/">jpeg2RAW live podcast page</a>!
Mike has the live feed available to watch on that page and also has a chat server set up so viewers can interact with us live during the broadcast.</p>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/SZ2jPqWXClQ" frameborder="0" allowfullscreen></iframe>
</div>

<p>If you are free on Tuesday night then come on by and join us! I’ll be happy to field any questions you want answered (and that Mike asks) and will do my best to not embarrass myself (or our community). If you would like to make sure I address something in particular (or that I just don’t forget something), I also have a <a href="https://discuss.pixls.us/t/interview-for-jpeg2raw-podcast/871/1">thread on discuss</a> where you can make sure I know it.</p>
<p>I’m also looking for community members to submit some photos to help highlight our work and what’s possible with Free Software. Feel free to link them in the <a href="https://discuss.pixls.us/t/interview-for-jpeg2raw-podcast/871/1">same thread as above</a>.  I’ve already convinced <a href="https://kees.nl/">andabata</a> to point us to some of his great macro shots (like that awesome lede image) and I’ll be submitting a few of my own images as well.  If you have some works that you’d like to share please let me know!</p>
<h3 id="in-case-you-miss-it"><a href="#in-case-you-miss-it" class="header-link-alt">In Case You Miss It</a></h3>
<p>Mike has all of his prior podcasts archived on <a href="http://www.jpeg2raw.com/podcasts/">his <em>Podcasts</em> page</a>. So if you miss the live show it looks like you’ll be able to catch up later at your convenience.</p>
<h2 id="lgm-update"><a href="#lgm-update" class="header-link-alt">LGM Update</a></h2>
<p>As <a href="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/">mentioned previously</a> we are heading to London for Libre Graphics Meeting 2016! We’ve got a flat rented for a great crew to be able to stay together and we’re on track for a <a href="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/#pixls-meet-up">PIXLS meet up</a> before LGM!</p>
<p>Speaking of people, I’m looking forward to spending some time with some great folks again this year!  We’ve got Tobias, Johannes, and Pascal coming (I’m not sure that Simon, top below, will make it) from <a href="http://www.darktable.org">darktable</a>, DrSlony and qogniw from <a href="http://www.rawtherapee.com">RawTherapee</a>, and <a href="https://pixls.us/articles/a-blended-panorama-with-photoflow/">Andrea Ferrero</a>, creator of <a href="http://photoflowblog.blogspot.com/">PhotoFlow</a>; even <a href="https://discuss.pixls.us/users/ofnuts/activity">Ofnuts</a> (how cool is that?) may make it out!</p>
<figure>
<a href="https://www.flickr.com/photos/patdavid/14050852344/in/dateposted-public/" title="Darktable II"><img src="https://farm3.staticflickr.com/2930/14050852344_d7fe5dd73d.jpg" width="500" height="500" alt="Darktable II"></a>
<figcaption>
Pascal, Johannes, and Tobias (left to right, bottom row) will be there!
</figcaption>
</figure>

<p>We’ve also already had a great response so far on <a href="https://pledgie.com/campaigns/30905">our Pledgie campaign</a>. The campaign is still running if you want to help out!</p>
<p><a href='https://pledgie.com/campaigns/30905'>
<img alt='Click here to lend your support to: PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com !' src='https://pledgie.com/campaigns/30905.png?skin_name=chrome' border='0' style='width: initial;'>
</a></p>
<p>If anyone is thinking they’d like to make it out to join us, please let me know as soon as possible so we can plan for space!</p>
<figure>
<a href="https://www.flickr.com/photos/patdavid/16706076622/in/album-72157632799856846/" title="Mairi (Further)"><img src="https://farm9.staticflickr.com/8613/16706076622_7217ced886_c.jpg" width="622" height="800" alt="Mairi (Further)"></a>
<figcaption>
Looks like <a href="https://www.flickr.com/photos/patdavid/albums/72157632799856846">Mairi</a> will be joining us!
</figcaption>
</figure>

<p>My friend and model Mairi will also be making it out for the meeting. She’ll be on hand to help us practice lighting setups, model interactions, and will likely be shooting right along with the rest of us as well!</p>
<p>I’ll also be assembling slides for my presentation during LGM.  I’ve got a 20 minute time slot to talk about the community we’ve been building here and the neat things our members have been up to (<a href="https://github.com/CarVac/filmulator-gui">Filmulator</a>, <a href="http://photoflowblog.blogspot.com/">PhotoFlow</a>, and more).</p>
<p>Speaking of slides and sharing information…</p>
<h3 id="github-organization"><a href="#github-organization" class="header-link-alt">Github Organization</a></h3>
<p>I’ve set up a <a href="https://github.com/pixlsus">Github Pixls organization</a> so that we can begin to share various things. This came about after talking with <a href="https://discuss.pixls.us/users/paperdigits/activity">@paperdigits</a> on the post about the upcoming podcast at jpeg2RAW.  We were talking about ways to <a href="https://discuss.pixls.us/t/pixls-us-github-organization/893">share information and assets</a> for creating/delivering presentations about Free Software photography.</p>
<p>At the moment there is only the single repository <a href="https://github.com/pixlsus/Presentations"><em>Presentations</em></a> as we are figuring out structure. I’ve uploaded my slides and notes from the <a href="https://github.com/pixlsus/Presentations/tree/master/LGM2015_State_Of">LGM2015 <em>State of the Libre Graphics</em></a> presentation announcing PIXLS. If you’re on <a href="http://www.github.com">Github</a> and want to join us just let me know!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ HDR Photography with Free Software (LuminanceHDR) ]]></title>
            <link>https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/</link>
            <guid isPermaLink="true">https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/</guid>
            <pubDate>Tue, 26 Jan 2016 19:57:59 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/HDRLayers.jpg" /><br/>
                 <h1>HDR Photography with Free Software (LuminanceHDR)</h1>  
                 <h2>A first approach to creating and mapping HDR images</h2>   
                <p>I have a mostly love/hate relationship with HDR images (well, tonemapping HDR more than the HDR themselves).
I think the problem is that it’s very easy to create really bad HDR images that the photographer <em>thinks look really good</em>.
I know because I’ve been there:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/226464161_2a792c925d_z.jpg" alt="Hayleys - Mobile, AL" height="369" width="640">
<figcaption>Don’t judge me, it was a weird time in my life…</figcaption>
</figure> 

<p>The best term I’ve heard used to describe over-processed images created from an HDR is <i>“clown vomit”</i> (which would also be a great name for a band, by the way).
They are easily spotted with some tell-tale signs such as the halos at high-contrast edges, the unrealistically hyper-saturated colors that make your eyes bleed, and a general affront to good taste.
In fact, while I’m putting up embarrassing images that I’ve done in the past, here’s one that scores on all the points for a crappy image from an HDR:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/210251868_26c6041c62_o.jpg" alt="Tractor" width="600" height="874">
<figcaption><a target="_blank" href="http://www.youtube.com/watch?v=juFZh92MUOY">“My Eyes! The goggles do nothing!”</a></figcaption>
</figure> 

<p>Crap-tastic!
Of course, the allure here is that it provides first-timers a glimpse into something new, and they feel the desire to crank every setting up to 11 with no regard for good taste or aesthetics.</p>
<p>If you take anything away from this post, let it be this:  <strong>“Turn it <em>DOWN</em>“</strong>. 
If it looks good to you, then it’s too much. ;)</p>
<!-- more -->
<p class='aside' style='font-size: 1rem;'>HDR lightprobes are used in movie fx compositing to ensure that the lighting on CG models matches exactly the lighting for a live-action scene.  By using an HDR lightprobe, you can match the lighting exactly to what is filmed.
<br>
<br>
I originally learned about, and used, HDR images when I would use them to illuminate a scene in <a href="http://www.blender.org/">Blender</a>.  In fact, I will still often use <a href="http://www.pauldebevec.com/Probes/">Paul Debevec’s Uffizi gallery lightprobe</a> to light scene renders in Blender today.</p>

<p>For example, you may be able to record 10&ndash;12 stops of light information using a modern camera.  Some old films could record 12&ndash;13 stops of light, while your eyes can see approximately 14 stops.</p>
<p>HDR images are intended to capture <em>more</em> than this number of stops.  (Depending on your patience, significantly more in some cases).</p>
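To put numbers on it: each stop is a doubling of light, so a scene’s contrast ratio converts to stops with a base-2 logarithm. A quick sketch (my own illustration, not part of the original article):

```python
import math

def stops(contrast_ratio: float) -> float:
    """Dynamic range in photographic stops (EV) for a given
    brightest-to-darkest contrast ratio; each stop doubles the light."""
    return math.log2(contrast_ratio)

# A 1000:1 scene spans roughly 10 stops, near the limit of a single
# raw exposure; a ~16,000:1 scene (about 14 stops) is where HDR
# capture starts to pay off.
print(round(stops(1000), 1))
print(round(stops(16384), 1))
```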
<p>I can go on a bit about the technical aspects of HDR imaging, but I won’t.  It’s boring.  Plus, I’m sure you can <a href="http://en.wikipedia.org/wiki/High-dynamic-range_imaging">use Wikipedia</a>, or <a href="http://lmgtfy.com/?q=HDR">Google</a> yourselves. :)
In the end, just realize that an HDR image is simply one that stores more light information than your camera sensor can capture in one shot.</p>
<h2 id="taking-an-hdr-image-s-">Taking an HDR image(s)<a href="#taking-an-hdr-image-s-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>More light information than my camera can record in one shot?<br>Then how do I take an HDR photo?</p>
<p>You don’t.</p>
<p>You take multiple photos of a scene, and <em>combine</em> them to create the final HDR image.
Before I get into the process of capturing these photos to create an HDR with, consider something:</p>
<h3 id="when-why-to-use-hdr">When/Why to use HDR<a href="#when-why-to-use-hdr" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>An HDR image is most useful to you when the scene you want to capture has bright and dark areas that fall outside the range of a single exposure, <em>and you feel that there is something important enough outside that range to include in your final image</em>.</p>
<p>That last part is important, because sometimes it’s OK to have some of your photo be too dark for details (or too light).  This is an aesthetic decision of course, but keep it in mind…</p>
<p>Here’s what happens.  Say you have a pretty scene you would like to photograph.  Maybe it’s the <a href="http://www.flickr.com/photos/jp_photo_online/7369521956/">Lower Chapel of Sainte Chapelle</a>:</p>
<figure class='big-vid'>
<a href="http://www.flickr.com/photos/jp_photo_online/7369521956/">
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/7369521956_95d6a3003c_k.jpg" alt="Sainte Chapelle Lower Chapel" height="640" width="960">
</a>
<figcaption><a href="http://www.flickr.com/photos/jp_photo_online/7369521956/">Sainte Chapelle Lower Chapel</a> by <a href="http://www.flickr.com/photos/jp_photo_online/with/7369521956/">iwillbehomesoon</a> on Flickr (<a href='https://creativecommons.org/licenses/by-nc-sa/2.0/'><span class='cc'>cbsna</span></a>)</figcaption>
</figure> 

<p>You may setup to take the shot, but when you are setting your exposure you may run into a problem.  To expose for the brighter parts of the image means that the shadows fall to black too quickly, crushing out the details there.</p>
<p>If you expose for the shadows, then the brighter parts of the image quickly clip beyond white.</p>
<p>The use case for an HDR is when you can’t find a happy medium between those two exposures.</p>
<p>A similar situation comes up when you want to shoot any ground details against a bright sky, but you want to keep the details in both.  Have a look at this example:</p>
<figure class='big-vid'>
<a href="http://www.flickr.com/photos/fredvdd/236863839/">
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/236863839_8722c5f2dd_b.jpg" alt="HDR Layers by dontmindme, on Flickr" height='640' width="960">
</a>
<figcaption>
<a href="http://www.flickr.com/photos/fredvdd/236863839/">HDR Layers</a> 
by <a href="http://www.flickr.com/photos/fredvdd">dontmindme</a>, on Flickr 
(<a href="https://creativecommons.org/licenses/by-nc-sa/2.0/" title="Creative Commons, BY-NC-SA"><span class='cc'>cbna</span></a>)
</figcaption>
</figure>

<p>In the first column, if you expose for the ground, the sky blows out.</p>
<p>In the second, you can drop the exposure to bring the sky in a bit, but the ground is getting too dark.</p>
<p>In the third, the sky is exposed nicely, but the ground has gone to mostly black.</p>
<p>If you wanted to keep the details in the sky and ground at the same time, you might use an HDR (you could technically also use exposure blending with just a couple of exposures and blend them by hand, but I digress) to arrive at the last column.</p>
<h3 id="shooting-images-for-an-hdr">Shooting Images for an HDR<a href="#shooting-images-for-an-hdr" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Many cameras have an auto-bracketing feature that will let you quickly shoot a number of photos while changing the exposure value (EV) of each.  You can also do this by hand simply by changing one parameter of your exposure each time.</p>
<p>You can technically change any of ISO, shutter speed, or aperture to modify the exposure, but <strong>I’d recommend you change only the shutter speed</strong> (or EV value when in Aperture Priority modes).</p>
<p>The reason is that changing the shutter speed will not alter the depth-of-field (DoF) of your view or introduce any extra noise the way changing the aperture or ISO would.</p>
<p>When considering your scene, you will also want to try to stick to static scenes if possible.
The reason is that objects that move around (swaying trees, people, cars, fast moving clouds, etc.) could end up as ghosts or mis-alignments in your final image.
So as you’re starting out, choose your scene to help you achieve success.</p>
<p>Set up your camera someplace very steady (like a tripod), dial in your exposure and take a shot.
If you let your camera meter your scene for you then this is a good middle starting point.</p>
<p>For example, if you setup your camera and meter your scene, it might report a <sup>1</sup>⁄<sub>160</sub> second exposure.  This is our starting point (<strong>0EV</strong>).</p>
<figure>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/B1010235.jpg" width='600' height='452'>
<figcaption>The base exposure, <sup>1</sup>&frasl;<sub>160</sub> s, 0EV</figcaption>
</figure>

<p>To capture the lower values, just cut your shutter speed in half ( <sup>1</sup>&frasl;<sub>80</sub> second, +1EV), and take a photo.  Repeat if you’d like ( <sup>1</sup>&frasl;<sub>40</sub> second, +2EV).</p>
<figure>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/B1010234.jpg" width="300" height="226" style='display:inline; width: 300px;'>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/B1010233.jpg" width="300" height="226" style='display:inline; width: 300px; margin-left: 0.5rem;'>
<figcaption>
<sup>1</sup>⁄<sub>80</sub> second, +1EV (left), <sup>1</sup>⁄<sub>40</sub> second, +2EV (right)
</figcaption>
</figure>

<p>To capture the upper values, just double your starting point shutter speed ( <sup>1</sup>⁄<sub>320</sub>, -1EV) and take a photo. Repeat if you’d like again ( <sup>1</sup>⁄<sub>640</sub>, -2EV).</p>
<figure>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/B1010236.jpg" width="300" height="226" style='display:inline; width: 300px;'>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/B1010237.jpg" width="300" height="226" style='display:inline; width: 300px; margin-left:0.5rem;'>
<figcaption>
<sup>1</sup>⁄<sub>320</sub>, -1EV (left), <sup>1</sup>⁄<sub>640</sub>, -2EV (right)
</figcaption>
</figure>

<p>This will give you 5 images covering a range of -2EV to +2EV:</p>
<style>
table#EVs {
    border-collapse: collapse;
    border: solid 1px gray;
    margin-left: auto;
    margin-right: auto;
} 

#EVs th, #EVs td {
    border: solid 1px gray;
    padding: 0.5rem 0.5em;
    text-align:center;
}
</style>

<table id="EVs"><tbody><tr><th>Shutter Speed</th><th>Exposure Value</th></tr>
<tr><td><sup>1</sup>⁄<sub>640</sub></td><td>-2EV</td></tr>
<tr><td><sup>1</sup>⁄<sub>320</sub></td><td>-1EV</td></tr>
<tr><td><sup>1</sup>⁄<sub>160</sub></td><td>0EV</td></tr>
<tr><td><sup>1</sup>⁄<sub>80</sub></td><td>+1EV</td></tr>
<tr><td><sup>1</sup>⁄<sub>40</sub></td><td>+2EV</td></tr>
</tbody></table>

<p>Your values don’t have to be exactly 1EV apart each time; LuminanceHDR is usually smart enough to figure out what’s going on from the EXIF data in your images.  I chose full EV stops here to simplify the example.</p>
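The halve-and-double pattern above is simple enough to generate programmatically. A small sketch that reproduces the table from a metered base exposure (illustrative only; the <code>bracket</code> helper is my own, not part of any camera or LuminanceHDR API):

```python
from fractions import Fraction

def bracket(base_shutter: Fraction, ev_range: int = 2):
    """Return (shutter_time, EV) pairs around a metered base exposure.
    Each +1 EV doubles the exposure time; each -1 EV halves it."""
    return [(base_shutter * Fraction(2) ** ev, ev)
            for ev in range(-ev_range, ev_range + 1)]

# A metered base of 1/160 s yields the five-shot -2EV..+2EV sequence.
for shutter, ev in bracket(Fraction(1, 160)):
    print(f"1/{shutter.denominator} s  ->  {ev:+d}EV")
```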
<p>So armed with your images, it’s time to turn them into an HDR image!</p>
<h2 id="creating-an-hdr-image">Creating an HDR Image<a href="#creating-an-hdr-image" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>You kids have it too easy these days.  We used to have to bring all the images into Hugin and align them before we could save an HDR/EXR file.  Nowadays you’ve got a phenomenal piece of Free/Open Source Software to handle this for you:</p>
<p><a href="http://qtpfsgui.sourceforge.net/" style="font-size:1.5rem;">LuminanceHDR</a><br>(Previously qtpfsgui. Seriously.)</p>
<p>After installing it, open it up and hit “<strong>New HDR Image</strong>“:</p>
<figure>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-Open.png" alt="LuminanceHDR startup screen" width='475' height='263'>
</figure>

<p>This will open up the <em>“HDR Creation Wizard”</em> that will walk you through the steps of creating the HDR.  The splash screen notes a couple of constraints.</p>
<figure>
<img border="0" src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-wizard-1.png" alt="LuminanceHDR wizard splash screen" width='600' height='358'>
</figure>

<p>On the next screen, you’ll be able to load up all of the images in your stack.  Just hit the big green “<b style="color:green; font-size:1.5em;">+</b>“ button in the middle, and choose all of your images:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-wizard-load.png" alt="LuminanceHDR load wizard" width='600' height='358'>
</figure>

<p>LuminanceHDR will load each of your files and investigate them to try to determine the EV value for each one.  It usually does a good job of this on its own, but if there is a problem you can always manually specify the actual EV value for each image.</p>
<p>Also notice that because I only adjusted my shutter speed by half or double, each of the relative EV values is neatly spaced 1EV apart.  They don’t have to be, though.  I could have just as easily done &frac12; EV or &frac13; EV steps as well.</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-wizard-loaded.png" alt="LuminanceHDR creation wizard" width='600' height='358'>
</figure>
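<p>If you’re curious where those relative EV numbers come from: at a fixed aperture and ISO, EV is just a base-2 logarithm of the exposure settings, so doubling the shutter time moves you exactly one stop.  A quick sketch in plain Python (nothing LuminanceHDR-specific here):</p>

```python
import math

def exposure_value(aperture: float, shutter_s: float) -> float:
    """EV (at ISO 100) = log2(N^2 / t); doubling the shutter time drops EV by 1."""
    return math.log2(aperture ** 2 / shutter_s)

# A bracketed stack shot at f/8, doubling the shutter time each frame:
shutters = [1 / 32, 1 / 16, 1 / 8, 1 / 4]
evs = [exposure_value(8.0, t) for t in shutters]
steps = [evs[i] - evs[i + 1] for i in range(len(evs) - 1)]
print(steps)  # each successive frame is exactly 1 EV apart
```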

<p>If there is even the remotest question about how well your images will line up, I’d recommend checking the box for <em>“Autoalign images”</em> and letting <a href="http://hugin.sourceforge.net/">Hugin’s </a>align_image_stack do its magic.
You really need all of your images to line up perfectly for the best results.</p>
<p>Hit “<strong>Next</strong>“, and if you are aligning the images be patient.
Hugin’s align_image_stack will find control points between the images and remap them so they are all aligned.
When it’s done you’ll be presented with some editing tools to tweak the final result before the HDR is created.</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-wizard-editing.png" alt="LuminanceHDR Creation Wizard" width='600' height='355'>
</figure>

<p>You are basically looking at a difference view between images in your stack at the moment.  You can choose which two images to difference compare by choosing them in the list on the left.  You can now shift an image horizontally/vertically if it’s needed, or even generate a ghosting mask (a mask to handle portions of an image where objects may have shifted between frames).</p>
<p>If you are careful, and there’s not much movement in your image stacks, then you can safely click through this screen.  Hit the “<strong>Next</strong>“ button.</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-wizard-final.png" alt="LuminanceHDR Creation Wizard" width='600' height='403'>
</figure>

<p>This is the final screen of the HDR Creation Wizard.
There are a few different ways to calculate the pixel values that make up an HDR image, and this is where you can choose which ones to use.
For the most part, people far smarter than I had a look at a bunch of creation methods, and created the predefined profiles.
Unless you know what you’re doing, I would stick with those.</p>
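<p>For the curious: all of these creation models do some variant of the same thing — estimate scene radiance from each frame (pixel value divided by exposure time), then average the estimates with a weight that trusts mid-tones over clipped shadows and highlights.  Here is a rough sketch of that idea in Python/NumPy; the “hat” weighting is borrowed from the classic Debevec &amp; Malik approach, and LuminanceHDR’s actual models differ in their weighting and response-curve handling:</p>

```python
import numpy as np

def merge_hdr(frames, exposures):
    """Merge linear LDR frames (uint8) into a radiance map.
    Each frame contributes pixel/exposure, weighted by a 'hat'
    function so clipped shadows/highlights count for little."""
    radiance = np.zeros(frames[0].shape, dtype=np.float64)
    weights = np.zeros_like(radiance)
    for frame, t in zip(frames, exposures):
        z = frame.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)   # peaks at mid-grey, 0 at 0 and 255
        radiance += w * z / t
        weights += w
    return radiance / np.maximum(weights, 1e-8)
```

Each well-exposed pixel recovers roughly the same radiance regardless of which frame it came from, which is why the merge works at all.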
<p>Hit “<strong>Finish</strong>“, and you’re all done!</p>
<p>You’ll now be presented with your HDR image in LuminanceHDR, ready to be tonemapped so us mere mortals can actually make sense of the HDR values present in the image.
At this point, I would hit the “Save As…” button, and save your work.</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-Main.png" alt="LuminanceHDR Main" width='600' height='340'>
</figure>



<h2 id="tonemapping-the-hdr">Tonemapping the HDR<a href="#tonemapping-the-hdr" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>So now you’ve got an HDR image.  Congratulations!</p>
<p>The problem is, you can’t really view it with your puny little monitor.</p>
<p>The reason is that the HDRi now contains more information than can be represented within the limited range of your monitor (and eyeballs, likely).  So we need to find a way to represent all of that extra light-goodness so that we can actually view it on our monitors.  This is where <a href="http://en.wikipedia.org/wiki/Tone_mapping">tonemapping </a>comes in.</p>
<p>We basically have to take our HDRi and use a method for compressing all of that radiance data down into something we can view on our monitors/prints/eyeballs.  We need to create a Low Dynamic Range (LDR) image from our HDR.</p>
<p>Yes - we just went through all the trouble of stacking together a bunch of LDR images to create the HDRi, and now we’re going <i>back to LDR </i>?  We are - but this time we are armed with <b><i>way </i></b>more radiance data than we had to begin with!</p>
<p>The question is, how do we represent all that extra data in an LDR?  Well, there are quite a few different ways.  LuminanceHDR provides nine different tonemapping operators (TMOs) to represent your HDRi as an LDR image:</p>
<ul>
<li><a href="#mantiuk-06">Mantiuk ‘06</a></li>
<li><a href="#mantiuk-08">Mantiuk ‘08</a></li>
<li><a href="#fattal">Fattal</a></li>
<li><a href="#drago">Drago</a></li>
<li><a href="#durand">Durand</a></li>
<li><a href="#reinhard-02">Reinhard ‘02</a></li>
<li><a href="#reinhard-05">Reinhard ‘05</a></li>
<li><a href="#ashikhmin">Ashikhmin</a></li>
<li><a href="#pattanaik">Pattanaik</a></li>
</ul>
<p>Just a small reminder, there’s a ton of math involved in how to map these values to an LDR image.
I’m going to skip the math.
The <a href="http://www.mpi-inf.mpg.de/resources/tmo/">references are out there</a> if you want them.</p>
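<p>If you’d like just a taste of what a TMO actually computes, the simplest possible global operator fits in a few lines: compress the luminance with L/(1+L), then re-attach color using the common ratio trick C<sub>out</sub> = (C<sub>in</sub>/L<sub>in</sub>)<sup>s</sup>·L<sub>out</sub>.  This is a toy sketch, not what any of the operators below actually do, but that <em>s</em> exponent is essentially what the various “Saturation Factor” parameters control:</p>

```python
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luminance weights

def naive_tonemap(hdr_rgb, saturation=1.0):
    """Toy global operator: L/(1+L) luminance compression plus
    ratio-based colour re-attachment (Cout = (Cin/Lin)**s * Lout)."""
    lum = np.maximum(hdr_rgb @ REC709, 1e-6)
    ldr_lum = lum / (1.0 + lum)              # always lands in [0, 1)
    ratio = hdr_rgb / lum[..., None]
    return np.clip(ratio ** saturation * ldr_lum[..., None], 0.0, 1.0)
```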
<p>I’ll try to give examples of each of the operators below, and a little comment here and there.  If you want more information, you can always check out the list on the <a href="http://osp.wikidot.com/parameters-for-photographers">Open Source Photography wikidot page</a>.</p>
<p>Before we get started, let’s have a look at the window we’ll be working in:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Luminance-TMO.png" alt="LuminanceHDR Main Window" width='960' height='544'>
</figure>

<p><span style="color:#00FF00;">Tonemap</span> is the section where you can choose which TMO you want to use; it exposes the various parameters you can change for each TMO.  This is the section where you’ll likely spend most of your time, tweaking the settings for whichever TMO you decide to play with.</p>
<p><span style="color:#00FFFF;">Process</span> gives you two things you’ll want to adjust.  The first is the size of the output that you want to create (<i>Result Size</i>).  While you are trying things out and dialing in settings you’ll probably want to use a smaller size here (some operators will take a while to run against the full resolution image).  The second is any pre-gamma you want to apply to the image.  I’ll talk about this setting a bit later on.</p>
<p>Oh, and this section also has the “Tonemap” button to apply your settings and generate a preview.  I’ll also usually keep the “Update current LDR” checked while I rough in parameters.  When I’m fine-tuning I may uncheck this (it will create a new image every time you hit the “Tonemap” button).</p>
<p><span style="color:#FF0000;">Results</span> are shown in this big center section of the window.  The result will be whatever <i>Result Size</i> you set in the previous section.</p>
<p><span style="color:#0000FF;">Previews</span> are automatically generated and shown in this column for each of the TMOs.  If you click on one, it will automatically apply that TMO to your image and display it (at a reduced resolution - I think the default is 400px, but you can change it if you want).  It’s a nice way to quickly get an overview of what all the different TMOs are doing to your image.</p>
<p>Ok, with that out of the way, let’s dive into the TMOs and have a look at what we can do.  I’m going to try to aim for a reasonably realistic output here that (hopefully) won’t make your eyeballs bleed.  No promises, though.</p>
<p class='aside'>
<span>Need an HDR to follow along?</span>
I figured it might be more fun (easier?) to follow along if you had the same file I do.
<br>

So here it is, don’t say I never gave you anything (this HDR is licensed <a href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">cc-by-nc-sa</a> by me):
<br>

<span>
<a href="https://docs.google.com/uc?export=download&amp;id=0B21lPI7Ov4CVMTJwSS14aGtCc1U">Download from Google Drive (41MB .hdr)</a>
</span>
</p>


<p>Another note - all of the operators can have their results tweaked by modifying the pre-gamma value ahead of time.  This is applied to the image <i>before </i>the TMO runs, and will make a difference in the final output.  Usually pushing the pre-gamma value down will increase contrast/brightness in the image, while increasing it will do the opposite.  I find it better to start with pre-gamma set to 1 while I experiment; just remember that it’s another factor you can use to modify your final result.</p>
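<p>As far as I can tell, pre-gamma behaves like a plain power law applied to the normalized radiance before the operator runs.  That exact formula is my assumption (I haven’t checked the LuminanceHDR source), but a sketch of the assumed behavior looks like this:</p>

```python
import numpy as np

def apply_pregamma(hdr, pregamma=1.0):
    """ASSUMED form of pre-gamma: a plain power law on normalized radiance.
    pregamma < 1 lifts values below 1.0 (brighter), pregamma > 1
    pushes them down; 1.0 leaves the image untouched."""
    return np.maximum(hdr, 0.0) ** pregamma
```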
<h3 id="mantiuk-06">Mantiuk ‘06<a href="#mantiuk-06" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I’m starting with this one because it’s the first in the list of TMOs.  Let’s see what the defaults from this operator look like against our base HDRi:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_default.jpg" alt="Mantiuk 06 default" width='600' height='452'>
<figcaption>
Default Mantiuk ‘06 applied
</figcaption>
</figure>

<p>By default Mantiuk ‘06 produces a muted color result that seems pleasing to my eye.  Overall the image feels like it’s almost “dirty” or “gritty” with these results.  The default settings produce a bit of extra local contrast boosting as well.</p>
<p>Let’s see what the parameters do to our image.</p>
<h4 id="contrast-factor">Contrast Factor<a href="#contrast-factor" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default factor is 0.10.</p>
<p>Pushing this value down to as low as 0.01 produces just a slight increase in contrast across the image from the default.  Not that much overall.</p>
<p>Pushing this value up, though, will tone down the contrast overall.  I think this helps to add some moderation to the image, as hard contrasts can be jarring to the eyes sometimes.  Here is the image with only the <i>Contrast Factor</i> pushed up to 0.40:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_contrast_mapping_0.4_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk06_default.jpg" alt='Mantiuk 06 Contrast Factor 0.4' width='600' height='452'>
<figcaption>
Mantiuk ‘06 - Contrast Factor increased to 0.40<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="saturation-factor">Saturation Factor<a href="#saturation-factor" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 0.80.</p>
<p>This factor just scales the saturation in the image, and behaves as expected.  If you find the colors a bit muted using this TMO, you can bump this value a bit (don’t get crazy).  For example, here is the <em>Saturation Factor</em> bumped to 1.10:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_saturation_factor_1.1_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk06_default.jpg" width='600' height='452' alt='Mantiuk 06 Saturation 1.10'>
<figcaption>
Mantiuk ‘06 - Saturation Factor increased to 1.10<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Of course, you can also go the other way if you want to mute the colors a bit more:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_saturation_factor_0.4_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk06_default.jpg" width='600' height='452' alt='Mantiuk 06 Saturation 0.40'>
<figcaption>
Mantiuk ‘06 - Saturation Factor decreased to 0.40<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="detail-factor">Detail Factor<a href="#detail-factor" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 1.0.</p>
<p>The <em>Detail Factor</em> appears to control local contrast intensity.  It gets overpowering very quickly, so make small movements here (if at all).  Here is what pushing the <em>Detail Factor</em> up to 10.0 produces:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_detail_factor_10_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk06_default.jpg" width='600' height='452' alt='Mantiuk 06 Detail Factor' >
<figcaption>
<strong><em>Don’t</em></strong> do this.  Mantiuk ‘06 - Detail Factor increased to 10.0<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="contrast-equalization">Contrast Equalization<a href="#contrast-equalization" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This is supposed to equalize the contrast if there are heavy swings of light/dark across the image on a global scale, but in my example did little to the image (other than a strange lightening in the upper left corner).</p>
<h4 id="my-final-version-2">My Final Version<a href="#my-final-version-2" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>I played a bit starting from the defaults.  First I wanted to push down the contrast a bit to make everything just a bit more realistic, so I pushed <em>Contrast Factor</em> up to 0.30.  I slightly bumped the <em>Saturation Factor</em> to 0.95 as well.</p>
<p>I liked the textures of the tree and house, so I wanted to bring those back up a bit after decreasing the Contrast Factor, so I pushed the <em>Detail Factor</em> up to 5.0.</p>
<p>Here is what I ended up with in the end:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk06_contrast_mapping_0.3_saturation_factor_0.95_detail_factor_5_FINAL.jpg" data-swap-src="untitled_pregamma_1_mantiuk06_default-960.jpg" width='960' height='723' alt='Mantiuk 06 Final Result'>
<figcaption>
My final output (Contrast 0.3, Saturation 0.95, Detail 5.0)<br>
(click to compare to defaults)
</figcaption>
</figure>


<h3 id="mantiuk-08">Mantiuk ‘08<a href="#mantiuk-08" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Mantiuk ‘08 is a global contrast TMO (for comparison, Mantiuk ‘06 uses local contrast heavily).  Being a global operator, it’s very quick to apply.</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk08_default.jpg" alt="Mantiuk 08 default" height='' width=''>
<figcaption>
Default Mantiuk ‘08 applied
</figcaption>
</figure>

<p>As you can see, the effect of this TMO is to compress the dynamic range into an LDR output using a function that operates across the entire image globally.  I think this will produce a more realistic result overall.</p>
<p>The default output is not bad at all, where brights seem appropriately bright, and darks are dark while still retaining details.  It does feel like the resulting output is a little over-sharp to my eye, however.</p>
<p>There are only a couple of parameters for this TMO (unless you specifically override the <em>Luminance Level</em> with the checkbox, Mantiuk ‘08 will automatically adjust it for you):</p>
<h4 id="predefined-display">Predefined Display<a href="#predefined-display" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>There are options for <em>LCD Office, LCD, LCD Bright,</em> and <em>CRT</em> but they didn’t seem to make any difference in my final output at all.</p>
<h4 id="color-saturation-2">Color Saturation<a href="#color-saturation-2" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 1.0.</p>
<p><em>Color Saturation</em> operates exactly how you’d expect.  Dropping this value decreases the saturation, and vice versa.  Here’s a version with the <em>Color Saturation</em> bumped to 1.50:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk08_colorsaturation_1.5_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk08_default.jpg" width='600' height='452'>
<figcaption>
Mantiuk ‘08 - Color Saturation increased to 1.50<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="contrast-enhancement">Contrast Enhancement<a href="#contrast-enhancement" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 1.0.</p>
<p>This will affect the global contrast across the image.  The default seemed to have a bit too much contrast, so it’s worth it to dial this value in.  For instance, here is the <em>Contrast Enhancement</em>  dialed down to 0.51:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_mantiuk08_contrastenhancement_0.51_default.jpg" data-swap-src="untitled_pregamma_1_mantiuk08_default.jpg" width='600' height='452' alt='Mantiuk 08 Contrast Enhancement 0.51'>
<figcaption>
Mantiuk ‘08 - Contrast Enhancement decreased to 0.51<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Compared to the default settings I feel like this operator can work better if the contrast is turned down just a bit to make it all a little less harsh.</p>
<h4 id="enable-luminance-level">Enable Luminance Level<a href="#enable-luminance-level" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This checkbox/slider allows you to manually specify the luminance level in the image.  The problem I ran into was that with this enabled, I couldn’t adjust the luminance far enough to keep bright areas of the image from blowing out.  If I left the default behavior of automatically adjusting luminance, it kept things more under control.</p>
<h4 id="my-final-version-3">My Final Version<a href="#my-final-version-3" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Starting from the defaults, I pushed down the <em>Contrast Enhancement</em> to 0.61 to even out the overall contrast.  I bumped the <em>Color Saturation</em> to 1.10 to bring out the colors a bit more as well.</p>
<p>I also dropped the pre-gamma correction to 0.91 in order to bring back some of the contrast lost from the <em>Contrast Enhancement</em>.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_0.91_mantiuk08_auto_luminancecolorsaturation_1.1_contrastenhancement_0.61_FINAL.jpg" data-swap-src="untitled_pregamma_1_mantiuk08_default-960.jpg" width='960' height='723' alt='Mantiuk 08 final result'>
<figcaption>
My final Mantiuk ‘08 output<br>
(pre-gamma 0.91, Contrast Enhancement 0.61, Color Saturation 1.10)<br>
(click to compare to defaults)
</figcaption>
</figure>



<h3 id="fattal">Fattal<a href="#fattal" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Crap.  Time for this TMO I guess…</p>
<p><strong>THIS</strong> is the TMO responsible for some of the greatest sins of HDR images.
Did you see the first two images in this post?  Those were Fattal.
The problem is that it’s really easy to get stupid with this TMO.</p>
<p>Fattal (like the other local contrast operators) is dependent on the final output size of the image.
When testing this operator, do it at the full resolution you will want to export.
The results will not match up if you change size.
I’m also going to focus on using only the newer v.2.3.0 version, not the old one.</p>
<p>Here is what the default values look like on our image:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_fattal_default.jpg" alt="Fattal default" height='' width=''>
<figcaption>
Default Fattal applied
</figcaption>
</figure>

<p>The defaults are pretty contrasty, and the color seems saturated quite a bit as well.  Maybe we can get something useful out of this operator.  Let’s have a look at the parameters.</p>
<h4 id="alpha">Alpha<a href="#alpha" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 1.00.</p>
<p>This parameter is supposed to be a threshold against which to apply the effect. According to the wikidot, decreasing this value should increase the level of details in the output and vice versa.  Here is an example with the <em>Alpha</em> turned down to 0.25:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_fattal_alpha_0.25_default.jpg" data-swap-src="untitled_pregamma_1_fattal_default.jpg" width='600' height='452'>
<figcaption>
Fattal - Alpha decreased to 0.25<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Increasing the <em>Alpha</em> value seems to darken the image a bit as well.</p>
<h4 id="beta">Beta<a href="#beta" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 0.90.</p>
<p>This parameter is supposed to control the amount of the algorithm applied on the image.  A value of 1 is no effect on the image (straight gamma=1 mapping).  Lower values will increase the amount of the effect.  Recommended values are between 0.8 and 0.9.  As the values get lower, the image gets more cartoonish looking.</p>
<p>Here is an example with <em>Beta</em> dropped down to 0.75:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_fattal_beta_0.75_default.jpg" data-swap-src="untitled_pregamma_1_fattal_default.jpg" width='600' height='452' alt='Fattal Beta 0.75'>
<figcaption>
Fattal - Beta decreased to 0.75<br>
(click to compare to defaults)
</figcaption>
</figure>
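<p>For the math-curious, the heart of Fattal’s published method is a per-pixel attenuation of the luminance <em>gradients</em>: φ(g) = (α/g)·(g/α)<sup>β</sup>, so gradients above the α threshold get compressed (when β &lt; 1) while smaller ones are gently boosted - which lines up with the parameter behavior described above.  Here is a sketch of just that attenuation map (the full operator then solves a Poisson equation to rebuild the image, which I’m omitting; exactly how LuminanceHDR’s sliders map onto α and β is my assumption):</p>

```python
import numpy as np

def fattal_attenuation(grad_mag, alpha=0.1, beta=0.85):
    """Fattal et al. 2002 gradient attenuation:
    phi(g) = (alpha / g) * (g / alpha)**beta == (g / alpha)**(beta - 1).
    With beta < 1, gradients larger than alpha are compressed and
    gradients smaller than alpha are boosted; beta = 1 is a no-op."""
    g = np.maximum(grad_mag, 1e-6)
    return (g / alpha) ** (beta - 1.0)
```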



<h4 id="color-saturation">Color Saturation<a href="#color-saturation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 1.0.</p>
<p>This parameter does exactly what’s described.  Nothing interesting to see here.</p>
<h4 id="noise-reduction">Noise Reduction<a href="#noise-reduction" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 0.</p>
<p>This should suppress fine detail noise from being picked up by the algorithm for enhancement.  I’ve noticed that it will slightly affect the image brightness as well.  Fine details may be lost if this value is too high.  Here the <i>Noise Reduction</i> has been turned up to 0.15:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_fattal_noiseredux_0.15_default.jpg" data-swap-src="untitled_pregamma_1_fattal_default.jpg" width='600' height='452' alt='Fattal NR 0.15'>
<figcaption>
Fattal - Noise Reduction increased to 0.15<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="my-final-version-4">My Final Version<a href="#my-final-version-4" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This TMO is sensitive to changes in its parameters.  Small changes can swing the results far, so proceed lightly.</p>
<p>I increased the <em>Noise Reduction</em> a little bit up front, which lightened up the image.  Then I dropped the <em>Beta</em> value to let the algorithm work to brighten up the image even further.  To offset the increase, I pushed <em>Alpha</em> up a bit to keep the local contrasts from getting too harsh.  A few minutes of adjustments yielded this:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_fattal_alpha_1.07_beta_0.86_saturation_0.7_noiseredux_0.02_fftsolver_1_FINAL.jpg" data-swap-src="untitled_pregamma_1_fattal_default-960.jpg" width='960' height='723' alt='Fattal Final Result'>
<figcaption>
My Fattal output - Alpha 1.07, Beta 0.86, Saturation 0.7, Noise red. 0.02<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Overall, Fattal can be easily abused.  Don’t abuse the Fattal TMO.  If you find your values sliding too far outside of the norm, step away from your computer, get a coffee, take a walk, then come back and see if it still hurts your eyes.</p>
<h3 id="drago">Drago<a href="#drago" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Drago is another of the global TMOs.  It also has just one control: bias.</p>
<p>Here is what the default values produce:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_drago_default.jpg" alt="" height='' width=''>
<figcaption>
Default Drago applied
</figcaption>
</figure>

<p>The default values produced a very washed out appearance to the image.  The black points are heavily lifted, resulting in a muddy gray in dark areas.</p>
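<p>Drago’s mapping is a single closed-form logarithmic curve (from the Drago et al. 2003 paper), which helps explain why there’s so little to adjust - the bias only bends the shape of one logarithm.  A sketch of the published formula (how LuminanceHDR scales the output afterward is my assumption):</p>

```python
import numpy as np

def drago_tonemap(lum, bias=0.85, ld_max=100.0):
    """Drago et al. 2003 adaptive logarithmic mapping.
    lum: world luminance; ld_max: target display luminance (cd/m^2).
    Higher bias darkens the midtones; the brightest pixel maps to 1.0."""
    lw_max = lum.max()
    scale = (ld_max * 0.01) / np.log10(lw_max + 1.0)
    bias_exp = np.log(bias) / np.log(0.5)
    denom = np.log(2.0 + 8.0 * (lum / lw_max) ** bias_exp)
    return scale * np.log(lum + 1.0) / denom
```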
<p><em>Bias</em> is the only parameter for this operator.  The default value is 0.85.  Decreasing this value will lighten the image significantly, while increasing it will darken it.  For my image, even pushing the <em>Bias</em> value all the way up to 1.0 only produced marginal results:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_drago_bias_1.jpg" data-swap-src="untitled_pregamma_1_drago_default.jpg" width='600' height='452' alt='Drago Bias 1.0'>
<figcaption>
Drago - Bias 1.0<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Even at this level the image still appears very washed out.  The only other knob left to turn is the pre-gamma applied before the TMO operates.  After adjusting values for a bit, I settled on a pre-gamma of 0.67 in addition to the <em>Bias</em> being set to 1.</p>
<h4 id="my-final-version-5">My Final Version<a href="#my-final-version-5" class="header-link"><i class="fa fa-link"></i></a></h4>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_0.67_drago_bias_1.jpg" data-swap-src="untitled_pregamma_1_drago_default-960.jpg" width='960' height='723' alt='Drago final result'>
<figcaption>
My result: Drago - Bias 1.0, pre-gamma 0.67<br>
(click to compare to defaults)
</figcaption>
</figure>



<h3 id="durand">Durand<a href="#durand" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Most of the older documentation/posts that I can find describe Durand as the most realistic of the TMOs, yielding good results that do not appear overly processed.</p>
<p>Indeed the default settings immediately look reasonably natural, though it does exhibit a bit of blowing out in very bright areas - which I imagine can be fixed by adjustment of the correct parameters.  Here is the default Durand output:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_durand_default.jpg" alt="" height='' width=''>
<figcaption>
Default Durand applied
</figcaption>
</figure>
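<p>The reason Durand tends to look natural: it splits log-luminance into a smooth “base” layer and a “detail” layer, compresses only the base, and puts the detail back untouched.  That’s what <em>Base Contrast</em> controls - the dynamic range the compressed base is allowed to span.  A rough sketch of the idea (with a cheap box blur standing in for the edge-preserving bilateral filter the real operator uses, so this version will halo around hard edges):</p>

```python
import numpy as np

def box_blur(img, radius=2):
    """Cheap stand-in for the edge-preserving bilateral filter the real
    operator uses (so this sketch halos around hard edges)."""
    pad = np.pad(img, radius, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def durand_tonemap(lum, base_contrast=5.0, spatial_radius=2):
    """Durand & Dorsey 2002, simplified: split log-luminance into a smooth
    base plus detail, compress only the base to span log10(base_contrast)."""
    log_l = np.log10(np.maximum(lum, 1e-6))
    base = box_blur(log_l, radius=spatial_radius)
    detail = log_l - base
    scale = np.log10(base_contrast) / max(base.max() - base.min(), 1e-6)
    out_log = (base - base.max()) * scale + detail   # base white maps to 1.0
    return 10.0 ** out_log
```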

<p>There are three parameters that can be adjusted for this TMO; let’s have a look:</p>
<h4 id="base-contrast">Base Contrast<a href="#base-contrast" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 5.00.</p>
<p>Most sources I’ve read consider this default a little high, usually recommending a value in the 3-4 range.  Here is the image with the <i>Base Contrast </i> dropped to 3.5:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_durand_base_3.5_default.jpg" data-swap-src="untitled_pregamma_1_durand_default.jpg" width='600' height='452' alt='Durand Base Contrast 3.5'>
<figcaption>
Durand - Base Contrast decreased to 3.5<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>The <em>Base Contrast</em> does appear to drop the contrast in the image, but it also drops the blown-out high values on the house to more reasonable levels.</p>
<h4 id="spatial-kernel-sigma">Spatial Kernel Sigma<a href="#spatial-kernel-sigma" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 2.00.</p>
<p>This parameter seems to change the contrast in the image.  Large value swings are required to notice a difference, depending on the other parameter values.  Pushing the value up to 65.00 looks like this:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_1_durand_spatial_65_default.jpg" data-swap-src="untitled_pregamma_1_durand_default.jpg" width='600' height='452' alt='Durand Spatial Kernel 65.00'>
<figcaption>
Durand - Spatial Kernel Sigma increased to 65.00<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="range-kernel-sigma">Range Kernel Sigma<a href="#range-kernel-sigma" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 2.00.</p>
<p>My limited testing shows that this parameter doesn’t quite operate correctly.  Changes do not modify the output image until you reach a certain threshold in the upper bounds, where it overexposes the image.  I am assuming there is a bug in the implementation, but I’ll have to test further before filing a bug report.</p>
<h4 id="my-final-version-6">My Final Version<a href="#my-final-version-6" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>In experimenting I found that pre-gamma adjustments can affect the saturation in the output image.  Pushing pre-gamma down a bit will increase the saturation.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/untitled_pregamma_0.88_durand_spatial_5_range_1.01_base_3.6_FINAL.jpg" data-swap-src="untitled_pregamma_1_durand_default-960.jpg" width='960' height='723' alt='Durand final result'>
<figcaption>
My Durand results - pre-gamma 0.88, Contrast 3.6, Spatial Sigma 5.00<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>I pulled the <em>Base Contrast</em> back to keep the sides of the house from blowing out.  Once I had done that, I also dropped the pre-gamma to 0.88 to bump the saturation slightly in the colors.  A slight boost to <em>Spatial Kernel Sigma</em> let me increase local contrasts slightly as well.</p>
<p>Finally, I used the <em>Adjust Levels</em> dialog to modify the levels slightly by raising the black point a small amount (hey - I’m the one writing about all these #@$%ing operators, I deserve a chance to cheat a little).</p>
<h3 id="reinhard-02">Reinhard ‘02<a href="#reinhard-02" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This is supposed to be another very natural looking operator.  The initial default result looks good with medium-low contrast and nothing blowing out immediately:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard02_default.jpg" alt="" height='' width=''>
<figcaption>
Default Reinhard ‘02 applied
</figcaption>
</figure>

<p>Even though many parameters are listed, they don’t really appear to make a difference, at least with my test HDR.  Even worse, attempting to use the <em>“Use Scales”</em> option usually just crashes my LuminanceHDR.</p>
<h4 id="key-value">Key Value<a href="#key-value" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 0.18.</p>
<p>This appears to be the only parameter that does anything to my image at the moment.  Increasing it will brighten the image, and decreasing it will darken it.</p>
<p>Here is the image with <em>Key Value</em> turned down to 0.05:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard02_key_0.05_default.jpg" data-swap-src="Cabin_pregamma_1_reinhard02_default.jpg" width='600' height='452' alt='Reinhard 02 Key Value 0.05'>
<figcaption>
Reinhard ‘02 - Key Value 0.05<br>
(click to compare to defaults)
</figcaption>
</figure>
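<p>The <em>Key Value</em>’s behavior follows straight from the published operator: the image’s log-average luminance gets scaled to the key (“middle grey”), then compressed with x/(1+x).  Here’s a sketch of the global form (as I understand it, the “Use Scales” option corresponds to the paper’s local dodging-and-burning variant, which I’m omitting):</p>

```python
import numpy as np

def reinhard02_global(lum, key=0.18):
    """Reinhard et al. 2002, global form: scale the log-average luminance
    to the key ('middle grey'), then compress with x / (1 + x)."""
    log_avg = np.exp(np.mean(np.log(np.maximum(lum, 1e-6))))
    scaled = (key / log_avg) * lum
    return scaled / (1.0 + scaled)
```

Since the scaling is monotonic in the key, raising the key brightens everything and lowering it darkens everything - exactly the behavior described above.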



<h4 id="phi">Phi<a href="#phi" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 1.00.</p>
<p>This parameter does not appear to have any effect on my image.</p>
<h4 id="use-scales">Use Scales<a href="#use-scales" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Turning this option on currently crashes my session in LuminanceHDR.</p>
<h4 id="my-final-version-7">My Final Version<a href="#my-final-version-7" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>I started by setting the <i>Key Value </i> very low (0.01) and adjusted it up slowly until I got the highlights about where I wanted them.  Since this was the only parameter that modified the image, I then adjusted pre-gamma up until I reached roughly the exposure I thought looked best (1.09).</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1.09_reinhard02_key_0.09_phi_1_FINAL.jpg" data-swap-src="Cabin_pregamma_1_reinhard02_default-960.jpg" width='960' height='723' alt='Reinhard 02 final result'>
<figcaption>
Final Reinhard ‘02 version - Key Value 0.09, pre-gamma 1.09<br>
(click to compare to defaults)
</figcaption>
</figure>



<h3 id="reinhard-05">Reinhard ‘05<a href="#reinhard-05" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Reinhard ‘05 is supposed to be another more ‘natural’ looking TMO, and also operates globally on the image.  The default settings produce an image that looks under-exposed and very saturated:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard05_default.jpg" alt="Reinhard 05 default">
<figcaption>
Default Reinhard ‘05 applied
</figcaption>
</figure>

<p>There are three parameters for this TMO that can be adjusted.</p>
<h4 id="brightness">Brightness<a href="#brightness" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is -10.00.</p>
<p>Interestingly, pushing this parameter down (all the way to its lowest setting, -20) did not darken my image at all.  Pulling it up, however, did increase the brightness overall.  Here the brightness is increased to -2.00:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard05_brightness_-2_default.jpg" data-swap-src="Cabin_pregamma_1_reinhard05_default.jpg" width='600' height='452' alt='Reinhard 05 brightness -2.00'>
<figcaption>
Reinhard ‘05 - Brightness increased to -2.00<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="chromatic-adaptation">Chromatic Adaptation<a href="#chromatic-adaptation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 0.00.</p>
<p>This parameter appears to affect the saturation in the image.  Increasing it desaturates the results, which is welcome given that the default value of 0.00 produces a fairly saturated image to begin with.  Here is <i>Chromatic Adaptation</i> turned up to 0.60:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard05_chromatic_adaptation_0.6_default.jpg" data-swap-src="Cabin_pregamma_1_reinhard05_default.jpg" width='600' height='452' alt='Reinhard 05 chromatic adaptation 0.6'>
<figcaption>
Reinhard ‘05 - Chromatic Adaptation increased to 0.6<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="light-adaptation">Light Adaptation<a href="#light-adaptation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is 1.00.</p>
<p>This parameter modifies the global contrast in the final output.  It starts at the maximum of 1.00, and decreasing this value will increase the contrast in the image.  Pushing the value down to 0.5 does this to the test image:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard05_light_adaptation_0.5_default.jpg" data-swap-src="Cabin_pregamma_1_reinhard05_default.jpg" width='600' height='452' alt='Reinhard 05 light adaptation 0.50'>
<figcaption>
Reinhard ‘05 - Light Adaptation decreased to 0.50<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="my-final-version-8">My Final Version<a href="#my-final-version-8" class="header-link"><i class="fa fa-link"></i></a></h4>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_reinhard05_brightness_-3_chromatic_adaptation_0.6_light_adaptation_0.75_FINAL.jpg" data-swap-src="Cabin_pregamma_1_reinhard05_default-960.jpg" width='960' height='723' alt='Reinhard 05 final result'>
<figcaption>
My Reinhard ‘05 - Brightness -5.00, Chromatic Adapt. 0.60, Light Adapt. 0.75<br>
(click to compare to defaults)
</figcaption>
</figure>


<p>Starting from the defaults, I raised the <em>Brightness</em> to -5.00 to lift the darker areas of the image, while keeping an eye on the highlights to keep them from blowing out.  I then decreased the <em>Light Adaptation</em> to 0.75, where the scene had a reasonable amount of contrast without becoming overpowering.  At that point I turned up the <em>Chromatic Adaptation</em> to 0.60 to bring the saturation down to something more realistic.</p>
<h3 id="ashikhmin">Ashikhmin<a href="#ashikhmin" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This TMO has little in the way of controls - just options for two different equations that can be used, and a slider.  The default (Eqn. 2) image is very dark and heavily saturated:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_ashikhmin_default.jpg" alt="Ashikhmin default">
<figcaption>
Default Ashikhmin applied
</figcaption>
</figure>

<p>There is a checkbox option for using a “Simple” method, which produces identical results regardless of which equation is checked; presumably it ignores that setting.</p>
<h4 id="simple">Simple<a href="#simple" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Checking the <em>Simple</em> checkbox removes any control over the image parameters, and yields this image:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_ashikhmin_-simple.jpg" data-swap-src="Cabin_pregamma_1_ashikhmin_default.jpg" width='600' height='452' alt='Ashikhmin simple'>
<figcaption>
Ashikhmin - Simple<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Fairly saturated, but exposed reasonably well.  It lacks some contrast, but the tones are all there.  This result could use some further massaging to knock down the saturation and to bump the contrast slightly (or adjust pre-gamma).</p>
<h4 id="equation-4">Equation 4<a href="#equation-4" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This is the result of choosing <i>Equation 4</i> instead:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_ashikhmin_-eq4_default.jpg" data-swap-src="Cabin_pregamma_1_ashikhmin_default.jpg" width='600' height='452' alt='Ashikhmin equation 4'>
<figcaption>
Ashikhmin - Equation 4<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>There is a large loss of local contrast detail in the scene, and some of the edges appear very soft.  Overall, the exposure remains very similar.</p>
<h4 id="local-contrast-threshold">Local Contrast Threshold<a href="#local-contrast-threshold" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 0.50.</p>
<p>This parameter modifies the local contrast being applied to the image.  The result will be different depending on which <em>Equation</em> is being used.</p>
<p>Here is <em>Equation 2</em> with the <em>Local Contrast Threshold</em> reduced to 0.20:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_ashikhmin_-eq2_local_0.2.jpg" data-swap-src="Cabin_pregamma_1_ashikhmin_default.jpg" width='600' height='452' alt='Ashikhmin eqn 2 local contrast 0.20'>
<figcaption>
Ashikhmin - Eqn 2, Local Contrast Threshold 0.20<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>Lower values will decrease the amount of local contrast in the final output.</p>
<p><em>Equation 4</em> with <em>Local Contrast Threshold</em> reduced to 0.20:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_ashikhmin_-eq4_local_0.2.jpg" data-swap-src="Cabin_pregamma_1_ashikhmin_-eq4_default.jpg" width='600' height='452' alt='Ashikhmin eqn 4 local contrast 0.20'>
<figcaption>
Ashikhmin - Eqn 4, Local Contrast Threshold 0.20<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="my-final-version-9">My Final Version<a href="#my-final-version-9" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>After playing with the options, I feel the best overall version comes from just using the <i>Simple</i> option.  Further tweaking may be necessary to get usable results beyond this.</p>
<h3 id="pattanaik">Pattanaik<a href="#pattanaik" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This TMO appears to attempt to mimic the behavior of the human eye, judging by terminology like “Rod” and “Cone”.  There are quite a few different parameters to adjust if desired.  The default TMO results in an image like this:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_pattanaik00_default.jpg" alt="Pattanaik default">
<figcaption>
Default Pattanaik applied
</figcaption>
</figure>

<p>The default results are very desaturated and tend to blow out in the highlights.  The dark areas appear well exposed, with the problems (in my test HDR) mostly constrained to the highlights in this example.  At first glance, the results look like something that could be worked with.</p>
<p>There are quite a few different parameters for this TMO.  Let’s have a look at them:</p>
<h4 id="multiplier">Multiplier<a href="#multiplier" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default value is 1.00.</p>
<p>This parameter appears to modify the overall contrast in the image.  Decreasing the value will decrease contrast, and vice versa.  It also appears to slightly modify the brightness in the image as well (pushing the highlights to a less blown-out value).  Here is the <em>Multiplier</em> decreased to 0.03:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_pattanaik00_mul_0.03_autolum.jpg" data-swap-src="Cabin_pregamma_1_pattanaik00_default.jpg" width='600' height='452' alt='Pattanaik multiplier 0.03'>
<figcaption>
Pattanaik - Multiplier 0.03<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="local-tone-mapping">Local Tone Mapping<a href="#local-tone-mapping" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This parameter is just a checkbox, with no controls.  The result is a washed out image with heavy local contrast adjustments:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_pattanaik00_mul_1_local.jpg" data-swap-src="Cabin_pregamma_1_pattanaik00_default.jpg" width='600' height='452' alt='Pattanaik local tone mapping'>
<figcaption>
Pattanaik - Local Tone Mapping<br>
(click to compare to defaults)
</figcaption>
</figure>



<h4 id="cone-rod-levels">Cone/Rod Levels<a href="#cone-rod-levels" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The default is to have <em>Auto Cone/Rod</em> checked, greying out the options to change the parameters manually.</p>
<p>Turning off <em>Auto Cone/Rod</em> will get the default manual values of 0.50 for both applied:</p>
<figure>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_1_pattanaik00_mul_1_cone_0.5_rod_0.5_.jpg" data-swap-src="Cabin_pregamma_1_pattanaik00_default.jpg" width='600' height='452' alt='Pattanaik manual cone/rod 0.5 each'>
<figcaption>
Pattanaik - Manual Cone/Rod (0.50 for each)<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>The image gets very blown out everywhere, and modification of the Cone/Rod values does not significantly reduce brightness across the image.</p>
<h4 id="my-final-version">My Final Version<a href="#my-final-version" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Starting with the defaults, I reduced the <i>Multiplier</i> to bring the highlights under control.  This reduced contrast and saturation in the image.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/Cabin_pregamma_0.91_pattanaik00_mul_0.03_autolum_FINAL.jpg" data-swap-src="Cabin_pregamma_1_pattanaik00_default-960.jpg" width='960' height='723' alt='Pattanaik final result'>
<figcaption>
My final Pattanaik - Multiplier 0.03, pre-gamma 0.91<br>
(click to compare to defaults)
</figcaption>
</figure>

<p>To bring back contrast and some saturation, I decreased the pre-gamma to 0.91.  The results are not too far off the default settings, but could still use some further help with global contrast and saturation, and might benefit from layering or modifications in GIMP.</p>
<h2 id="closing-thoughts">Closing Thoughts<a href="#closing-thoughts" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Looking through all of the results shows just how differently each TMO operates on the same image.  Here are all of the final results in a single image:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/hdr-photography-with-free-software-luminancehdr/All-Finals.png" alt="All of the final results stacked for comparison" height='1600' width='850' style='max-height: initial;'>
</figure>

<p>I personally like the results from Mantiuk ‘06.  The problem is that it’s still a little more extreme than I would care for in a final result.  For a really good, realistic result that I think can be massaged into a great image, I would go to Mantiuk ‘08 or Reinhard.</p>
<p>I could also do something with Fattal, but would have to tone a few things down a bit.</p>
<p>While you’re working, remember to occasionally open up the <strong>Levels Adjustment</strong> to keep an eye on the histogram.  Look for highlights blowing out, and shadows becoming too murky.  All the normal rules of image processing still apply here - so use them!</p>
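<p>To make that histogram check concrete, here is a tiny, hypothetical helper of my own (the threshold values are arbitrary choices, not anything LuminanceHDR provides) that reports how much of an 8-bit image is clipped at either end:</p>
<pre><code>def clipping_report(pixels, shadow=5, highlight=250):
    """Fraction of 8-bit luminance values crushed near black or blown near white."""
    n = len(pixels)
    crushed = sum(1 for v in pixels if v <= shadow) / n
    blown = sum(1 for v in pixels if v >= highlight) / n
    return {"crushed": crushed, "blown": blown}
</code></pre>
<p>If more than a few percent of pixels land in either bucket, that’s usually the cue to dial the operator back.</p>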
<p>You’re trying to use HDR as a tool to capture more information, but remember to keep the result looking realistic.  If you’re new to HDR processing, I can’t recommend enough that you stop occasionally, get away from the monitor, and come back later to look at your progress.</p>
<p>If it hurts your eyes, dial it all back.  Heck, if <em>you</em> think it looks good, <em><strong>still dial it back</strong></em>.</p>
<p>If I can head off even one clown-vomit image, then I’ll consider my mission accomplished with this post.</p>
<h3 id="a-couple-of-further-resources">A Couple of Further Resources<a href="#a-couple-of-further-resources" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Here are a few things I’ve found scattered around the internet if you want to read more.</p>
<ul>
<li><a href="http://osp.wikidot.com/parameters-for-photographers">The Open Source Photography wikidot</a> page has some information as well</li>
<li>Cambridge in Colour user David has written about many of the operators:<ul>
<li><a href="http://www.cambridgeincolour.com/forums/thread1513.htm">Mantiuk</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1625.htm">Fattal</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1499.htm">Drago</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1514.htm">Durand</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1630.htm">Reinhard 05</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1681.htm">Reinhard 02</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1651.htm">Ashikhmin</a></li>
<li><a href="http://www.cambridgeincolour.com/forums/thread1612.htm">Pattanaik</a></li>
</ul>
</li>
<li><a href="http://pallopanoraama.blogspot.com/2011/05/realistinen-tonemappaus-luminance-hdr.html">A little Finnish exploration</a> of global vs. local operators</li>
</ul>
<p>We also have a sub-category on the <a href="https://discuss.pixls.us">forums</a> dedicated entirely to LuminanceHDR and HDR processing in general: <a href="https://discuss.pixls.us/c/software/luminancehdr">https://discuss.pixls.us/c/software/luminancehdr</a>.</p>
<p>This tutorial was originally published <a href="http://blog.patdavid.net/2013/05/hdr-photography-with-foss-tools.html">here</a>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Libre Graphics Meeting London ]]></title>
            <link>https://pixls.us/blog/2016/01/libre-graphics-meeting-london/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2016/01/libre-graphics-meeting-london/</guid>
            <pubDate>Fri, 08 Jan 2016 14:36:06 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/london-calling-2048.jpg" /><br/>
                 <h1>Libre Graphics Meeting London</h1>  
                 <h2>Join us in London for a PIXLS meet-up!</h2>   
                <p>We’re heading to London!</p>
<figure>
<a href='http://libregraphicsmeeting.org/2016/'>
<img src="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/banner_glitch_1.png" alt='LGM/London Logo' />
</a>
</figure>

<p>I missed LGM last year in Toronto (having a baby - well, my wife was).
I <em>am</em> going to be there this year for <a href="http://libregraphicsmeeting.org/2016/">LGM/London</a>!</p>
<!-- more -->
<h2 id="help-support-us"><a href="#help-support-us" class="header-link-alt">Help Support Us</a></h2>
<p>I don’t ever do this normally, but you’ve got to start somewhere, right?</p>
<p>It’s my long-term desire to be able to hold a PIXLS meetup/event every year where the community can get together.
Where we can hold workshops, photowalks, and generally share knowledge and information.
For free, for anyone.</p>
<p><em>For now though, we need support.</em>
LGM is a great opportunity for us to meet, since many different projects usually have representatives there.</p>
<p>Donations will help us to offset travel costs to attend LGM as well as a pre-LGM meetup we are holding (<a href="#pixls-meet-up">more below</a>).
Anything further will go to creating new content and to cover hosting costs for the site.</p>
<h3 id="pledgie"><a href="#pledgie" class="header-link-alt">Pledgie</a></h3>
<p>I have started a <a href="https://pledgie.com/campaigns/30905">Pledgie campaign</a> to help ease the solicitation of donations:<br><a href="https://pledgie.com/campaigns/30905">https://pledgie.com/campaigns/30905</a></p>
<p>Here’s the fancy little widget they make available:</p>
<p><a href='https://pledgie.com/campaigns/30905'><img alt='Click here to lend your support to: PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com !' src='https://pledgie.com/campaigns/30905.png?skin_name=chrome' border='0' style='width: initial;'></a></p>
<p>If you want to help by adding this button places, here’s the code to do it:</p>
<pre><code>&lt;a href=&#39;https://pledgie.com/campaigns/30905&#39;&gt;
&lt;img alt=&#39;Click here to lend your support to: PIXLS.US at Libre Graphics Meeting 2016 and make a donation at pledgie.com !&#39; src=&#39;https://pledgie.com/campaigns/30905.png?skin_name=chrome&#39; border=&#39;0&#39; style=&#39;width: initial;&#39;&gt;
&lt;/a&gt;
</code></pre><p>Feel free to use it wherever you think it might help. :)</p>
<h3 id="paypal"><a href="#paypal" class="header-link-alt">PayPal</a></h3>
<p>You can also donate directly via <a href="https://www.paypal.com/cgi-bin/webscr?cmd=_donations&amp;business=patdavid%40gmail%2ecom&amp;lc=US&amp;item_name=PIXLS%2eUS%20LGM%2FLondon&amp;item_number=pixls-london&amp;currency_code=USD&amp;bn=PP%2dDonationsBF%3abtn_donate_SM%2egif%3aNonHosted">PayPal</a> if you want:</p>
<p><a href="https://www.paypal.com/cgi-bin/webscr?cmd=_donations&amp;business=patdavid%40gmail%2ecom&amp;lc=US&amp;item_name=PIXLS%2eUS%20LGM%2FLondon&amp;item_number=pixls-london&amp;currency_code=USD&amp;bn=PP%2dDonationsBF%3abtn_donate_SM%2egif%3aNonHosted"><img src="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/donate.png" alt='Lend a hand via PayPal' style='width: 33%;'/></a></p>
<h3 id="awareness"><a href="#awareness" class="header-link-alt">Awareness</a></h3>
<p>I realize that not everyone will be able to donate funds.  No sweat!
If you’d still like to help out then perhaps you can help us raise awareness for the campaign?
The more folks that know about it the better!</p>
<p>Re-tweeting, blogging, linking, yelling on a street corner all help to raise awareness of what we are doing here.
Heck, just invite folks to come read and participate in the community.  Let’s help even more people learn about free software!</p>
<h2 id="come-join-us"><a href="#come-join-us" class="header-link-alt">Come Join Us</a></h2>
<p>Of course, even better if you are able to make your way to London and actually join us at the <a href="http://libregraphicsmeeting.org/2016/">Libre Graphics Meeting 2016</a>!</p>
<p>The event will be April 15<sup>th</sup> &mdash; 18<sup>th</sup>, hosted by <a href="http://www.westminster.ac.uk/about-us/faculties/media">Westminster School of Media Arts and Design</a>, University of Westminster at the Harrow Campus (red marker on the map).</p>
<div class='fluid-vid'>
<iframe src="https://www.google.com/maps/d/embed?mid=zYKepeQNftPo.koxL6CFw1nPk" width="640" height="480" style='border: none;'></iframe>
</div>

<p>The little checkered flag on the map is for something really neat: a PIXLS meetup!</p>
<h3 id="pixls-meet-up"><a href="#pixls-meet-up" class="header-link-alt">PIXLS Meet Up</a></h3>
<p>I am going to arrive a day early so that we can have a gathering of PIXLS community folks and anyone else who wants to join us for some photographic fun!</p>
<p>Thanks to the local organizers in London (yay Lara!), we have facilities for us to use.
We will be meeting on Thursday, April 14<sup>th</sup> at the <a href="http://www.furtherfield.org/gallery/visit">Furtherfield Commons</a>.
The facilities will be available from 1000 &ndash; 1800 for us to use.</p>
<p><a href="http://www.furtherfield.org/gallery/visit">Furtherfield Commons</a><br>
Finsbury Gate &ndash; Finsbury Park<br>
Finsbury Park, London, N4 2NQ<br></p>
<p>As near as I can tell, here’s a street view of the Finsbury Gate:</p>
<div class='fluid-vid'>
<iframe src="https://www.google.com/maps/embed?pb=!1m0!3m2!1sen!2sus!4v1452283931744!6m8!1m7!1sOP5bSwtG8XL-Rdoz2M-RyQ!2m2!1d51.56506385511825!2d-0.1037885701573437!3f315.2912956391929!4f-1.9344543679182067!5f0.7820865974627469" width="600" height="450" frameborder="0" style="border:0" allowfullscreen></iframe>
</div>

<p>I believe the <a href="http://www.furtherfield.org/gallery/visit">Commons</a> building is just inside this gate, and on the left.</p>
<p>In 2014 I held a photowalk with LGM attendees in Leipzig the day before the event that was great fun.
Let’s expand the idea and do even more!</p>
<figure>
<img src="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/nikolaikirche.jpg" alt='Nikolaikirche, Leipzig, LGM 2014'/>
<figcaption>
Nikolaikirche, Leipzig, from the 2014 LGM photowalk.<br/>
(That’s houz in the bottom right)
</figcaption>
</figure>

<p>Here’s a Flickr <a href="https://www.flickr.com/photos/patdavid/albums/72157643712169045">album of my images from LGM2014 in Leipzig</a>:</p>
<figure>
<a data-flickr-embed="true" data-header="true" data-footer="true"  href="https://www.flickr.com/photos/patdavid/albums/72157643712169045" title="LGM2014"><img src="https://farm8.staticflickr.com/7214/13781228444_956fcee5ef_z.jpg" width="640" height="640" alt="LGM2014"></a><script async src="https://embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
</figure>

<p>This year I plan on bringing a model along to shoot while we are out and about (my friend <a href="https://www.flickr.com/photos/patdavid/albums/72157632799856846">Mairi</a> if she’s available - or a local model if not).
I will also be doing a photowalk again, either in the morning or afternoon.</p>
<p>I am also looking for folks from the community to suggest holding their own photoshoots or workshops, so please step forward and let me know if you’d be interested in doing something!
The facilities have bench seating for approximately 20 people, a big desk, and a projector as well.</p>
<p>Three things that I personally will be doing are (in no particular order):</p>
<ul>
<li>Natural + flash portraits and model shooting workshop.</li>
<li>Photowalk around the park + surrounding environs.</li>
<li>Portraits + architectural photos for Furtherfield (the hosts).</li>
</ul>
<p>I am hoping to possibly record some of these workshops and interactions for posterity and others that might not be able to make it to London.
It might be fun to record some shoots for the community to be able to use!</p>
<p>I am also 100% open to suggestions for content that you, the community, might be interested in seeing.
If you have something you’d like me to try (and record), please let me know!</p>
<figure>
<img src="https://pixls.us/blog/2016/01/libre-graphics-meeting-london/mairi-troisieme.jpg" alt='Mairi Troisieme'/>
<figcaption>
Hopefully <a href='https://www.flickr.com/photos/patdavid/16259030889/in/album-72157632799856846/'>Mairi</a> will be able to make it to London to model for us!
</figcaption>
</figure>



  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ darktable 2.0 ]]></title>
            <link>https://pixls.us/blog/2015/12/darktable-2-0/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/12/darktable-2-0/</guid>
            <pubDate>Fri, 25 Dec 2015 02:56:56 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/12/darktable-2-0/Lying in Ambush.jpg" /><br/>
                 <h1>darktable 2.0</h1>  
                 <h2>An awesome present for the end of 2015!</h2>   
                <style>
li {  margin-bottom: 0.25rem; }
ul + h3 { margin-top: 1.5rem; }
</style>

<p>Sneaking a release out on Christmas Eve, the <a href="https://www.darktable.org">darktable</a> team have announced their feature release of <a href="https://www.darktable.org/2015/12/darktable-2-0-released/">darktable 2.0</a>!
After quite a few months of Release Candidates the 2.0 is finally here.
Please join me in saying <em><strong>Congratulations</strong></em> and a hearty <em><strong>Thank You!</strong></em> for all of their work bringing this release to us.</p>
<!-- more -->
<p>Alex Prokoudine of <a href="http://libregraphicsworld.org">Libre Graphics World</a> has a more <a href="http://libregraphicsworld.org/blog/entry/darktable-2-0-released-with-printing-support">in-depth look at the release</a> including a nice interview with part of the team: Johannes Hanika, Tobias Ellinghaus, Roman Lebedev, and Jeremy Rosen.  My favorite tidbit from the interview:</p>
<blockquote>
<p>There is a lot less planning involved than many might think.</p>
<div style="text-align: right; font-size: 0.85rem;">&mdash; Tobias Ellinghaus</div>

</blockquote>
<p><a href="https://www.roberthutton.net/">Robert Hutton</a> has taken the time to produce a <a href="https://www.youtube.com/watch?v=VJbJ0btlui0">video covering the new features</a> and other changes between 1.6 and 2.0 as well:</p>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/VJbJ0btlui0" frameborder="0" allowfullscreen></iframe>
</div>

<p>A high-level look at the changes and improvements from the <a href="https://www.darktable.org/2015/12/darktable-2-0-released/">release post on the darktable site</a>:</p>
<h3 id="gui-"><a href="#gui-" class="header-link-alt">gui:</a></h3>
<ul>
<li>darktable has been ported to gtk-3.0</li>
<li>the viewport in darkroom mode is now dynamically sized, you specify the border width</li>
<li>side panels now default to a width of 350px in dt 2.0 instead of 300px in dt 1.6</li>
<li>further hidpi enhancements</li>
<li>navigating lighttable with arrow keys and space/enter</li>
<li>brush size/hardness/opacity have key accels</li>
<li>allow adding tone- and basecurve nodes with ctrl-click</li>
<li>the facebook login procedure is a little different now</li>
<li>image information now supports gps altitude</li>
</ul>
<h3 id="features-"><a href="#features-" class="header-link-alt">features:</a></h3>
<ul>
<li>new print mode</li>
<li>reworked screen color management (softproof, gamut check etc.)</li>
<li>delete/trash feature</li>
<li>pdf export</li>
<li>export can upscale</li>
<li>new “mode” parameter in the export panel to fine tune application of styles upon export</li>
</ul>
<h3 id="core-improvements-"><a href="#core-improvements-" class="header-link-alt">core improvements:</a></h3>
<ul>
<li>new thumbnail cache replaces mipmap cache (much improved speed, stability and seamless support for even up to 4K/5K screens)</li>
<li>all thumbnails are now properly fully color-managed</li>
<li>it is now possible to generate thumbnails for all images in the library using new darktable-generate-cache tool</li>
<li>we no longer drop history entries above the selected one when leaving darkroom mode or switching images</li>
<li>high quality export now downsamples before watermark and framing to guarantee consistent results</li>
<li>optimizations to loading JPEGs when using libjpeg-turbo with its custom features</li>
<li>asynchronous camera and printer detection, prevents deadlocks in some cases</li>
<li>noiseprofiles are in external JSON file now</li>
<li>aspect ratios for crop&amp;rotate can be added to config file</li>
</ul>
<h3 id="image-operations-"><a href="#image-operations-" class="header-link-alt">image operations:</a></h3>
<ul>
<li>color reconstruction module</li>
<li>magic lantern-style deflicker was added to the exposure module (extremely useful for timelapses)</li>
<li>text watermarks</li>
<li>shadows&amp;highlights: add option for white point adjustment</li>
<li>more proper Kelvin temperature, fine-tuning preset interpolation in white balance iop</li>
<li>monochrome raw demosaicing (for cameras with color filter array physically removed)</li>
<li>raw black/white point module</li>
</ul>
<h3 id="packaging-"><a href="#packaging-" class="header-link-alt">packaging:</a></h3>
<ul>
<li>removed dependency on libraw</li>
<li>removed dependency on libsquish (solves patent issues as a side effect)</li>
<li>unbundled pugixml, osm-gps-map and colord-gtk</li>
</ul>
<h3 id="generic-"><a href="#generic-" class="header-link-alt">generic:</a></h3>
<ul>
<li>32-bit support is soft-deprecated due to limited virtual address space</li>
<li>support for building with gcc earlier than 4.8 is soft-deprecated</li>
<li>numerous memory leaks were exterminated</li>
<li>overall stability enhancements</li>
</ul>
<h3 id="scripting-"><a href="#scripting-" class="header-link-alt">scripting:</a></h3>
<ul>
<li>lua scripts can now add UI elements to the lighttable view (buttons, sliders etc…)</li>
<li>a new repository for external lua scripts was started: <a href="https://github.com/darktable-org/lua-scripts">https://github.com/darktable-org/lua-scripts</a></li>
<li>it is now possible to edit the collection filters via lua</li>
<li>it is now possible to add new cropping guides via lua</li>
<li>it is now possible to run background tasks in lua</li>
<li>a lua event is generated when the image under the mouse cursor changes</li>
</ul>
<p>The source is <a href="https://www.darktable.org/install/">available now</a> as well as a .dmg for OS X.<br>Various Linux distro builds are either already available or will be soon!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Let's Encrypt! ]]></title>
            <link>https://pixls.us/blog/2015/12/let-s-encrypt/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/12/let-s-encrypt/</guid>
            <pubDate>Tue, 15 Dec 2015 18:53:26 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/12/let-s-encrypt/LE.jpg" /><br/>
                 <h1>Let's Encrypt!</h1>  
                 <h2>Also a neat 2.5D parallax video for Wikipedia.</h2>   
                <p>I finally got off my butt to get a process in place to obtain and update security certificates using Let’s Encrypt for both <a href="https://pixls.us">pixls.us</a> and <a href="https://discuss.pixls.us">discuss.pixls.us</a>.
I also did some (<em>more</em>) work with <a href="https://commons.wikimedia.org/wiki/User:Victorgrigas">Victor Grigas</a> and <a href="http://www.wikipedia.org">Wikipedia</a> to support their <a href="https://www.youtube.com/watch?v=Rm1LKcHD1VE">#Edit2015</a> video this year.</p>
<!-- more -->
<h2 id="wikipedia-edit2015"><a href="#wikipedia-edit2015" class="header-link-alt">Wikipedia #Edit2015</a></h2>
<p>Last year, I did some 2.5D parallax animations for Wikipedia to help with their first-ever <a href="http://blog.wikimedia.org/2014/12/17/wikipedias-first-ever-annual-video-reflects-contributions-from-people-around-the-world/">end-of-the-year retrospective video</a> (<a href="http://blog.patdavid.net/2014/12/wikipedia-edit2014-video.html">see the blog post from last year</a>).
Here is the retrospective from #Edit2014:</p>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/ci0Pihl2zXY?rel=0" frameborder="0" allowfullscreen></iframe>
</div>


<p>So it was an honor to hear from <a href="https://commons.wikimedia.org/wiki/User:Victorgrigas">Victor Grigas</a> again this year!
This time around there was a neat new crop of images he wanted to animate for the video.
Below you’ll find my contributions (they were all used in the final edit, just shortened to fit appropriately):</p>
<figure style='width: 100%;'>
<div class='fluid-vid'><iframe src="https://player.vimeo.com/video/146782845?portrait=0" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe></div>
<figcaption>
<a href="https://vimeo.com/146782845">Wiki #Edit2015 Bel</a> from <a href="https://vimeo.com/patdavid">Pat David</a> on <a href="https://vimeo.com">Vimeo</a>.
</figcaption>
</figure>

<figure style='width: 100%;'>
<div class='fluid-vid'><iframe src="https://player.vimeo.com/video/146784000?portrait=0" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe></div> 
<figcaption><a href="https://vimeo.com/146784000">Wiki #Edit2015 Je Suis Charlie</a> from <a href="https://vimeo.com/patdavid">Pat David</a> on <a href="https://vimeo.com">Vimeo</a>.</figcaption>
</figure>

<figure style='width: 100%;'>
<div class='fluid-vid'><iframe src="https://player.vimeo.com/video/146790790?portrait=0" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe></div> 
<figcaption><a href="https://vimeo.com/146790790">Wiki #Edit2015 Samantha Cristoforetti Nimoy Tribute</a> from <a href="https://vimeo.com/patdavid">Pat David</a> on <a href="https://vimeo.com">Vimeo</a>.</figcaption>
</figure>

<figure style='width: 100%;'>
<div class='fluid-vid'><iframe src="https://player.vimeo.com/video/146791049?portrait=0" width="500" height="281" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe></div> 
<figcaption><a href="https://vimeo.com/146791049">Wiki #Edit2015 SCOTUS LGBQT</a> from <a href="https://vimeo.com/patdavid">Pat David</a> on <a href="https://vimeo.com">Vimeo</a>.</figcaption>
</figure>

<p>Here is the final cut of the video, just released today:</p>
<figure class='big-vid'>
<div class='fluid-vid'>
<iframe width="1280" height="720" src="https://www.youtube-nocookie.com/embed/Rm1LKcHD1VE?rel=0" frameborder="0" allowfullscreen></iframe>
</div>
</figure>

<p>Victor chose some really neat images that were fun to work on!
Of course, all free software was used in this creation (<a href="https://www.gimp.org">GIMP</a> for cutting up the images into sections and rebuilding textures as needed and <a href="http://www.blender.org">Blender</a> for re-assembling the planes and animating the camera movements).
I had previously <a href="http://blog.patdavid.net/2014/02/25d-parallax-animated-photo-tutorial.html">written a tutorial</a> on doing this with free software on my blog.</p>
<p>You can <a href="http://blog.wikimedia.org/2015/12/15/edit2015/">read more on the wikimedia.org blog</a>!</p>
<h2 id="new-certificates"><a href="#new-certificates" class="header-link-alt">New Certificates</a></h2>
<p><img src="https://pixls.us/blog/2015/12/let-s-encrypt/letsencrypt-logo-horizontal.png" alt="Let's Encrypt Logo" style='width:initial;' width='550' height='131'/></p>
<p>Yes, I’ll concede this is not very exciting.
I think it <em>is</em> important, though.</p>
<p>I recently took advantage of my beta invite to <a href="https://letsencrypt.org">Let’s Encrypt</a>.
It’s a certificate authority, founded by the <a href="https://www.eff.org/">Electronic Frontier Foundation</a>, <a href="https://www.mozilla.org">Mozilla</a>, and the <a href="https://www.umich.edu/">University of Michigan</a>, that provides free X.509 certificates to domain owners.</p>
<p>The key principles behind <em>Let’s Encrypt</em> are:</p>
<ul>
<li><strong>Free:</strong> Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.</li>
<li><strong>Automatic:</strong> Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.</li>
<li><strong>Secure:</strong> Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.</li>
<li><strong>Transparent:</strong> All certificates issued or revoked will be publicly recorded and available for anyone to inspect.</li>
<li><strong>Open:</strong> The automatic issuance and renewal protocol will be published as an open standard that others can adopt.</li>
<li><strong>Cooperative:</strong> Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community, beyond the control of any one organization.</li>
</ul>
<p>Obtaining the certs was relatively painless.
I only had to run their client, which uses the ACME protocol to verify domain ownership by placing a file in my web root.
Once the certs were generated, a few small changes were enough to get them working automatically on <a href="https://discuss.pixls.us">https://discuss.pixls.us</a>
(and to have renewals picked up automatically when I update the certs within 90 days).</p>
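The webroot verification step the client performs can be sketched in a few lines. This is a simplified illustration, not real client code: `webroot` and `token` here are made-up placeholders (the real token is issued by the CA, and the published file also includes a key-authorization string).

```python
# Simplified sketch of the ACME http-01 "webroot" challenge: publish a
# token at a well-known path under the web root so the CA can fetch it
# over plain HTTP and confirm you control the domain.
from pathlib import Path

webroot = Path("webroot")            # hypothetical web root directory
token = "example-challenge-token"    # hypothetical; really issued by the CA

challenge_dir = webroot / ".well-known" / "acme-challenge"
challenge_dir.mkdir(parents=True, exist_ok=True)
# A real ACME client writes the token plus a key-authorization string here.
(challenge_dir / token).write_text(token)

# Let's Encrypt would then request something like:
#   http://yourdomain/.well-known/acme-challenge/example-challenge-token
print((challenge_dir / token).read_text())
```

Once the CA fetches that file and the contents match, it issues the certificate for the domain.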
<p>I still had to manually copy/paste the certs into cpanel for <a href="https://pixls.us">https://pixls.us</a>, though.
Not automated (<em>or elegant</em>), but it works and only takes an extra moment to do.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Users Guide to High Bit Depth GIMP 2.9.2, Part 2 ]]></title>
            <link>https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/</link>
            <guid isPermaLink="true">https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/</guid>
            <pubDate>Wed, 02 Dec 2015 18:00:00 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/flying-bird-between-trees.jpg" /><br/>
                 <h1>Users Guide to High Bit Depth GIMP 2.9.2, Part 2</h1>  
                 <h2>Part 2: Radiometrically correct editing, unbounded ICC profile conversions, and unclamped editing</h2>   
                <p class='aside'>
This is Part 2 of a two-part guide to high bit depth editing in GIMP 2.9.2 with Elle Stone.<br>The first part of this article can be found here: <a href="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/"><em>Part 1</em></a>.
</p>


<h3 id="contents">Contents<a href="#contents" class="header-link"><i class="fa fa-link"></i></a></h3>
<ol class='toc'>
<li><a href="#radiometrically-correct-editing">Using GIMP 2.9.2 for radiometrically correct editing</a>

    <ol>
    <li><a href="#linearized-srgb-channel-values-and-radiometrically-correct-editing">Linearized sRGB channel values and radiometrically correct editing</a></li>
    <li><a href="#using-the-linear-light-option-in-the-image-precision-menu">Using the “Linear light” option in the “Image/Precision” menu</a></li>
    <li><a href="#a-note-on-interoperability-between-krita-and-gimp">A note on interoperability between Krita and GIMP</a></li>
    </ol>
</li>

<li><a href="#gimp-2-9-2-s-unbounded-floating-point-icc-profile-conversions-handle-with-care-">GIMP 2.9.2’s unbounded floating point ICC profile conversions (handle with care!)</a></li>

<li><a href="#using-gimp-2-9-2-s-floating-point-precision-for-unclamped-editing">Using GIMP 2.9.2’s floating point precision for unclamped editing</a>

    <ol>
    <li><a href="#high-bit-depth-gimp-s-unclamped-editing-a-whole-realm-of-new-editing-possibilities">High bit depth GIMP’s unclamped editing: a whole realm of new editing possibilities</a></li>
    <li><a href="#if-the-thought-of-working-with-unclamped-rgb-data-is-unappealing-use-integer-precision">If the thought of working with unclamped RGB data is unappealing, use integer precision</a></li>
    </ol>
</li>

<li>
<a href="#looking-to-the-future-gimp-3-0-and-beyond">Looking to the future: GIMP 3.0 and beyond</a>
</li>
</ol>


<hr>
<h2 id="radiometrically-correct-editing">Radiometrically correct editing<a href="#radiometrically-correct-editing" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="linearized-srgb-channel-values-and-radiometrically-correct-editing">Linearized sRGB channel values and radiometrically correct editing<a href="#linearized-srgb-channel-values-and-radiometrically-correct-editing" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>One goal for GIMP 2.10 is to make it easy for users to produce radiometrically correct editing results. “Radiometrically correct editing” reflects the way light and color combine out there in the real world, and so requires that the relevant editing operations be done on linearized RGB.</p>

<p>Like many commonly used RGB working spaces, the sRGB color space is encoded using perceptually uniform RGB. Unfortunately colors simply don’t blend properly in perceptually uniform color spaces. So when you open an sRGB image using GIMP 2.9.2 and start to edit, in order to produce radiometrically correct results, many GIMP 2.9 editing operations will silently linearize the RGB channel information before the editing operation is actually done.</p>
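For reference, “linearizing” sRGB just means inverting the sRGB tone response curve per channel. A minimal sketch using the standard sRGB piecewise formulas (this shows the math involved, not GIMP’s actual code):

```python
def srgb_to_linear(v):
    """Invert the sRGB tone response curve for one channel value in [0, 1]."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Re-apply the sRGB tone response curve (inverse of the above)."""
    if v <= 0.0031308:
        return v * 12.92
    return 1.055 * v ** (1 / 2.4) - 0.055

# Perceptual middle gray (0.5) corresponds to only ~21% linear light:
print(round(srgb_to_linear(0.5), 3))
```

Operations that “silently linearize” apply `srgb_to_linear` to each channel, do their work, then re-encode with `linear_to_srgb`.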

<p>GIMP 2.9.2 editing operations that automatically linearize the RGB channel values include scaling the image, Gaussian blur, Unsharp Mask, Channel Mixer, Auto Stretch Contrast, decomposing to LAB and LCH, all of the LCH blend modes, and quite a few other editing operations.</p>

<p>GIMP 2.9.2 editing operations that <a title="GIMP bug report:  Curves and Levels should operate by default on linear RGB and present linear RGB Histograms" href="https://bugzilla.gnome.org/show_bug.cgi?id=757444">ought to, but don’t yet, linearize the RGB channels include the all-important Curves and Levels operations.</a> For Levels and Curves, to operate on linearized RGB, change the precision to “Linear light” and use the Gamma hack. However, <a title="Jpeg attachment to bug757444 illustrating the problem. with the Curves histogram" href="https://bug757444.bugzilla-attachments.gnome.org/attachment.cgi?id=314590">the displayed histogram will be misleading</a>.</p>

<p>The GIMP 2.9.2 editing operations that automatically linearize the RGB channel values do this regardless of whether you choose “Perceptual gamma (sRGB)” or “Linear light” precision. The only thing that changes when you switch between the “Perceptual gamma (sRGB)” and “Linear light” precisions is <em>how colors blend when painting and when blending different layers together</em>.</p>

<p>(Well, what the Gamma hack actually does changes when you switch between the “Perceptual gamma (sRGB)” and “Linear light” precisions, but the way it changes varies from one operation to the next, which is why I advise to not use the Gamma hack unless you know exactly what you are doing.)</p>

<h3 id="using-the-linear-light-option-in-the-image-precision-menu">Using the “Linear light” option in the “Image/Precision” menu<a href="#using-the-linear-light-option-in-the-image-precision-menu" class="header-link"><i class="fa fa-link"></i></a></h3>
<figure class='big-vid' style='max-width:768px;'>
<img width="768" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/normal-blend-perceptual-vs-linear-cyan-background.jpg" alt="normal-blend-perceptual-vs-linear-cyan-background">
<figcaption><strong>Large soft disks painted on a cyan background.</strong><br/>
 <ol><li><i>Top row:</i> Painted using “Perceptual gamma (sRGB)” precision. Notice the darker colors surrounding the red and magenta disks, and the green surrounding the yellow disk: those are “gamma” artifacts.</li> <li><i>Bottom row:</i> Painted using “Linear Light” precision. This is how light waves blend to make colors out there in the real world.</li></ol>
</figcaption>
</figure>

<figure class='big-vid' style='max-width: 768px;'>
<img width="768" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/normal-blend-perceptual-vs-linear.jpg" alt="normal-blend-perceptual-vs-linear">
<figcaption><strong>Circles painted on a red background.</strong><br/>
 <ol><li><i>Top row:</i> Painted using “Perceptual gamma (sRGB)” precision. The dark edges surrounding the paint strokes are “gamma” artifacts.</li> <li><i>Bottom row:</i> Painted using “Linear Light” precision. This is how light waves blend to make colors out there in the real world.</li></ol>
</figcaption>
</figure>

<p>In GIMP 2.9.2, when using the Normal, Multiply, Divide, Addition, and Subtract painting and Layer blend modes:</p>
<ul class="double-space">
<li>For radiometrically correct Layer blending and painting, use the “Image/Precision” menu to select the “Linear light” precision option.</li> 

<li>When “Perceptual gamma (sRGB)” is selected, layers and colors will blend and paint like they blend in GIMP 2.8, which is to say there will be “gamma” artifacts.</li> </ul>
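The “gamma” artifacts above are easy to reproduce numerically. In this sketch (standard sRGB formulas, not GIMP code), a 50/50 blend of red and cyan done directly on the perceptually encoded values gives middle gray, while linearizing first, averaging, and re-encoding gives the noticeably lighter gray that real light mixing produces:

```python
def srgb_to_linear(v):
    # Invert the standard sRGB tone response curve for one channel.
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    # Re-apply the standard sRGB tone response curve.
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

red, cyan = (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)

# Naive 50/50 blend on the encoded ("perceptual") values: middle gray.
perceptual = [(a + b) / 2 for a, b in zip(red, cyan)]

# Radiometrically correct blend: linearize, average, re-encode.
# Each channel comes out around 0.735 -- a visibly lighter gray.
linear = [linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2)
          for a, b in zip(red, cyan)]
```

The gap between 0.5 and roughly 0.735 per channel is exactly the darkening you see around the disks in the “Perceptual gamma (sRGB)” rows of the figures above.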

<p>The LCH painting and Layer blend modes will <em>always</em> blend using Linear light precision, regardless of what you choose in the “Image/Precision” menu.</p>

<p>What about all the other Layer and painting blend modes? The concept of “radiometrically correct” doesn’t really apply to those other blend modes, so choosing between “Perceptual gamma (sRGB)” and “Linear light” depends entirely on what you, the artist or photographer, actually want to accomplish. Switching back and forth is time-consuming so I tend to stay at “Linear light” precision all the time, unless I really, really, really want a blend mode to operate on perceptually uniform RGB.</p>

<h3 id="a-note-on-interoperability-between-krita-and-gimp">A note on interoperability between Krita and GIMP<a href="#a-note-on-interoperability-between-krita-and-gimp" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Many digital artists and photographers are switching to linear gamma image editing. Let’s say you use Krita for digital painting in a true linear gamma sRGB profile, specifically <a title="Krita/Manual/ColorManagement, section on Linear and Gamma corrected colours. The whole tutorial is very well worth reading." href="https://userbase.kde.org/Krita/Manual/ColorManagement">the “sRGB-elle-V4-g10.icc” profile that is supplied with recent Krita installations</a>, and you want to export your image from Krita and open it with GIMP 2.9.2.</p> 

<p>Upon opening the image, GIMP will automatically detect that the image is in a linear gamma color space, and will offer you the option to keep the embedded profile or convert to the GIMP built-in sRGB profile. Either way, GIMP will automatically mark the image as using “Linear light” precision.</p> 

<p>For interoperability between Krita and GIMP, when editing a linear gamma sRGB image that was exported to disk by Krita:</p> 
<ol>
<li>Upon importing the Krita-exported linear gamma sRGB image into GIMP, elect to <em>keep</em> the embedded “sRGB-elle-V4-g10.icc” profile.</li> 
<li><em>Keep the precision at “Linear light”</em>. </li>
<li>Then <em>assign</em> the GIMP built-in Linear RGB profile (“Image/Color management/Assign”). The GIMP built-in Linear RGB profile is functionally exactly the same as Krita’s supplied “sRGB-elle-V4-g10.icc” profile (as are the GIMP built-in sRGB profile and Krita’s “sRGB-elle-V4-srgbtrc.icc” profile).</li></ol>

<p>Once you’ve assigned the GIMP built-in Linear RGB profile to the imported linear gamma sRGB Krita image, then feel free to change the precision back and forth between “Linear light” and “Perceptual gamma (sRGB)”, as suits your editing goal.</p>

<p>When you are finished editing the image that was imported from Krita to GIMP:</p>

<ol>
<li>Convert the image to one of the “Perceptual gamma (sRGB)” precisions (“Image/Precision”).</li>
<li>Convert the image to the Krita-supplied “sRGB-elle-V4-g10.icc” profile (“Image/Color management/Convert”).</li>
<li>Export the image to disk and import it into Krita.</li>
</ol>

<p>If your Krita image is in a color space other than sRGB, I would suggest that you simply not try to edit non-sRGB images in GIMP 2.9.2 because many GIMP 2.9.2 editing operations do depend on hard-coded sRGB color space parameters.</p>


<h2 id="gimp-2-9-2-s-unbounded-floating-point-icc-profile-conversions-handle-with-care-">GIMP 2.9.2’s unbounded floating point ICC profile conversions (handle with care!)<a href="#gimp-2-9-2-s-unbounded-floating-point-icc-profile-conversions-handle-with-care-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Compared to most other RGB color spaces, the sRGB color space gamut is very small. When shooting raw, it’s <a title="Nine Degrees Below Photography: Photographic colors that exceed the very small sRGB color gamut" href="http://ninedegreesbelow.com/photography/srgb-versus-photographic-colors.html">incredibly easy to capture colors that exceed the sRGB color space</a>.</p> 

<figure class='big-vid' style='max-width: 768px;'>
<img width="768" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/srgb-inside-prophoto-3-views.jpg" alt="srgb-inside-prophoto-3-views">
<figcaption><strong>The sRGB (the gray blob) and ProPhotoRGB (the multicolored wire-frame) color spaces as seen from different viewing angles inside the CIELAB reference color space.</strong> <em>(Images produced using ArgyllCMS and View3DScene).</em></figcaption>
</figure>


<p>Every time you convert saturated colors from larger gamut RGB working spaces to GIMP’s built-in sRGB working space <em>using floating point precision</em>, you run the risk of producing out of gamut RGB channel values. Rather than just explaining how this works, it’s better if you experiment and see for yourself:</p>

<ol class="double-space">
<li>Download this 16-bit integer ProPhotoRGB png, “<a href="http://ninedegreesbelow.com/photography/gimp/users-guide/saturated-colors.png">saturated-colors.png</a>”.</li>

<li>Open “saturated-colors.png” with GIMP 2.9.2. GIMP will report the color space profile as “LargeRGB-elle-V4-g18.icc” — this profile is functionally equivalent to ProPhotoRGB.</li>

<li>Immediately change the precision to 32-bit floating point (“Image/Precision/32-bit floating point”) and check the “Perceptual gamma (sRGB)” option.</li>

<li>Using the Color Picker Tool, make sure the Color Picker is set to “Use info Window” in the Tools dialog. Then eye-dropper the color squares, and make sure to set one of the columns in the Color Picker info Window to “Pixel”. The red square will eye-dropper as (1.000000, 0.000000, 0.000000). The cyan square will eyedropper as (0.000000, 1.000000, 1.000000), and so on. All the channel values will be either 1.000000 or 0.000000.</li>

<li>While still at 32-bit floating point precision, and still using the “Perceptual gamma (sRGB)” option, convert “saturated-colors.png” to GIMP’s built-in sRGB.</li>

<li>Eyedropper the color squares again. The red square will now eyedropper as approximately (1.363299, -2.956852, -0.110389), the cyan square will eyedropper as approximately (-13.365499, 1.094588, 1.003746), and so on.</li> 

<li>For extra credit, change the precision from 32-bit floating point “Perceptual gamma (sRGB)” to 32-bit floating point “Linear light” and eye-dropper the colors again. I will leave it to you as an exercise to figure out why the eye-droppered RGB “Pixel” values change so radically when you switch back and forth between “Perceptual gamma (sRGB)” and “Linear light”.</li>

</ol>

<p>Where did the funny RGB channel values come from? At floating point precision, GIMP uses LCMS2 to do <a title="Nine Degrees Below Photography: LCMS2 Unbounded ICC Profile Conversions" href="http://ninedegreesbelow.com/photography/lcms2-unbounded-mode.html"><i>unbounded</i> ICC profile conversions</a>. This allows an RGB image to be converted from the source to the destination color space without clipping otherwise out of gamut colors. So instead of clipping the RGB channel values to the <a title="Nine Degrees Below Photography: What are 'Clipped Colors' from ICC Profile Conversions?" href="http://ninedegreesbelow.com/photography/icc-profile-conversion-clipped-colors-examples.html">boundaries of the very small sRGB color gamut</a>, the sRGB color gamut was effectively “unbounded”.</p>
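You can see where the negative values come from with a back-of-the-envelope matrix conversion. The sketch below converts pure ProPhotoRGB red to linear sRGB through XYZ using the published primaries matrices, but, for brevity, skips the D50→D65 chromatic adaptation between the two spaces, so the numbers are only approximate; the sign pattern is what matters:

```python
# Linear ProPhotoRGB -> XYZ (D50), standard primaries matrix.
PROPHOTO_TO_XYZ = [
    [0.7977, 0.1352, 0.0313],
    [0.2880, 0.7119, 0.0001],
    [0.0000, 0.0000, 0.8249],
]
# XYZ (D65) -> linear sRGB, standard matrix. (We skip the D50 -> D65
# adaptation between the two steps, so results are approximate.)
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def apply(m, v):
    # Multiply a 3x3 matrix by a 3-vector.
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

prophoto_red = [1.0, 0.0, 0.0]  # fully saturated ProPhotoRGB red
srgb = apply(XYZ_TO_SRGB, apply(PROPHOTO_TO_XYZ, prophoto_red))
# Unbounded result: roughly [2.14, -0.23, -0.01] -- R greater than 1.0
# plus negative G and B, i.e. far outside the sRGB gamut.
```

A bounded (integer) conversion would clamp all three channels into [0.0, 1.0]; an unbounded floating point conversion keeps these out-of-range values intact.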

<p>When you do an unbounded ICC profile conversion from a larger color space to sRGB, all the otherwise out of gamut colors are encoded using at least one sRGB channel value that is less than zero. And you might get one or more channel values that are greater than 1.0. Figure 11 below gives you a visual idea of the difference between bounded and unbounded ICC profile conversions:</p> 

<figure class='big-vid' style="max-width: 769px;">
<img width="769" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/red-flower-clipping-prophoto-to-srgb.jpg" alt="red-flower-clipping-prophoto-to-srgb">
<figcaption><strong>Unbounded (unclipped floating point) and bounded (clipped integer) conversions of a very colorful red flower from the original ProPhotoRGB color space to the much smaller sRGB color space.</strong> <em>(Images produced using ArgyllCMS and View3DScene).</em><br/><br/>

<ul>
<li><i>Top row:</i> Unbounded (unclipped floating point) and bounded (clipped integer) conversions of a very colorful red flower from the original ProPhotoRGB color space to the much smaller sRGB color space. The unclipped flower is on the left and the clipped flower is on the right.</li>

<li><i>Middle and bottom rows:</i> the unclipped and clipped flower colors in the sRGB color space. The unclipped colors are shown on the left and the clipped colors are shown on the right: <ul> <li class="none">The gray blobs are the boundaries of the sRGB color gamut.</li>
<li>The middle row shows the view inside CIELAB looking straight down the LAB Lightness axis.</li> 
<li>The bottom row shows the view inside CIELAB looking along the plane formed by the LAB A and B axes.</li></ul></li>
</ul>

The unclipped sRGB colors shown on the left are all encoded using at least one sRGB channel value that is less than zero, that is, using a negative RGB channel value.
</figcaption>
</figure>


<p>When converting saturated colors from larger color spaces to sRGB, not clipping would seem to be much better than clipping. Unfortunately a whole lot of RGB editing operations don’t work when performed on negative RGB channel values. In particular, <a title="Nine Degrees Below Photography: Multiplying out of gamut colors in the unbounded sRGB color space produces meaningless results" href="http://ninedegreesbelow.com/photography/unbounded-srgb-multiply-produces-meaningless-results.html">multiplying such colors produces meaningless results</a>, which of course applies not just to the Multiply and Divide blend modes (division and multiplications are inverse operations), but to <em>all</em> editing operations that involve multiplication by a color (other than gray, which is a special case).</p>

<p>So here’s one workaround you can use to clip the out of gamut channel values: Change the precision of “saturated-colors.png” from 32-bit floating point to 32-bit <i>integer</i> precision (“Image/Precision/32-bit integer”). This will clip the out of gamut channel values (integer precision always clips out of gamut RGB channel values). Depending on your monitor profile’s color gamut, you might or might not see the displayed colors change appearance; on a wide-gamut monitor, the change will be obvious.</p> 

<p>When switching to integer precision, all colors are <em>clipped</em> to fit within the sRGB color gamut. Switching back to floating point precision won’t restore the clipped colors.</p>
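The clipping that integer precision applies is just a per-channel clamp into the display range, and it is one-way: once clamped, the original out-of-gamut values are gone. A tiny sketch, using the unbounded red-square values from the experiment above:

```python
def clamp(v, lo=0.0, hi=1.0):
    """Clip one channel value into the displayable [0, 1] range."""
    return max(lo, min(hi, v))

# The unbounded "red square" values from the eyedropper experiment:
unbounded = (1.363299, -2.956852, -0.110389)
clipped = tuple(clamp(v) for v in unbounded)
# clipped is (1.0, 0.0, 0.0): switching back to floating point cannot
# undo this -- everything that distinguished the original color from
# pure sRGB red has been discarded.
```

This is why the order of operations matters: clip only when you have decided you no longer need the out-of-gamut information.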

<aside class="more"><h4>More about out of gamut channel values</h4>

<p>Editing operations that only use add/subtract (which are inverse of each other), and/or multiply/divide by gray (where R=G=B), work just fine on colors that are encoded using one or more negative channel values. Almost all of the problems with <a title="Nine Degrees Below Photography: Using unbounded sRGB as a universal color space for image editing is a really bad idea" href="http://ninedegreesbelow.com/photography/unbounded-srgb-as-universal-working-space.html">unbounded sRGB image editing</a> have to do with editing operations that use multiply and divide.</p>

<p>I’m glossing over the difference between “out of gamut and encoded using at least one negative channel value” and “in gamut high dynamic range colors”, which are encoded using at least one channel value that is &gt;1.0, but no channel value that is &lt;0.0. In this latter case the color is inside the sRGB color gamut for HDR editing, but it falls outside the “0.0 to 1.0” floating point range for <a title="Nine Degrees Below Photography: Models for image editing: Display-referred and scene-referred" href="http://ninedegreesbelow.com/photography/display-referred-scene-referred.html">display-referred editing.</a></p>
</aside>

<p>As an important aside (and contrary to a distressingly popular assumption), when doing a normal “bounded” conversion to sRGB, <a title="Nine Degrees Below Photography: ICC Profile Conversion Intents" href="http://ninedegreesbelow.com/photography/icc-profile-conversion-intents.html">using “Perceptual intent” does <em>not</em> “keep all the colors”</a>. The regular and linear gamma sRGB working color space profiles are matrix profiles, which don’t have perceptual intent tables. When you ask for perceptual intent and the destination profile is a matrix profile, what you get is relative colorimetric intent, which clips.</p>


<h2 id="using-gimp-2-9-2-s-floating-point-precision-for-unclamped-editing">Using GIMP 2.9.2’s floating point precision for unclamped editing<a href="#using-gimp-2-9-2-s-floating-point-precision-for-unclamped-editing" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="high-bit-depth-gimp-s-unclamped-editing-a-whole-realm-of-new-editing-possibilities">High bit depth GIMP’s unclamped editing: a whole realm of new editing possibilities<a href="#high-bit-depth-gimp-s-unclamped-editing-a-whole-realm-of-new-editing-possibilities" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I’ve warned you about the bad things that can happen when you try to multiply or divide colors that are encoded using negative sRGB channel values. However, out of gamut sRGB channel values can also be incredibly useful.</p> 

<p>GIMP 2.9.2 does provide a number of “unclamped” editing operations from which the clipping code in the equivalent GIMP 2.8 operation has been removed. For example, at floating point precision, the Levels upper and lower sliders, Unsharp Mask, Channel Mixer and “Colors/Desaturate/Luminance” do not clip out of gamut RGB channel values (however, Curves does clip). Also the Normal, Lightness, Chroma, and Hue blend modes do not clip out of gamut channel values. </p> 

<p>Unclamped editing opens up a whole realm of new editing possibilities. Quoting from <a title="Nine Degrees Below Photography: tutorial on using high bit depth GIMP's new LCH blend modes and unclamped editing operations." href="http://ninedegreesbelow.com/photography/high-bit-depth-gimp-tutorial-edit-tonality-color-separately.html">Autumn colors: An Introduction to High Bit Depth GIMP’s New Editing Capabilities</a>:</p>

<blockquote>
<p>Unclamped editing operations might sound more arcane than interesting, but especially for photographers this is a really big deal:</p>
<ul>
    <li>Automatically clipped RGB data produces lost detail and causes hue and saturation shifts.</li>
    <li>Unclamped editing operations allow you, the photographer, to choose when and how to bring the colors back into gamut.</li>
    <li>Of interest to photographers and digital artists alike, unclamped editing sets the stage for (and already allows very rudimentary) HDR scene-referred image editing.</li></ul>
</blockquote>

<p>Having used high bit depth GIMP for quite a while now, I can’t imagine going back to editing that is constrained to only using clipped RGB channel values. The <cite>Autumn colors</cite> tutorial provides a start-to-finish editing example making full use of unclamped editing and the LCH blend modes, with a downloadable XCF file so you can follow along.</p>


<h3 id="if-the-thought-of-working-with-unclamped-rgb-data-is-unappealing-use-integer-precision">If the thought of working with unclamped RGB data is unappealing, use integer precision<a href="#if-the-thought-of-working-with-unclamped-rgb-data-is-unappealing-use-integer-precision" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>If working with unclamped RGB channel data is simply not something you want to do, then use integer precision for all your image editing. At integer precision <i>all</i> editing operations clip. This is a function of integer encoding and so happens regardless of whether the particular editing function includes or doesn’t include clipping code.</p>

<h2 id="looking-to-the-future-gimp-3-0-and-beyond">Looking to the future: GIMP 3.0 and beyond<a href="#looking-to-the-future-gimp-3-0-and-beyond" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Even though GIMP 2.10 hasn’t yet been released, high bit depth GIMP is already an amazing image editor. GIMP 3.0 and beyond will bring many more changes, including the port to GTK+3 (for GIMP 3.0), full color management for any well-behaved RGB working space (maybe by 3.2?), plus extended LCH processing with HSV strictly for use with legacy files. Also users will eventually have the ability to choose “Perceptual” encodings other than the sRGB TRC.</p> 

<p>If you would like to see GIMP 3.0 and beyond arrive sooner rather than later, GIMP is coded, documented, and maintained by volunteers, and GIMP needs more developers. If you are not a programmer, there are <a title="GIMP website: Ways to contribute to GIMP development" href="http://www.gimp.org/develop/">many other ways you can contribute to GIMP development.</a></p>

<p><small><strong>All text and images &copy;2015 <a href="http://ninedegreesbelow.com/">Elle Stone</a>, all rights reserved.</strong></small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Happy Birthday GIMP! ]]></title>
            <link>https://pixls.us/blog/2015/11/happy-birthday-gimp/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/11/happy-birthday-gimp/</guid>
            <pubDate>Wed, 25 Nov 2015 13:25:15 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/lede_Mimir.jpg" /><br/>
                 <h1>Happy Birthday GIMP!</h1>  
                 <h2>Also, wallpapers and darktable 2.0 creeps even closer!</h2>   
<p>I got busy building a <a href="https://www.gimp.org">birthday present for a project</a> I work with and all sorts of neat things happened in my absence!
The <a href="http://www.ubuntu.com/">Ubuntu</a> <a href="https://wiki.ubuntu.com/UbuntuFreeCultureShowcase"><em>Free Culture Showcase</em></a> chose winners for its wallpaper contest for <a href="http://releases.ubuntu.com/15.10/">Ubuntu 15.10</a> ‘Wily Werewolf’ (and quite a few community members were among those chosen).</p>
<p>The <a href="http://www.darktable.org">darktable</a> crew is speeding along to a 2.0 release with a new <a href="https://pixls.us/blog/2015/11/happy-birthday-gimp/#darktable-2-0-rc2">RC2 being released</a>.</p>
<p>Also, a great big <a href="https://pixls.us/blog/2015/11/happy-birthday-gimp/#gimp-birthday"><strong>HAPPY 20<sup>th</sup> BIRTHDAY GIMP</strong></a>!
I made you a present.  I hope it fits and you like it! :)</p>
<!-- more -->
<h2 id="ubuntu-wallpapers"><a href="#ubuntu-wallpapers" class="header-link-alt">Ubuntu Wallpapers</a></h2>
<p>Back in early September I <a href="https://discuss.pixls.us/t/ubuntu-free-culture-showcase/382">posted on discuss</a> about the <a href="https://wiki.ubuntu.com/UbuntuFreeCultureShowcase">Ubuntu Free Culture Showcase</a> that was looking for wallpaper submissions from the free software community to coincide with the release of Ubuntu 15.10 ‘Wily Werewolf’.
The winners were recently chosen from among the submissions and several of our community members had their images chosen!</p>
<p>The winning entries from our community include:</p>
<figure class='big-vid'>
<a href='https://www.flickr.com/photos/carmelo75/21455138181' title='Moss inflorescence by carmelo75 on Flickr'>
<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/carmelo75.jpg" alt='Moss inflorescence by carmelo75'/>
</a>
<figcaption>
<a href="https://www.flickr.com/photos/carmelo75/21455138181"><em>Moss inflorescence</em></a><br/>
The first winner is from PhotoFlow creator <a href="http://photoflowblog.blogspot.com">Andrea Ferrero</a>
</figcaption>
</figure>

<figure class='big-vid'>
<a href='https://www.flickr.com/photos/40792319@N04/20651557934' title='Light my fire, evening sun by Dariusz Duma on Flickr'>
<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/Dariusz.jpg" alt='Light my fire, evening sun by Dariusz Duma'/>
</a>
<figcaption>
<a href="https://www.flickr.com/photos/40792319@N04/20651557934"><em>Light my fire, evening sun</em></a><br/>
by <a href="https://www.flickr.com/photos/40792319@N04/">Dariusz Duma</a>
</figcaption>
</figure>

<figure class='big-vid'>
<a href='https://www.flickr.com/photos/philipphaegi/21155753321' title='Sitting Here, Making Fun by Philipp Haegi on Flickr'>
<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/Mimir.jpg" alt='Sitting Here, Making Fun by Philipp Haegi'/>
</a>
<figcaption>
<a href="https://www.flickr.com/photos/philipphaegi/21155753321"><em>Sitting Here, Making Fun</em></a><br/>
by <a href="https://www.flickr.com/photos/philipphaegi/">Mimir</a>
</figcaption>
</figure>

<figure class='big-vid'>
<a href='https://www.flickr.com/photos/patdavid/4624063643' title='Tranquil by Pat David'>
<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/Pat.jpg" alt='Tranquil by Pat David'/>
</a>
<figcaption>
<a href="https://www.flickr.com/photos/patdavid/4624063643"><em>Tranquil</em></a><br/>
by <a href="https://www.flickr.com/photos/patdavid/">Pat David</a>
</figcaption>
</figure>

<p>A big congratulations to you all on having such amazing images chosen!
If you’re running Ubuntu 15.10, you can grab the <code>ubuntu-wallpapers</code> package to <a href="https://launchpad.net/ubuntu/wily/+source/ubuntu-wallpapers">get these images right here</a>!</p>
<h2 id="darktable-2-0-rc2"><a href="#darktable-2-0-rc2" class="header-link-alt">darktable 2.0 RC2</a></h2>
<p>Hot on the heels of the prior release candidate, <a href="http://www.darktable.org">darktable</a> now <a href="https://github.com/darktable-org/darktable/releases/tag/release-2.0rc2">has an RC2 out</a>.
There are many minor bugfixes from the previous RC1, such as:</p>
<ul>
<li>high ISO fix for EXIF data of some cameras</li>
<li>various Macintosh fixes (fullscreen)</li>
<li>fixed a deadlock</li>
<li>updated translations</li>
</ul>
<p>The preliminary changelog from the 1.6.x series:</p>
<ul>
<li>darktable has been ported to gtk-3.0</li>
<li>new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)</li>
<li>added print mode</li>
<li>reworked screen color management (softproof, gamut check etc.)</li>
<li>removed dependency on libraw</li>
<li>removed dependency on libsquish (solves patent issues as a side effect)</li>
<li>unbundled pugixml, osm-gps-map and colord-gtk</li>
<li>text watermarks</li>
<li>color reconstruction module</li>
<li>raw black/white point module</li>
<li>delete/trash feature</li>
<li>addition to shadows&amp;highlights</li>
<li>more proper Kelvin temperature, fine-tuning preset interpolation in WB iop</li>
<li>noiseprofiles are in external JSON file now</li>
<li>monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)</li>
<li>aspect ratios for crop&amp;rotate can be added to conf (ae36f03)</li>
<li>navigating lighttable with arrow keys and space/enter</li>
<li>pdf export – some changes might happen there still</li>
<li>brush size/hardness/opacity have key accels</li>
<li>the facebook login procedure is a little different now</li>
<li>export can upscale</li>
<li>we no longer drop history entries above the selected one when leaving dr or switching images</li>
<li>text/font/color in watermarks</li>
<li>image information now supports gps altitude</li>
<li>allow adding tone- and basecurve nodes with ctrl-click</li>
<li>new “mode” parameter in the export panel</li>
<li>high quality export now downsamples before watermark and frame to guarantee consistent results</li>
<li>lua scripts can now add UI elements to the lighttable view (buttons, sliders etc…)</li>
<li>a new repository for external lua scripts was started.</li>
</ul>
<p>More information and packages can be <a href="https://github.com/darktable-org/darktable/releases/tag/release-2.0rc2">found on the darktable github repository</a>.</p>
<p>Remember, updating from the currently stable 1.6.x series is a one-way street for your edits (no downgrading from 2.0 back to 1.6.x).</p>
<h2 id="gimp-birthday"><a href="#gimp-birthday" class="header-link-alt">GIMP Birthday</a></h2>
<p>All together now…</p>
<p><em>Happy Birthday to GIMP!  Happy Birthday to GIMP!</em>…</p>
<figure>
<img src="https://pixls.us/blog/2015/11/happy-birthday-gimp/wilber-big.png" alt='GIMP Wilber Big Icon'/>
<figcaption>
</figcaption>
</figure>

<p>This past weekend <a href="https://www.gimp.org">GIMP</a> celebrated its 20<sup>th</sup> anniversary!
It was twenty years ago on November 21<sup>st</sup> that Peter Mattis <a href="http://www.gimp.org/about/prehistory.html#november-1995-an-announcement">announced the availability</a> of the <strong>“General Image Manipulation Program”</strong> on <em>comp.os.linux.development.apps</em>.</p>
<p>Twenty years later and GIMP doesn’t look a day older than a 1.0 release!
(Yes, there’s a <a href="https://en.wikipedia.org/wiki/Double_entendre">double entendre</a> there).</p>
<p>To celebrate, I’ve been spending the past couple of months getting a brand new website and infrastructure built for the project!
<small><em>Just in case anyone was wondering where I was or why I was so quiet.</em></small>
I like the way it turned out and how it’s shaping up, so go have a look if you get a moment!</p>
<p>There’s even an <a href="http://www.gimp.org/news/2015/11/22/20-years-of-gimp-release-of-gimp-2816/">official news post</a> about it on the new site!</p>
<h3 id="gimp-2-8-16"><a href="#gimp-2-8-16" class="header-link-alt">GIMP 2.8.16</a></h3>
<p>To coincide with the 20<sup>th</sup> anniversary, the team also released a new stable version in the 2.8 series: <a href="http://www.gimp.org/downloads/">2.8.16</a>.
Head over to the downloads page to pick up a copy!!</p>
<h2 id="new-photoflow-tutorial"><a href="#new-photoflow-tutorial" class="header-link-alt">New PhotoFlow Tutorial</a></h2>
<p>Still working hard and fast on <a href="http://photoflowblog.blogspot.com">PhotoFlow</a>, Andrea took some time to record a new video tutorial.
He walks through some basic usage of the program, in particular opening an image, adding layers and layer masks, and saving the results.
Have a look and if you have a moment give him some feedback!</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/HQpyJapbxrY?rel=0" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>Andrea is working on PhotoFlow at a very fast pace, so expect some more news about his progress very soon!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ News from the World of Tomorrow ]]></title>
            <link>https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/</guid>
            <pubDate>Mon, 02 Nov 2015 13:50:17 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/gmic_peppers.jpg" /><br/>
                 <h1>News from the World of Tomorrow</h1>  
                 <h2>And more awesome updates!</h2>   
                <p>Some awesome updates from the community and activity over on <a href="https://discuss.pixls.us">the forums</a>!
People have been busy doing some really neat things (that never fail to astound me).
The level of expertise we have floating around on so many topics is quite inspiring.</p>
<div class='fluid-vid'>
<iframe width="480" height="360" src="https://www.youtube-nocookie.com/embed/aiwA0JrGfjA?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p><br style="clear:both;"/></p>
<!-- more -->
<h2 id="darktable-2-0-release-candidate"><a href="#darktable-2-0-release-candidate" class="header-link-alt">darktable 2.0 Release Candidate</a></h2>
<h3 id="towards-a-better-darktable-"><a href="#towards-a-better-darktable-" class="header-link-alt">Towards a Better darktable!</a></h3>
<p>A nice Halloween weekend gift for the F/OSS photo community from <a href="http://www.darktable.org">darktable</a>: a first Release Candidate for a 2.0 release is now available!</p>
<p><a href="http://houz.org/">Houz</a> made the announcement on the forums this past weekend, and it includes some caveats. (Edits will be preserved going up, but it won’t be possible to downgrade back to 1.6.x.)</p>
<p>Preliminary notes from houz (and <a href="https://github.com/darktable-org/darktable/releases/tag/release-2.0rc1">Github</a>):</p>
<ul>
<li>darktable has been ported to gtk-3.0</li>
<li>new thumbnail cache replaces mipmap cache (much improved speed, less crashiness)</li>
<li>added print mode</li>
<li>reworked screen color management (softproof, gamut check etc.)</li>
<li>text watermarks</li>
<li>color reconstruction module</li>
<li>raw black/white point module</li>
<li>delete/trash feature</li>
<li>addition to shadows&amp;highlights</li>
<li>more proper Kelvin temperature, fine-tuning preset interpolation in WB iop</li>
<li>noiseprofiles are in external JSON file now</li>
<li>monochrome raw demosaicing (not sure whether it will stay for release, like Deflicker, but hopefully it will stay)</li>
<li>aspect ratios for crop&amp;rotate can be added to conf (ae36f03)</li>
<li>navigating lighttable with arrow keys and space/enter</li>
<li>pdf export – some changes might happen there still</li>
<li>brush size/hardness/opacity have key accels</li>
<li>the facebook login procedure is a little different now</li>
<li>export can upscale</li>
<li>we no longer drop history entries above the selected one when leaving dr or switching images</li>
<li>text/font/color in watermarks</li>
<li>image information now supports gps altitude</li>
<li>allow adding tone- and basecurve nodes with ctrl-click</li>
<li>we renamed mipmaps to thumbnails in the preferences</li>
<li>new “mode” parameter in the export panel</li>
<li>high quality export now downsamples before watermark and frame to guarantee consistent results</li>
<li>lua scripts can now add UI elements to the lighttable view (buttons, sliders etc…)</li>
<li>a new repository for external lua scripts was started.</li>
</ul>
<p><br style="clear:both;"/></p>
<h2 id="g-mic-1-6-7"><a href="#g-mic-1-6-7" class="header-link-alt">G’MIC 1.6.7</a></h2>
<p>Because apparently David Tschumperlé doesn’t sleep, a new release of <a href="http://gmic.eu">G’MIC</a> was <a href="https://discuss.pixls.us/t/release-of-gmic-1-6-7/426">recently announced</a> as well!
This release includes a really neat new patch-based texture resynthesizer that David has been playing with for a while now.</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/gmic_syntexturize_patch.jpg" alt="G'MIC Syntexturize Patch" width='960' height='661' />
<figcaption>
Re-synthesizing an input texture to an output of arbitrary size.
</figcaption>
</figure>

<p>It will build an output texture of arbitrary size based on an input texture (and can result in some neat looking peppers apparently).</p>
<p>Speaking of G’MIC…</p>
<h3 id="g-mic-for-adobe-after-effects-and-premier-pro"><a href="#g-mic-for-adobe-after-effects-and-premier-pro" class="header-link-alt">G’MIC for Adobe After Effects and Premier Pro</a></h3>
<p>Yes, I know it’s Adobe.
Still, I can’t help but think that this might be an awesome way to introduce some people to the amazing work being done by so many F/OSS creators.</p>
<p>Tobias Fleischer announced on <a href="https://discuss.pixls.us/t/gmic-for-adobe-after-effects-and-premiere-pro/452">this post</a> that he has managed to get G’MIC working with After Effects and Premiere Pro.
Even some of the more intensive filters like skeleton and Rodilius appear to be working fine (if a bit sluggish)!</p>
<figure class='big-vid'>
<img src='https://discuss.pixls.us/uploads/default/original/1X/fdef471a204c3f300f2bc435cf01ea64bb6b2b52.png' alt="Adobe After Effects G'MIC" />
</figure>


<h2 id="photoflow"><a href="#photoflow" class="header-link-alt">PhotoFlow</a></h2>
<p>You might remember <a href="http://photoflowblog.blogspot.ch/">PhotoFlow</a> as the project that creator <a href="http://photoflowblog.blogspot.com/">Andrea Ferrero</a> used when writing his <a href="https://pixls.us/blog/2015/07/photoflow-blended-panorama-tutorial/">Blended Panorama Tutorial</a> from a few months ago.
What you might not realize is that Andrea has also been working at a furious pace improving PhotoFlow (indeed it feels like every few days he is announcing new improvements - almost as fast as G’MIC!).</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/photoflow-persp-original.png" alt="PhotoFlow Perspective Correction Original" width='960' height='541' />
<img src="https://pixls.us/blog/2015/11/news-from-the-world-of-tomorrow/photoflow-persp-corrected.png" alt="PhotoFlow Perspective Correction Corrected" width='960' height='541' />
<figcaption>
Example of PhotoFlow perspective correction.
</figcaption>
</figure>

<p>His latest release was <a href="https://discuss.pixls.us/t/release-of-photoflow-version-0-2-3/476">announced a few days ago</a> as 0.2.3.
He’s incorporated some nice new improvements in this version:</p>
<ul>
<li>the addition of the <strong>LMMSE demosaicing</strong> method, directly derived from the algorithm implemented in RawTherapee</li>
<li>an <strong>impulse noise</strong> (also known as <strong>salt&amp;pepper</strong>) reduction tool, again derived from RawTherapee. It effectively reduces isolated bright and dark pixels.</li>
<li>a <strong>perspective correction</strong> tool, derived from darktable. It can simultaneously correct horizontal and vertical perspective as well as tilting, and works interactively.</li>
</ul>
<p>Head on over to the <a href="http://photoflowblog.blogspot.com/">PhotoFlow Blog</a> to check things out!</p>
<h2 id="lightzone-4-1-3-released"><a href="#lightzone-4-1-3-released" class="header-link-alt">LightZone 4.1.3 Released</a></h2>
<p>We don’t hear as often from folks using <a href="http://lightzoneproject.org/">LightZone</a>, but that doesn’t mean they’re not working on things!
In fact, Doug Pardee just stopped by the forums a while ago to <a href="https://discuss.pixls.us/t/lightzone-4-1-3-released/447">announce a new release</a> is available, 4.1.3.
(Bonus fun - read that topic to see the <a href="http://opensource.org/licenses/BSD-3-Clause"><em>Revised BSD License</em></a> go flying right over my head!)</p>
<p>Head over to <a href="http://lightzoneproject.org/content/september-27-2015-lightzone-v413-now-available">their announcement</a> to see what they’re up to.</p>
<h2 id="rapid-photo-downloader"><a href="#rapid-photo-downloader" class="header-link-alt">Rapid Photo Downloader</a></h2>
<p>We also had the developer of <a href="http://www.damonlynch.net/rapid/">Rapid Photo Downloader</a>, Damon Lynch, <a href="https://discuss.pixls.us/t/feedback-wanted-about-rapid-photo-downloader/463">stop by the forums to solicit feedback</a> from users just the other day.
A nice discussion ensued and is well worth reading (or even contributing to!).</p>
<p>Damon is working hard on the next release of RPD (apparently the biggest update since the project’s inception in 2007!), so go show some support and provide some feedback for him.</p>
<h2 id="rawtherapee-forum"><a href="#rawtherapee-forum" class="header-link-alt">RawTherapee Forum</a></h2>
<figure>
<img src='https://discuss.pixls.us/uploads/default/original/1X/b5a07c7985e481a95344c2f0e4d6c2a2cac0bda0.png' alt="RawTherapee Logo"/>
</figure>

<p>The <a href="http://rawtherapee.com/">RawTherapee</a> team is testing out having a <a href="https://discuss.pixls.us/c/software/rawtherapee">forum over here on discuss</a> as well (we welcomed the <a href="https://discuss.pixls.us/c/software/gmic">G’MIC community</a> a little while ago).
This is currently an alternate forum for the project (which <em>may</em> become the official forum in the future).
The category is quiet as we only just set it up, so drop by and say hello!</p>
<p>Speaking of RawTherapee…</p>
<h2 id="lede-image"><a href="#lede-image" class="header-link-alt">Lede Image</a></h2>
<p>I want to thank <a href="http://www.londonlight.org/">Morgan Hardwood (LondonLight.org)</a> for providing us a wonderful view of Röstånga, Sweden as a background image on the <a href="https://pixls.us/">main page</a>.</p>
<figure class='big-vid'>
<img src='https://pixls.us/images/main-lede/2015-06-06_rostanga_-_2.jpg' alt='Rostanga by Morgan Hardwood LondonLight.org'/>
<figcaption>
Röstånga by <a href="http://www.londonlight.org">Morgan Hardwood</a> 
<a class="cc" href="https://creativecommons.org/licenses/by-sa/4.0/">cba</a>
</figcaption>
</figure>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Users Guide to High Bit Depth GIMP 2.9.2, Part 1 ]]></title>
            <link>https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/</link>
            <guid isPermaLink="true">https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/</guid>
            <pubDate>Sun, 01 Nov 2015 18:00:00 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/flying-bird-between-trees.jpg" /><br/>
                 <h1>Users Guide to High Bit Depth GIMP 2.9.2, Part 1</h1>  
                 <h2>Part 1: New high bit depth precision options, new color space algorithms, and new color management options</h2>   
                <!-- ## New high bit depth precision options, New color management options, New algorithms -->
<h3 id="contents">Contents<a href="#contents" class="header-link"><i class="fa fa-link"></i></a></h3>
<ol class='toc'>
    <li><a href="#introduction-high-bit-depth-gimp-2-9-2">Introduction: high bit depth GIMP 2.9.2</a>

        <ol>
        <li><a href="#purpose-of-this-guide">Purpose of this guide</a></li>
        <li><a href="#useful-links-the-official-gimp-website-builds-for-windows-and-mac-building-gimp-on-linux">Useful links: the official GIMP website, builds for Windows and Mac, building GIMP on Linux</a></li>
        <li><a href="#editing-in-srgb-vs-editing-in-other-color-spaces">Editing in sRGB vs editing in other color spaces</a></li>
        <li><a href="#a-note-about-the-gamma-hack-that-s-provided-for-many-editing-operations">A note about the “Gamma hack” that’s provided for many editing operations</a></li>
        </ol></li>

    <li><a href="#new-high-bit-depth-precision-options">New high bit depth precision options</a>

        <ol>
        <li><a href="#menu-for-choosing-the-image-precision">Menu for choosing the image precision</a></li>
        <li><a href="#which-precision-should-you-choose-for-editing-">Which precision should you choose for editing?</a></li>
        <li><a href="#using-the-image-precision-options-when-exporting-an-image-to-disk">Using the image precision options when exporting an image to disk</a></li>
        </ol></li>

    <li><a href="#new-color-management-options">New color management options</a>

        <ol>
        <li><a href="#gimp-2-9-2-automatically-detects-camera-dcf-information">GIMP 2.9.2 automatically detects camera DCF information</a></li>
        <li><a href="#black-point-compensation">Black point compensation</a></li>
        </ol></li>

    <li><a href="#new-and-updated-algorithms-for-converting-to-luminance-lab-and-lch">New and updated algorithms for converting to Luminance, LAB, and LCH</a>

        <ol>
        <li><a href="#converting-srgb-images-from-color-to-black-and-white-using-luma-and-luminance">Converting sRGB images from Color to Black and White using Luma and Luminance</a></li>
        <li><a href="#decomposing-from-srgb-to-lab">Decomposing from sRGB to LAB</a></li>
        <li><a href="#lch-the-actually-usable-replacement-for-the-entirely-inadequate-color-space-known-as-hsv-">LCH: the actually usable replacement for the entirely inadequate color space known as “HSV”</a></li>
        </ol></li>
</ol>

<hr>
<h2 id="introduction-high-bit-depth-gimp-2-9-2">Introduction: high bit depth GIMP 2.9.2<a href="#introduction-high-bit-depth-gimp-2-9-2" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="purpose-of-this-guide">Purpose of this guide<a href="#purpose-of-this-guide" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>As announced on the GIMP users and developers mailing lists, the recent (November 26, 2015) GIMP 2.9.2 release is <a title="GIMP user's mailing list: ANNOUNCE: GIMP 2.9.2 released" href="https://mail.gnome.org/archives/gimp-user-list/2015-November/msg00066.html">the first development release in the GIMP 2.9.x series leading to GIMP 2.10</a>. The release announcement summarizes the many code changes that were made to port the old GIMP code over to GEGL’s high bit depth processing. </p>
<p>This user’s guide to high bit depth GIMP 2.9.2 introduces you to some of high bit depth GIMP’s new editing capabilities that are made possible by GEGL’s high bit depth processing. The guide also points out a few “gotchas” that you should be aware of. Please keep in mind that GIMP 2.9 really is a development branch, so many things don’t yet work exactly like they will work when GIMP 2.10 is released. </p>
<h3 id="useful-links-the-official-gimp-website-builds-for-windows-and-mac-building-gimp-on-linux">Useful links: the official GIMP website, builds for Windows and Mac, building GIMP on Linux<a href="#useful-links-the-official-gimp-website-builds-for-windows-and-mac-building-gimp-on-linux" class="header-link"><i class="fa fa-link"></i></a></h3>
<ul>
<li><a title="The official GIMP (Gnu Image Manipulation Program) website" href="http://www.gimp.org/">GIMP website</a></li>
<li><a title="GIMP and GEGL mailing lists and IRC" href="http://www.gimp.org/mail_lists.html">GIMP IRC and mailing list information</a></li>
<li><a title="Partha's Place" href="http://partha.com/">Partha’s GIMP 2.9 builds for Windows and Mac</a>, including a portable Windows build of my patched GIMP plus information on compiling GIMP on Windows.</li>
<li>Precompiled versions of high bit depth GIMP are more or less widely available for the various Linux operating systems. If you run Linux and you’d like to compile high bit depth GIMP yourself, <a title="Nine Degrees Below Photography: Guide to building GIMP on Linux" href="http://ninedegreesbelow.com/photography/build-gimp-in-prefix-for-artists.html">Building GIMP for artists and photographers</a> has step-by-step instructions.</li>
</ul>

<p>High bit depth GIMP is a work in progress. If you read the release notes for GIMP 2.9.2, you already know that the primary goal for the GIMP 2.10 release is full “Geglification” of the GIMP code base. </p>
<h3 id="editing-in-srgb-vs-editing-in-other-color-spaces">Editing in sRGB vs editing in other color spaces<a href="#editing-in-srgb-vs-editing-in-other-color-spaces" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>For best results when using GIMP 2.9.2, <strong><em>only edit sRGB images</em></strong>. </p>
<p>GIMP 2.8 has hard-coded sRGB parameters that make many editing operations produce wrong results for images that are in RGB working spaces other than sRGB. GIMP 2.9.2 still has these hard-coded sRGB parameters. Almost certainly GIMP 2.10 also will have these same hard-coded sRGB parameters. </p>
<p>Full support for editing images in other RGB working spaces won’t happen
at least until GIMP 3.0, and maybe not until some time after GIMP 3.0.
The next big change for GIMP will be the change-over from GTK+2 to
GTK+3, which is a pretty critical step to make as GTK+2 is on the verge
of being retired. GIMP development is a volunteer effort, porting GIMP
over to GEGL has required an enormous amount of work, and porting from
GTK+2 to GTK+3 isn’t exactly a trivial task. <a title="Hacking:Developer FAQ" href="http://wiki.gimp.org/wiki/Hacking:Developer_FAQ">More GIMP developers would help a lot</a>, so if you have any coding skills, please consider volunteering.</p>
<p>If you really do want to edit in color spaces other than sRGB “right now”, and you are comfortable building GIMP from git, <a title="Nine Degrees Below Photography: Patching GIMP for artists and photographers" href="http://ninedegreesbelow.com/photography/patch-gimp-in-prefix-for-artists.html">my patched version of GIMP 2.9</a> is hard-coded to use the much larger Rec.2020 color space, and it should be obvious how to modify the patches for other RGB working spaces.</p>
<h3 id="a-note-about-the-gamma-hack-that-s-provided-for-many-editing-operations">A note about the “Gamma hack” that’s provided for many editing operations<a href="#a-note-about-the-gamma-hack-that-s-provided-for-many-editing-operations" class="header-link"><i class="fa fa-link"></i></a></h3>
<figure>
<img width="374" height="282" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/gamma-hack.png" alt="Desaturate dialog with Gamma hack" />
</figure>

<p>A “Gamma hack” option is provided by many GIMP 2.9.2 editing operations. This option sits next to some text that says “(temp hack, please ignore)”. Unless you know exactly what you are doing, you really are better off not using the Gamma hack.</p>
<h2 id="new-high-bit-depth-precision-options">New high bit depth precision options<a href="#new-high-bit-depth-precision-options" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="menu-for-choosing-the-image-precision">Menu for choosing the image precision<a href="#menu-for-choosing-the-image-precision" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>As shown by the screenshot below, GIMP 2.9.2 offers six different image precisions:</p>
<ul><li>Three <em>integer</em> precisions: 8-bit integer, 16-bit integer, and 32-bit integer.</li> 
<li>Three <em>floating point</em> precisions: 16-bit floating point, 32-bit floating point, and 64-bit floating point.</li></ul>

<figure class=''>
<img width="739" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/precision-menu.png" alt="Precision Menu" >
<figcaption>
<strong>Menu for choosing the image precision.</strong> <br/>
<span style="font-weight: normal;">(The “Perceptual gamma (sRGB)” and “Linear light” switches are explained in <a href="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/#radiometrically-correct-editing">Part 2 of this article, under “Radiometrically correct editing”</a>)</span>.
</figcaption>
</figure>
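<p>As a rough illustration of what these options cost (a sketch of generic uncompressed RGBA buffers, not GIMP’s actual tile storage, which adds its own overhead), the RAM footprint follows directly from the bytes needed per channel:</p>

```python
# Rough RAM estimate for one uncompressed RGBA layer at each precision
# offered by GIMP 2.9.2. Byte sizes are generic storage sizes and say
# nothing about GIMP's internal tile cache.
BYTES_PER_CHANNEL = {
    "8-bit integer": 1,
    "16-bit integer": 2,
    "32-bit integer": 4,
    "16-bit floating point": 2,
    "32-bit floating point": 4,
    "64-bit floating point": 8,
}

def layer_megabytes(width, height, precision, channels=4):
    """Uncompressed size in MB of one width x height RGBA pixel buffer."""
    return width * height * channels * BYTES_PER_CHANNEL[precision] / 1e6

# A 24-megapixel image (6000 x 4000 pixels):
for name in BYTES_PER_CHANNEL:
    print(f"{name:>24}: {layer_megabytes(6000, 4000, name):6.0f} MB")
```

<p>For a 24-megapixel photo, a single such layer runs from roughly 96 MB at 8-bit integer to roughly 768 MB at 64-bit floating point, which is part of why the precision choice matters on lower-RAM machines.</p>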



<h3 id="which-precision-should-you-choose-for-editing-">Which precision should you choose for editing?<a href="#which-precision-should-you-choose-for-editing-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>If you have a fast computer with a lot of RAM, I recommend that you always promote your images to 32-bit floating point before you begin editing. Here’s why:</p>
<ol class="double-space ">
<li><b>Regardless of which precision you choose, all babl/GEGL/GIMP <i>internal</i> processing is done at 32-bit floating point</b>. Read that sentence three times.</li>

<li><b>There seems to be a <a title="GIMP bug report: Use 32-bit floating-point linear by default unless 8-bit" href="https://bugzilla.gnome.org/show_bug.cgi?id=734657">small speed penalty for <em>not</em> using 32-bit floating point precision</a>.</b></li>

<li><b>The Precision menu options dictate <strong>how much memory is used to store in RAM</strong> the results of internal calculations:</b> 
<ul><li>Choosing 32-bit floating point precision allows you to take full advantage of GEGL’s 32-bit floating point processing.</li>
<li>If you are working on a lower-RAM machine, performance will benefit from using 16-bit floating point or integer precision, but of course the price is a loss in precision as new editing operations use the results of previous edits as stored in memory.</li>

<li>On very low RAM systems, performance will benefit even more from using 8-bit integer precision. But if you use 8-bit integer precision, you are throwing away most of the advantages of working with a high bit depth image editor.</li>

<li>64-bit precision is made available mostly to accommodate importing and exporting very high bit precision images for scientific editing.  <em>You don’t gain any computational precision from using 64-bit precision for actual editing</em>. If you choose 64-bit precision for editing, all you are really doing is wasting system RAM resources.</li></ul>
</li>

</ol>

<p>As discussed in <a href="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/#using-gimp-2-9-2-s-floating-point-precision-for-unclamped-editing">Part 2 of this article, “Using GIMP 2.9.2’s floating point precision for unclamped editing”</a> (and depending on your editing style and goals), instead of 32-bit floating point precision, sometimes you might prefer using 16-bit or 32-bit <em>integer</em> precision. But making full use of all of high bit depth GIMP’s new editing capabilities does require using floating point precision. </p>
<div class="more"><p>Sometimes people assume that floating point is “more precise” than integer, but this isn’t actually true: At any given bit-depth, integer precision is more precise than floating point precision, but uses about the same amount of RAM:</p>
<ul class="double-space"><li>16-bit integer precision is <em>more</em> precise than 16-bit floating point precision, and the two precisions use about the same amount of RAM.</li>
<li>32-bit integer is <em>more</em> precise than 32-bit floating point precision, and the two precisions use about the same amount of RAM. </li>
</ul>

<p>GEGL/GIMP’s internal processing uses 32-bit floating point precision, so both of GIMP’s 32-bit precisions actually provide the same degree of precision.</p>
</div>
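<p>The claim that integer beats floating point at the same bit depth is easy to check numerically. The following is a minimal NumPy sketch (an illustration, assuming tone values normalized to [0, 1], not code from GIMP): it pushes a dense ramp of values through both 16-bit representations and counts how many distinct levels survive:</p>

```python
import numpy as np

# Dense ramp of tone values in [0, 1], stored at full float64 precision.
ramp = np.linspace(0.0, 1.0, 200_000)

# Round-trip through 16-bit unsigned integer: 65536 evenly spaced levels.
u16_levels = np.unique(np.round(ramp * 65535).astype(np.uint16)).size

# Round-trip through 16-bit float: only a 10-bit stored mantissa, and the
# representable values crowd near zero instead of being evenly spaced.
f16_levels = np.unique(ramp.astype(np.float16)).size

print(u16_levels, f16_levels)
assert u16_levels > f16_levels  # integer wins at the same bit depth
```

<p>The integer round-trip preserves every one of the 65536 levels, while the 16-bit float round-trip keeps far fewer distinct values across [0, 1], even though both use the same two bytes per channel.</p>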



<h3 id="using-the-image-precision-options-when-exporting-an-image-to-disk">Using the image precision options when exporting an image to disk<a href="#using-the-image-precision-options-when-exporting-an-image-to-disk" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The precision menu options have another extremely important use besides dictating the precision with which the results of editing operations are held in RAM. When you export the image to disk, the precision options allow you to change the bit depth of the exported image.</p>
<p>For example, some image editors can’t read floating point tiffs. So if you want to export an image as a tiff file that will be opened in another image editor that can only read 8-bit and 16-bit integer tiffs, and your GIMP XCF layer stack is currently using 32-bit floating point precision, you might want to change the XCF layer stack precision to 16-bit integer before exporting the tiff. </p>
<p>After exporting the image, don’t forget to hit “Undo” (“Edit/Undo”, or just use the Ctrl+Z keyboard shortcut) to get back to 32-bit floating point precision (or whatever other precision you were using).</p>
<h2 id="new-color-management-options">New color management options<a href="#new-color-management-options" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="gimp-2-9-2-automatically-detects-camera-dcf-information">GIMP 2.9.2 automatically detects camera DCF information<a href="#gimp-2-9-2-automatically-detects-camera-dcf-information" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>For reasons only the camera manufacturers know, instead of embedding a proper ICC profile in camera-saved jpegs, usually they embed <a title="Nine Degrees Below Photography: What is embedded color profile information?" href="http://ninedegreesbelow.com/photography/embedded-color-space-information.html">“DCF” and “maker note”</a> information. Whenever a camera manufacturer offers the option to embed a color space that isn’t officially supported by the DCF/Exif standards, each manufacturer feels free to improvise with new tags. </p>
<p>GIMP 2.9.2 does detect and assign the correct color space for most camera-saved jpegs. Like all editing software, GIMP has to play “catch up” with new tags for new color spaces offered by new camera models.</p>
<p>Tell your camera manufacturer that you want proper ICC profiles embedded in your camera-saved jpegs.</p>
<h3 id="black-point-compensation">Black point compensation<a href="#black-point-compensation" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Unlike GIMP 2.8, GIMP 2.9 does offer black point compensation as an explicit option, and it’s enabled by default.</p>
<figure>
<img width="768" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/gimp292-preferences-color-management.png" alt="GIMP 2.9.2 color management preferences">
<img width="453" class="imgcenter" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/gimp28-preferences-color-management.png" alt="GIMP 2.8 color management preferences"> 
<figcaption>
<strong>GIMP 2.9 offers black point compensation as an explicit option.</strong><br/>
As an aside, GIMP 2.8 actually did offer black point compensation, but in a very round-about way: In GIMP 2.8, if you used the default “Perceptual intent” for the Display rendering intent, then black point compensation was <em>dis</em>abled. And if you chose “Relative colorimetric” for the Display rendering intent, then black point compensation was <em>en</em>abled.</figcaption>
</figure>

<p>Even though black point compensation is checked by default in GIMP 2.9.2, whether you should use black point compensation partly depends on the color management settings provided by the other imaging software that you routinely use. For example, <a title="Nine Degrees Below Photography: Viewing Photographs on the Web" href="http://ninedegreesbelow.com//galleries/viewing-photographs-on-the-web.html">Firefox doesn’t provide for black point compensation</a>. As far as I can tell, neither RawTherapee nor darktable does. If one of your goals is to make sure that images look the same when displayed in various applications, you need to <a title="GIMP bug report: Gimp changes contrast and color of images" href="https://bugzilla.gnome.org/show_bug.cgi?id=723498">make sure all the relevant color management settings match</a>.</p>
<p>What is black point compensation? LCD monitors can’t display “zero light”. There’s always some minimum amount of light coming from the screen. Fill your screen with a solid black image, turn out all the lights and close the doors and curtains, and you’ll see what I mean.</p>
<p>Black point compensation compensates for the fact that RGB working spaces like sRGB allow you to produce colors (for example solid black) that are darker than your monitor can actually display. GIMP uses the LCMS black point compensation algorithm, which very sensibly scales the image tonality so that “solid black” in the image file maps to “darkest dark” in the monitor profile’s color gamut.</p>
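<p>The effect of that scaling can be sketched for a single luminance channel. This is a simplified illustration of the idea only, not the actual LCMS algorithm (which operates in XYZ), and the function name and the 0.005 monitor black point are hypothetical:</p>

```python
def black_point_compensate(y, src_black=0.0, dst_black=0.005, dst_white=1.0):
    # Linearly rescale source luminance so that the source black point
    # maps to the monitor's darkest displayable dark instead of clipping.
    scale = (dst_white - dst_black) / (dst_white - src_black)
    return dst_black + (y - src_black) * scale
```

<p>Solid black (0.0) in the file lands exactly on the monitor’s minimum (0.005 here), white stays at white, and everything in between is compressed proportionally.</p>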
<figure>
<img width="768" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/zero-nonzero-black-points.png" alt="Zero non-zero black points">
<figcaption><strong>Non-zero and zero black points</strong> <em>(images produced using icc_examin and ArgyllCMS)</em>.</figcaption>
</figure>

<p>However, depending on your monitor profile, using or not using black point compensation might not make any difference at all. The only time black point compensation makes a difference is if the Monitor profile you choose in “Preferences/Color management” actually does have a “higher than zero” black point. </p>
<p class="more">Why some monitor profiles do and some don’t have “higher than zero” black points is beyond the scope of this tutorial. Suffice it to say that a very accurate LCD monitor profile will always have a higher than zero black point. But sometimes, and especially for consumer-grade monitors, a very accurate monitor profile will make displayed images look worse than they will when using a less accurate monitor profile.</p>


<h2 id="new-and-updated-algorithms-for-converting-to-luminance-lab-and-lch">New and updated algorithms for converting to Luminance, LAB, and LCH<a href="#new-and-updated-algorithms-for-converting-to-luminance-lab-and-lch" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="converting-srgb-images-from-color-to-black-and-white-using-luma-and-luminance">Converting sRGB images from Color to Black and White using Luma and Luminance<a href="#converting-srgb-images-from-color-to-black-and-white-using-luma-and-luminance" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Under “Colors/Desaturate”, GIMP 2.8 offers three options for converting an sRGB image to black and white: Lightness, Luminosity, and Average:</p>
<ol>
<li>The “Lightness” option adds the lowest and highest RGB channel values and divides the result by two.</li>
<li>The “Luminosity” option is equal to (the Red channel times 0.213) plus (the Green channel times 0.715) plus (the Blue channel times 0.072).</li>
<li>The “Average” option sums all three RGB channel values and divides the result by three.</li>
</ol>
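<p>The three GIMP 2.8 options correspond to these simple per-pixel formulas (a minimal sketch in Python; channel values are assumed normalized to the 0.0–1.0 range, and the function names are mine):</p>

```python
def lightness(r, g, b):
    # "Lightness": average of the highest and lowest channel values
    return (max(r, g, b) + min(r, g, b)) / 2.0

def luminosity(r, g, b):
    # GIMP 2.8 "Luminosity" (renamed "Luma" in 2.9): weighted channel sum
    return 0.213 * r + 0.715 * g + 0.072 * b

def average(r, g, b):
    # "Average": arithmetic mean of the three channels
    return (r + g + b) / 3.0
```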

<p>GIMP 2.9.2 still offers all three options for converting an sRGB image to black and white. But the “Luminosity” option has been renamed <a title="Wikipedia: Luma (video)" href="https://en.wikipedia.org/wiki/Luma_%28video%29">Luma</a>, which is the technically correct term (<a title="Wikipedia: Luminosity (disambiguation)" href="https://en.wikipedia.org/wiki/Luminosity_%28disambiguation%29">though various image editors use the term “Luminosity” in various incorrect ways</a>).</p>
<p>Also, GIMP 2.9.2’s “Luma” option uses slightly different multipliers for calculating Luma: (the Red channel times 0.222) plus (the Green channel times 0.717) plus (the Blue channel times 0.061). The GIMP 2.8 multipliers were wrong, and the GIMP 2.9 multipliers are correct.</p>

<p class="more">Since I know you won’t be able to get any sleep until someone tells you why the multipliers for calculating Luma were changed, the GIMP 2.9 multipliers have been Bradford-adapted from D65 to D50, which is required for use in an ICC profile color-managed editing application (at least until the next version of the ICC specs is released and people figure out how to deal with the new freedom to use non-D50 reference white points).</p>

<p style="text-indent: 0;">GIMP 2.9.2 also offers a fourth option for converting sRGB images to black and white, which is “Luminance”. “Luminance” is short for <a title="Wikipedia: Relative Luminance" href="https://en.wikipedia.org/wiki/Relative_luminance">relative luminance</a>. Luminance is calculated using the same channel multipliers that are used to calculate Luma. The mathematical difference between calculating Luma and Luminance is as follows:</p> 
<ul>
<li>Luma is calculated using RGB channel values that are encoded using the sRGB TRC.</li>
<li>Luminance is calculated using linearized RGB channel values, producing a radiometrically correct and physically meaningful conversion from color to black and white.</li></ul>
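<p>The two calculations above can be sketched in Python, using the standard sRGB companding formula and the GIMP 2.9 multipliers quoted earlier (a sketch of the math, not GIMP’s actual code):</p>

```python
def srgb_to_linear(v):
    # Invert the sRGB TRC (standard sRGB companding formula)
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# GIMP 2.9 (D50-adapted) channel multipliers, as quoted above
R_W, G_W, B_W = 0.222, 0.717, 0.061

def luma(r, g, b):
    # Weighted sum of the TRC-encoded (perceptually encoded) channel values
    return R_W * r + G_W * g + B_W * b

def luminance(r, g, b):
    # Weighted sum of the linearized channel values
    return R_W * srgb_to_linear(r) + G_W * srgb_to_linear(g) + B_W * srgb_to_linear(b)
```

<p>For a neutral sRGB value of 0.5, Luma is simply 0.5, while Luminance is about 0.214; the two only agree at pure black and pure white.</p>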

<p>Of the various options in the “Colors/Desaturate” menu, “Luminance” is the only physically meaningful way to convert from color to black and white.</p>
<p>The Red, Green, and Blue Luma and Luminance channel multipliers are specific to the sRGB color space. These channel multipliers are actually the “Y” components of the sRGB ICC profile’s XYZ primaries. As you might expect, different RGB working spaces have different “Y” values, and so the GIMP 2.9.2 conversions to Luma and Luminance only produce correct results for sRGB images.</p>

<figure class='big-vid'>
<img src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/rgb-luminance-conversion-to-black-and-white.jpg" alt=""  />
<figcaption style='text-align:left; max-width:772px; margin:0 auto;'>
<strong>GIMP 2.9 sRGB Luminance and Luma conversions to black and white</strong><br/>
Click to compare sRGB Luminance and Luma conversions to black and white:<br><span class="toggle-swap" data-fig-swap="rgb-luminance-conversion-to-black-and-white.jpg">1. “Colors/Desaturate/Luminance” conversion to black and white</span>
<span class="toggle-swap" data-fig-swap="rgb-luma-conversion-to-black-and-white.jpg">2. “Colors/Desaturate/Luma” conversion to black and white</span>
</figcaption>
</figure>



<h3 id="decomposing-from-srgb-to-lab">Decomposing from sRGB to LAB<a href="#decomposing-from-srgb-to-lab" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Decomposing to LAB does use hard-coded sRGB parameters and so will produce wrong results in other RGB working spaces. </p>
<p>In GIMP 2.8, decomposing an sRGB image to LAB produced flatly wrong results.
In GIMP 2.9.2, decomposing an sRGB image to LAB does produce mathematically correct results. But if you use “drag and drop” to pull the decomposed grayscale layers over to your sRGB layer stack, there is still a small error in the resulting RGB layer. Figure 3 below illustrates the problem:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/red-green-blue-glass-color-LAB-L-mathematically-correct.jpg" alt="RGB Glass Color LAB L Mathematically Correct"  />
<figcaption style='text-align: left; max-width: 768px; margin:0 auto;'>
<strong>Decomposing to LAB and retrieving the LAB Lightness (“L”) channel</strong><br/>
<em>Click the links below the image to see the original color image and the results of decomposing to LAB plus “dragging and dropping the L channel” in GIMP 2.8 vs GIMP 2.9.</em>
<span class="toggle-swap" data-fig-swap="red-green-blue-glass-color-LAB-L-mathematically-correct.jpg">1. Mathematically correct conversion to LAB Lightness</span>
<span class="toggle-swap" data-fig-swap="red-green-blue-glass-color-LAB-L-gimp29-drag-drop.jpg">2. GIMP 2.9.2 decompose to LAB + drag and drop (a little wrong)</span>
<span class="toggle-swap" data-fig-swap="red-green-blue-glass-gimp28-incorrect-LAB-L-to-RGB.jpg">3. GIMP 2.8 decompose to LAB + drag and drop (not done on linearized RGB, so results are very wrong)</span>
<span class="toggle-swap" data-fig-swap="red-green-blue-glass-color.jpg">4. The original color layer that was decomposed to LAB</span>
<span class="toggle-swap" data-fig-swap="xicclu-lstar-lab-l-srgb-trc.png">5. Difference between the LAB and sRGB companding curves (the reason why “drag and drop” in GIMP 2.9 produces slightly wrong results)</span>
</figcaption>
</figure>


<p>Assuming you start with an image in the regular sRGB color space, then:</p>
<ul class="double-space">
<li>In GIMP 2.9.2, decomposing a layer to LAB produces mathematically correct results.

<p>However, dragging the resulting grayscale channels back to the RGB XCF color stack results in a slightly wrong result. This is because the dropped grayscale layer(s), which don’t have an embedded ICC profile, are assumed to be encoded using the sRGB <a title="Bruce Lindbloom's Equations for converting from RGB and LAB to XYZ" href="http://brucelindbloom.com/index.html?Eqn_RGB_to_XYZ.html">companding curve</a> (Tone Reproduction Curve, “TRC”), when really they are encoded using the LAB companding curve. This is a color management problem that can be solved by enabling GIMP to do grayscale color management (all that’s needed is a little developer time — did I mention that GIMP really does need more developers?).</p>

<p>As an incredibly important aside, a mathematically correct conversion from sRGB to LAB Lightness and back to sRGB produces exactly the same thing as using GIMP 2.9.2’s “Colors/Desaturate/Luminance” option to change an sRGB image from color to black and white.</p></li>

<li>In GIMP 2.8, decomposing a layer to LAB produces wildly mathematically incorrect results, and dragging the resulting channel(s) back to the RGB XCF color stack also produces wildly mathematically incorrect results. So older GIMP tutorials on using the LAB Lightness channel to convert an image to black and white won’t produce anywhere near the same results when using GIMP 2.9/GIMP 2.10.</li> 
</ul>
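<p>The source of that small “drag and drop” error is easy to see numerically: for the same linear value, the LAB Lightness companding curve and the sRGB TRC produce noticeably different encoded values. A sketch using the standard CIE and sRGB formulas (both rescaled to the 0.0–1.0 range; the function names are mine):</p>

```python
def lab_l_encode(y):
    # CIE L* companding: linear luminance (0.0-1.0) -> L* rescaled to 0.0-1.0
    if y > (6.0 / 29.0) ** 3:
        return 1.16 * y ** (1.0 / 3.0) - 0.16
    return y * 903.3 / 100.0

def srgb_encode(y):
    # sRGB TRC companding: linear luminance -> sRGB-encoded value
    if y > 0.0031308:
        return 1.055 * y ** (1.0 / 2.4) - 0.055
    return 12.92 * y
```

<p>For middle gray (linear 0.18), the L* encoding gives about 0.495 while the sRGB TRC gives about 0.461. A dropped grayscale layer that is really L*-encoded, but interpreted as sRGB-encoded, therefore comes out slightly wrong in the midtones.</p>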

<p>If you’d like to know more about “LAB Lightness to black and white”, the following two-part article untangles the massive amounts of confusion regarding converting an RGB image to black and white using the LAB Lightness channel:</p>
<ol>
<li><a title="LAB Tutorial, Part 1, Nine Degrees Below Photography" href="http://ninedegreesbelow.com/photography/lab-lightness-to-black-and-white-gimp28.html">LAB Lightness to black and white using GIMP 2.8</a>. </li>
<li><a title="LAB Tutorial, Part 2, Nine Degrees Below Photography" href="http://ninedegreesbelow.com/photography/lab-lightness-to-black-and-white-gimp29-photoshop.html">LAB Lightness to black and white using GIMP 2.9 and Photoshop</a> (the typical Photoshop tutorial on using the LAB Lightness channel to convert to black and white does produce mathematically <em>in</em>correct results).</li>
</ol>


<h3 id="lch-the-actually-usable-replacement-for-the-entirely-inadequate-color-space-known-as-hsv-">LCH: the actually usable replacement for the entirely inadequate color space known as “HSV”<a href="#lch-the-actually-usable-replacement-for-the-entirely-inadequate-color-space-known-as-hsv-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>LCH calculations do use hard-coded sRGB parameters, and so will produce wrong results in other RGB working spaces.</p>
<p><a title="Wikipedia: HSL and HSV" href="https://en.wikipedia.org/wiki/HSL_and_HSV">HSV</a> (“Hue/Saturation/Value”) is a <a title="Wikipedia: HSL and HSV Disadvantages" href="https://en.wikipedia.org/wiki/HSL_and_HSV#Disadvantages">sad little color space</a> designed for <a title="Wikipedia: HSL and HSV Motivations" href="https://en.wikipedia.org/wiki/HSL_and_HSV#Motivation">fast processing on slow computers, way back in the stone age of digital processing</a>. HSV is OK for picking colors from a color wheel. But it’s really wretched for just about any other editing application, because despite the fact that “HSV” stands for “Hue/Saturation/Value”, you actually can’t adjust color and tonality separately in the HSV color space.</p>
<p>“LCH” stands for “Lightness, Chroma, Hue”. LCH is mathematically derived from the <a title="Nine Degrees Below Photography: A small guided tour of color patches as located in the CIELAB reference color space." href="http://ninedegreesbelow.com/photography/pictures-of-color-spaces.html">CIELAB reference color space</a>, which in turn is a perceptually uniform transform of the <a title="Nine Degrees Below Photography: Completely Painless Programmer's Guide to XYZ, RGB, ICC, xyY, and TRCs" href="http://ninedegreesbelow.com/photography/xyz-rgb.html">CIEXYZ reference color space</a>. Unlike HSV, LCH is a physically meaningful color space that allows you to edit separately for color and tonality.</p>
<p>Very roughly speaking:</p>
<ul>
<li>LCH <em>Lightness</em> corresponds to HSV <em>Value</em>.</li>

<li>LCH <em>Chroma</em> corresponds to HSV <em>Saturation</em>.</li>

<li>LCH <em>Hue</em> corresponds to HSV <em>Hue</em> (the names are the same, but the two blend modes are based on very different mathematics).</li>

<li>LCH <em>Color</em> is a combination of LCH Chroma and Hue, and corresponds to HSV <em>Color</em>, which is a combination of HSV Hue and Saturation (again, the names are the same, but the two blend modes are based on very different mathematics).</li></ul>
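<p>LCH is just CIELAB expressed in polar coordinates: Chroma and Hue are derived from LAB’s a and b axes. A minimal sketch of that last step (the full RGB-to-LAB conversion is omitted):</p>

```python
import math

def lab_to_lch(L, a, b):
    # Chroma is the distance from the neutral (gray) axis;
    # Hue is the angle around it, in degrees
    C = math.hypot(a, b)
    H = math.degrees(math.atan2(b, a)) % 360.0
    return L, C, H
```

<p>Because Lightness is carried through untouched, editing Chroma or Hue leaves tonality alone, which is exactly what HSV fails to do.</p>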

<p>LCH blend modes and painting are a game-changing addition to high bit depth GIMP editing capabilities. If you’d like to see examples of what you can do with LCH, that you can’t even come close to doing with HSV, I’ve written a couple of tutorials on using GIMP’s LCH color space capabilities:</p>

<ol class="double-space">
<li><a title="LCH Blend modes tutorial, Nine Degrees Below Photograhy" href="http://ninedegreesbelow.com/photography/gimp-lch-blend-modes.html">A tutorial on GIMP’s very awesome LCH Blend Modes</a>, which shows how to use GIMP’s new LCH blend modes to repair a badly damaged image, and then to colorize a black and white rendering of the image.</li>

<li><a title="Tutorial on using LCH, Nine Degrees Below Photography" href="http://ninedegreesbelow.com/photography/high-bit-depth-gimp-tutorial-edit-tonality-color-separately.html">Autumn colors: An Introduction to High Bit Depth GIMP’s New Editing Capabilities</a>, which shows how to use GIMP’s new LCH blend modes to edit separately for color and tonality. </li>
</ol>

<figure class='big-vid'>
<img width="772" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/patch-front-fish.jpg" alt="Compare LCH vs HSV when restoring color.">
<figcaption style='max-width: 772px; text-align:left; margin:0 auto;'>Restoring color to a damaged image: LCH Color blend mode vs the HSV Color blend mode: The LCH Color blend mode produces smooth, believable color transitions. The HSV Color blend mode produces very splotchy results.
</figcaption>
</figure>

<figure class='big-vid'>
<img width="772" src="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-1/color-blend-modes-vs-tonality.jpg" alt="LCH vs HSV when changing color.">
<figcaption style='max-width: 772px; text-align:left; margin:0 auto;'>Changing an image’s color: LCH Color blend mode vs HSV Color blend mode: The LCH Color blend mode changes the image color without modifying the image tonality, whereas the HSV Color blend mode simultaneously changes tonality along with color (HSV blending with blue made the tonality darker, HSV blending with yellow made the tonality lighter).</figcaption>
</figure>

<p>I’m not an especially skilled programmer. In fact I find writing code to be a painfully slow exercise. But one major reason why I maintain a <a title="Nine Degrees Below Photography: Patching GIMP for artists and photographers" href="http://ninedegreesbelow.com/photography/patch-gimp-in-prefix-for-artists.html">patched version of high bit depth GIMP</a> is precisely so I can use the LCH color space not just for blending and painting, but also for <a title="GIMP bug report: Add LCH to the color picker" href="https://bugzilla.gnome.org/show_bug.cgi?id=749902">picking colors and as a replacement for the essentially useless HSV “Hue-Saturation” tool</a>. These particular editing capabilities will eventually make it into an official GIMP release, but I didn’t want to wait for “eventually” to happen.</p>

<p><a href="https://pixls.us/articles/users-guide-to-high-bit-depth-gimp-2-9-2-part-2/">Click here to go to Part 2</a> of this guide to GIMP 2.9.2!<br>Part 2 discusses using GIMP 2.9.2 to do radiometrically correct editing, unbounded ICC profile conversions, and unclamped editing.</p>
<p><small><strong>All text and images &copy;2015 <a href="http://ninedegreesbelow.com/">Elle Stone</a>, all rights reserved.</strong></small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Portrait Lighting Cheat Sheets ]]></title>
            <link>https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/</guid>
            <pubDate>Thu, 17 Sep 2015 14:23:35 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/Lighting-Samples.jpg" /><br/>
                 <h1>Portrait Lighting Cheat Sheets</h1>  
                 <h2>Blender to the Rescue!</h2>   
                <p>Many moons ago <a href="http://blog.patdavid.net/2012/03/visualize-photography-lighting-setups.html" title="Visualize Photography Lighting Setups in Blender">I had written about</a> acquiring a YN-560 speedlight for playing around with off-camera lighting.
At the time I wanted to experiment with how different modifiers might be used in a portrait setting.
Unfortunately, these were lighting modifiers that I didn’t own yet.</p>
<p>I wasn’t going to let that slow me down, though!</p>
<p>If you want to skip the how and why to get straight to the cheat sheets, <a href="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/#the-lighting-cheat-sheets">click here</a>.</p>
<p><a href="http://ir-ltd.net/">Infinite Realities</a> had released a full 3D scan by <a href="http://ir-ltd.net/tag/lee-perry-smith/" title="Possibly NSFW">Lee Perry-Smith</a> of his head that was graciously licensed under a <a href="http://creativecommons.org/licenses/by/3.0/" title="Creative Commons Attribution 3.0">Creative Commons Attribution 3.0 Unported License</a>.
For reference, here is a link to the <a href="http://www.ir-ltd.net/uploads/Infinite_Scan_Ver0.1.rar">object file and textures</a> (80MB) and the <a href="http://www.ir-ltd.net/uploads/Infinite_Scan_Displacements_Ver0.1.rar">displacement maps</a> (65MB) from the Infinite Realities website.</p>
<p>What I did was bring the high resolution scan and displacement maps into <a href="http://www.blender.org/">Blender</a> and manually create my lights with modifiers in a virtual space.
Then I could simply render what a particular light/modifier would look like with a realistic person being lit in any way I wanted.</p>
<!-- more -->
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/blender-view-256.png" alt="Blender View Lighting Setup"/>
</figure>

<p>This leads to all sorts of neat freedom to experiment with things to see how they might come out.
Here’s another look at the lede image:</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/th_Lighting-Samples.jpg" alt="Blender Lighting Samples" />
<figcaption>
Various lighting setups tested in Blender.
</figcaption>
</figure>

<p>I had originally intended to make a nice bundled application that would allow someone to try all sorts of different lighting setups, but my skills in Blender only go so far.
My skills at convincing others to help me didn’t go very far either. :)</p>
<p>So, if you’re OK with navigating around Blender already, feel free to check out <a href="http://blog.patdavid.net/2012/03/visualize-photography-lighting-setups.html" title="Visualize Photography Lighting Setups in Blender">my original blog post</a>
 to download the .blend file and give it a try!
<a href="https://about.me/jimmygunawan/bio">Jimmy Gunawan</a> even took it further and modified the .blend to work with Blender’s Cycles rendering as well.</p>
<div class="fluid-vid">
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/irLcpDdnkcM?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p>With the power to create a lighting visualization of any scenario I then had to see if there was something cool I could make for others to use…</p>
<h2 id="the-lighting-cheat-sheets"><a href="#the-lighting-cheat-sheets" class="header-link-alt">The Lighting Cheat Sheets</a></h2>
<p>I couldn’t help but generate some lighting cheat sheets to help others use as a reference.
I’ve seen some different ones around but I took advantage of having the most patient model in the world to do this with. :)</p>
<p>These were generated by rotating a 20” (<em>virtual</em>) softbox in a circle around the subject at 3 different elevations (0, 30&deg;, and 60&deg;).</p>
<p><em>Click the caption title for a link to the full resolution files</em>:</p>
<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/0-degrees-portrait-lighting-cheat-sheet-reference.jpg" alt='Blender Lighting Setup 0 degrees' />
<figcaption>
<a href="0-degrees-portrait-lighting-cheat-sheet-reference-full.jpg" title="Click for full resolution version">Softbox 0&deg; Portrait Lighting Cheat Sheet Reference</a><br/>
by Pat David (<a class='cc' href='https://creativecommons.org/licenses/by-sa/2.0/'>cba</a>)
</figcaption>
</figure>

<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/30-degrees-portrait-lighting-cheat-sheet-reference.jpg" alt='Blender Lighting Setup 30 degrees' />
<figcaption>
<a href="30-degrees-portrait-lighting-cheat-sheet-reference-full.jpg" title="Click for full resolution version">Softbox 30&deg; Portrait Lighting Cheat Sheet Reference</a><br/>
by Pat David (<a class='cc' href='https://creativecommons.org/licenses/by-sa/2.0/'>cba</a>)
</figcaption>
</figure>

<figure class='big-vid'>
<img src="https://pixls.us/blog/2015/09/portrait-lighting-cheat-sheets/60-degrees-portrait-lighting-cheat-sheet-reference.jpg" alt='Blender Lighting Setup 60 degrees' />
<figcaption>
<a href="60-degrees-portrait-lighting-cheat-sheet-reference-full.jpg" title="Click for full resolution version">Softbox 60&deg; Portrait Lighting Cheat Sheet Reference</a><br/>
by Pat David (<a class='cc' href='https://creativecommons.org/licenses/by-sa/2.0/'>cba</a>)
</figcaption>
</figure>

<p>Hopefully these might prove useful as a reference for some folks.
Share them, print them out, tape them to your lighting setups! :)
I wonder if we could get some cool folks from the community to make something neat with them?</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Softness and Superresolution ]]></title>
            <link>https://pixls.us/blog/2015/09/softness-and-superresolution/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/09/softness-and-superresolution/</guid>
            <pubDate>Tue, 08 Sep 2015 17:13:08 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/09/softness-and-superresolution/francis.jpg" /><br/>
                 <h1>Softness and Superresolution</h1>  
                 <h2>Experimenting and Clarifying</h2>   
                <p>A small update on how things are progressing (hint: well!) and some neat things the community is playing with.</p>
<p>I have been quiet these past few weeks because I decided I didn’t have enough to do and thought a rebuild/redesign of the <a href="http://static.gimp.org">GIMP website</a> would be fun, apparently.
Well, it <em>is</em> fun and something that couldn’t hurt to do.
So I stepped up to help out.</p>
<!-- more -->
<h2 id="a-question-of-softness"><a href="#a-question-of-softness" class="header-link-alt">A Question of Softness</a></h2>
<p>There was <a href="https://www.facebook.com/groups/speedlightfundamentals/permalink/1627843414142335/">a thread</a> recently on a certain large social network in a group dedicated to off-camera flash.
The thread was started by someone with the comment:</p>
<blockquote>
<p>The most important thing you can do with your speed light is to put some rib <small>[sic]</small> stop sail cloth over the speed light to soften the light.</p>
</blockquote>
<p>Which just about gave me an aneurysm (those that know me and lighting can probably understand why).
Despite some sound explanations about why this won’t work to “soften” the light, there was a bit of back and forth about it.
To make matters worse, even after over 100 comments, <em>nobody</em> bothered to just go out and shoot some sample images to see it for themselves.</p>
<p>So I finally went out and shot some to illustrate and I figured they would be more fun if they were shared 
(I did actually post these <a href="https://discuss.pixls.us/t/light-source-softness/384">on our forum</a>).</p>
<p>I quickly set a lightstand up with a YN560 on it pointed at my garden statue.
I then took a shot with bare flash, one with diffusion material pulled over the flash head, and one with a 20” DIY softbox attached.</p>
<p>Here’s what the setup looked like with the softbox in place:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/softbox-setup.jpg" alt="Soft Light Test - Softbox Setup" width="640" height="480" />
<figcaption>
Simple light test setup (with a DIY softbox in place).
</figcaption>
</figure>

<p>Remember, this was done to demonstrate that simply placing some diffusion fabric over the head of a speedlight does nothing to “soften” the resulting light:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/francis-bare.jpg" data-swap-src="francis-diffusion-panel.jpg" alt="Softness test image bare flash" width="640" height="640" />
<figcaption>
Bare flash result.  Click to compare with diffusion material.
</figcaption>
</figure>

<p>This shows clearly that diffusion material over the flash head does <em>nothing</em> to affect the “softness” of the resulting light.</p>
<p>For a comparison, here is the same shot with the softbox being used:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/francis-softbox.jpg" data-swap-src="francis-diffusion-panel.jpg" alt="Softness test image softbox" width="640" height="640" />
<figcaption>
Same image with the softbox in place.  Click to compare with diffusion material.
</figcaption>
</figure>


<p>I also created some crops to help illustrate the difference up close:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-1-bare.jpg" alt="Softness test crop #1" width="640" height="640" />
<figcaption>
Click to compare: 
<span class='toggle-swap' data-fig-swap='crop-1-bare.jpg'>Bare Flash</span>
<span class='toggle-swap' data-fig-swap='crop-1-diffusion.jpg'>With Diffusion</span>
<span class='toggle-swap' data-fig-swap='crop-1-softbox.jpg'>With Softbox</span>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-2-bare.jpg" alt="Softness test crop #1" width="640" height="640" />
<figcaption>
Click to compare: 
<span class='toggle-swap' data-fig-swap='crop-2-bare.jpg'>Bare Flash</span>
<span class='toggle-swap' data-fig-swap='crop-2-diffusion.jpg'>With Diffusion</span>
<span class='toggle-swap' data-fig-swap='crop-2-softbox.jpg'>With Softbox</span>
</figcaption>
</figure>

<p>Hopefully this demonstration can help put to rest any notion of softening a light through close-set diffusion material (at not-close flash-to-subject distances).  At the end of the day, the “softness” quality of a light is a function of the <em>apparent size</em> of the light source <em>relative to the subject</em>. (The sun is the biggest light source I know of, but it’s so far away that its quality is quite harsh.)</p>
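<p>That “apparent size” argument is easy to quantify: what matters is the angle the source subtends as seen from the subject. A quick sketch (the distances are hypothetical examples; the half-degree figure for the sun is the well-known approximation):</p>

```python
import math

def apparent_angle_deg(source_diameter, distance):
    # Angle subtended by a light source of a given diameter,
    # seen from a subject at the given distance (same units for both)
    return math.degrees(2.0 * math.atan(source_diameter / (2.0 * distance)))

soft = apparent_angle_deg(20.0, 40.0)          # 20" softbox at 40 inches
bare = apparent_angle_deg(2.0, 40.0)           # ~2" flash head at 40 inches
sun = apparent_angle_deg(1.39e6, 1.496e8)      # sun diameter/distance in km
```

<p>The softbox subtends roughly 28 degrees versus under 3 degrees for the bare flash head at the same distance, while the enormous but distant sun subtends only about half a degree, which is why its light is so hard.</p>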
<h2 id="a-question-of-scaling"><a href="#a-question-of-scaling" class="header-link-alt">A Question of Scaling</a></h2>
<p>On <a href="https://discuss.pixls.us">discuss</a>, member <a href="https://discuss.pixls.us/users/paperdigits">Mica</a> <a href="https://discuss.pixls.us/t/whats-your-workflow-for-up-scaling-images/375/7">asked an awesome question</a> about what our workflows are for adding resolution (upsizing) to an image.
There were a bunch of great suggestions from the community.</p>
<p>One suggestion in particular struck me as interesting from a technical perspective, and I wanted to talk about it briefly.</p>
<p>Both Hasselblad and Olympus announced not too long ago the ability to drastically increase the resolution of images in their cameras using a “sensor-shift” technology that shifts the sensor by a pixel or so while shooting multiple frames, then combines the results into a much higher megapixel image (200MP in the case of Hasselblad, 40MP for Olympus).</p>
<p>It turns out we can do the same thing manually by burst shooting a series of images while handholding the camera (the subtle movement of our hand while shooting provides the requisite “shift” to the sensor).
Then we simply combine the images, upscale, and average the results to get a higher resolution result.</p>
<p>The basic workflow uses <a href="http://hugin.sourceforge.net/">Hugin</a>’s <code>align_image_stack</code>, <a href="http://imagemagick.org/script/index.php">ImageMagick</a>’s <code>mogrify</code>, and a <a href="http://gmic.eu/">G’MIC</a> mean-blend script to achieve the results.</p>
<ol>
<li>Shoot a bunch of handheld images in burst mode (if available).</li>
<li>Develop raw files if that’s what you shot.</li>
<li>Scale images up to 4× the pixel count (200% in width and height).  A straight nearest-neighbor upscale is fine.<ul>
<li>In your directory of images, create a new sub-directory called <em>resized</em>.</li>
<li>From that directory, run <code>mogrify -scale 200% -format tif -path ./resized *.jpg</code> if you shot JPGs; otherwise adjust the extension as needed.
This will create a directory full of upscaled images.</li>
</ul>
</li>
<li>Align the images using Hugin’s <code>align_image_stack</code> script.<ul>
<li>In the <em>resized</em> directory, run <code>/path/to/align_image_stack -a OUT file1.tif file2.tif ... fileX.tif</code>
The <code>-a OUT</code> option will prefix all your new images with <code>OUT</code>.</li>
<li>I move all of the <code>OUT*</code> files to a new sub-directory called <code>aligned</code>.</li>
</ul>
</li>
<li>In the <code>aligned</code> directory, you now only need to mean average all of the images together.<ul>
<li>Using ImageMagick: <code>convert OUT*.tif -evaluate-sequence mean output.bmp</code></li>
<li>Using G’MIC: <code>gmic video-avg.gmic -avg \&quot; *.tif \&quot; -o output.bmp</code></li>
</ul>
</li>
</ol>
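<p>The arithmetic behind those steps is simple enough to condense into a tiny pure-Python sketch (the real work is done by <code>mogrify</code>, <code>align_image_stack</code>, and G’MIC; the 2×2 “image” of gray values here is just a toy illustration):</p>

```python
def upscale_2x(img):
    """Nearest-neighbor 200% upscale: every pixel becomes a 2x2 block
    (roughly what `mogrify -scale 200%` does)."""
    out = []
    for row in img:
        wide = [p for p in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

def mean_stack(frames):
    """Per-pixel mean of equally sized frames
    (the `convert ... -evaluate-sequence mean` step)."""
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames)
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

# Seven identical toy "frames"; real burst frames differ by sub-pixel shifts,
# which is what lets the average recover extra detail.
frames = [upscale_2x([[10, 20], [30, 40]]) for _ in range(7)]
avg = mean_stack(frames)
print(avg[0][0], avg[3][3])  # -> 10.0 40.0
```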
<p>I used 7 burst capture images from an iPhone 6+ (default resolution 3264x2448).
This is the test image:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/Super-full.jpg" alt="Superresolution test image" width="640" height="480" />
<figcaption>
Sample image, red boxes show 100% crop areas.
</figcaption>
</figure>

<p>Here is a 100% crop of the first area:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-1-base.jpg" alt="Superresolution crop #1 example" width="500" height="250" />
<figcaption>
100% crop of the base image, straight upscale.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-1-super.jpg" alt="Superresolution crop #1 example result" width="500" height="250" />
<figcaption>
100% crop, super resolution process result.
</figcaption>
</figure>

<p>The second area crop:</p>
<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-2-base.jpg" alt="Superresolution crop #2 example" width="500" height="250" />
<figcaption>
100% crop of the base image, straight upscale.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2015/09/softness-and-superresolution/crop-2-super.jpg" alt="Superresolution crop #2 example result" width="500" height="250" />
<figcaption>
100% crop, super resolution process result.
</figcaption>
</figure>


<p>Obviously this doesn’t replace the ability to have that many raw pixels available in a single exposure, but if the subject is relatively static this method can do quite well to help increase the resolution.
As with any mean/median blending technique, a nice side-effect of the process is great noise reduction as well…</p>
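<p>That noise-reduction side effect is easy to demonstrate: averaging <em>n</em> frames shrinks random sensor noise by roughly √n. A small simulation (synthetic Gaussian noise standing in for real sensor data):</p>

```python
import random
import statistics

random.seed(42)

def noisy_frames(true_value, sigma, n):
    """Simulate n burst exposures of a single pixel with Gaussian noise."""
    return [random.gauss(true_value, sigma) for _ in range(n)]

# Spread of single exposures vs. spread of 7-frame means:
singles = [noisy_frames(100, 8, 1)[0] for _ in range(2000)]
means7 = [statistics.mean(noisy_frames(100, 8, 7)) for _ in range(2000)]

# sqrt(7) is about 2.6, so the averaged frames should be well over
# twice as "quiet" as the single exposures.
print(statistics.stdev(singles) > 2 * statistics.stdev(means7))  # -> True
```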
<p>I’m not sure if this warrants a full article post, but I may consider writing one later.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Freaky Details (Calvin Hollywood) ]]></title>
            <link>https://pixls.us/articles/freaky-details-calvin-hollywood/</link>
            <guid isPermaLink="true">https://pixls.us/articles/freaky-details-calvin-hollywood/</guid>
            <pubDate>Mon, 31 Aug 2015 19:33:50 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/freaky-details-calvin-hollywood/freaky.jpg" /><br/>
                 <h1>Freaky Details (Calvin Hollywood)</h1>  
                 <h2>Replicating Calvin Hollywood's Freaky Details in GIMP</h2>   
<p>German photographer/digital artist/Photoshop trainer <a href="http://www.calvinhollywood-blog.com">Calvin Hollywood</a> has a rather unique style to his photography. It’s a sort of edgy, gritty, hyper-realistic result, almost a blend between illustration and photography.  </p>
<figure>
<a href="http://www.calvinhollywood-blog.com/portfolio/">
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/calvin-thumbs.jpg" alt="Calvin Hollywood Examples" width="470" height="315" />
</a>
</figure>

<p>As part of one of his courses, he talks about a technique for accentuating details in an image that he calls “Freaky Details”.  </p>
<p>Here is Calvin describing this technique using Photoshop:</p>
<div class='fluid-vid'><iframe width="560" height="315" src="https://www.youtube.com/embed/ZV9u0Wu8L0M" frameborder="0" allowfullscreen=""></iframe></div>

<p>In my meandering around different retouching tutorials I came across it a while ago, and wanted to replicate the results in <a href="http://www.gimp.org">GIMP</a> if possible. There were a couple of problems I ran into when replicating the exact workflow:  </p>
<ol>
<li>Lack of a “Vivid Light” layer blend mode in GIMP</li>
<li>Lack of a “Surface Blur” in GIMP</li>
</ol>
<p>Those problems have been rectified (and I have more patience these days to figure out what exactly was going on), so let’s see what it takes to replicate this effect in GIMP!</p>
<h2 id="replicating-freaky-details">Replicating Freaky Details<a href="#replicating-freaky-details" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="requirements">Requirements<a href="#requirements" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The only extra thing you’ll need to be able to replicate this effect is <a href="http://gmic.eu/">G’MIC for GIMP</a>.</p>
<p class='aside'>
You don’t <em>technically</em> need G’MIC to make this work, but the process of manually creating a <strong>Vivid Light</strong> layer is tedious and error-prone in GIMP right now.
Also, you won’t have access to G’MIC’s Bilateral Blur for smoothing. 
And, seriously, it’s G’MIC - you should have it anyway for all the other cool stuff it does!
</p>

<h3 id="summary-of-steps">Summary of Steps<a href="#summary-of-steps" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Here’s the summary of steps we are about to walk through to create this effect in GIMP:  </p>
<ol>
<li>Duplicate the background layer.</li>
<li>Invert the colors of the top layer.</li>
<li>Apply “Surface Blur” to top layer.</li>
<li>Set top layer blend mode to “Vivid Light”.</li>
<li>New layer from visible.</li>
<li>Set layer blend mode of new layer to “Overlay”, hide intermediate layer.</li>
</ol>
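<p>For the curious, here is a sketch of the math that makes step 4 work, using the common burn/dodge definition of Vivid Light (G’MIC’s exact implementation may differ in edge cases). Blending a layer with its own blurred inverse lands at mid-gray wherever the blur tracked the original, so only the fine detail the blur removed swings away from gray — exactly the detail the final Overlay step then amplifies:</p>

```python
def vivid_light(base, blend):
    """One common definition of Vivid Light (channel values in 0..1):
    color burn when the blend value is below 0.5, color dodge otherwise."""
    if blend < 0.5:
        b = 2 * blend
        return 0.0 if b == 0 else max(0.0, 1 - (1 - base) / b)   # color burn
    b = 2 * (blend - 0.5)
    return 1.0 if b == 1 else min(1.0, base / (1 - b))           # color dodge

# Blending a pixel with its exact inverse cancels to mid-gray; the blurred
# inverse only deviates from the exact inverse around fine detail.
print(vivid_light(0.4, 1 - 0.4))  # -> 0.5
```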
<p>There are just a couple of small things to point out though, so keep reading to be aware of them!  </p>
<h3 id="detailed-steps">Detailed Steps<a href="#detailed-steps" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I’m going to walk through each step to make sure it’s clear, but first we need an image to work with!  </p>
<p>As usual, I’m off to <a href="http://www.flickr.com/creativecommons">Flickr Creative Commons</a> to search for a <a href="https://creativecommons.org/" title="Creative Commons">CC licensed</a> image to illustrate this with. 
I found an awesome portrait taken by the <a href="https://www.flickr.com/photos/thenationalguard/">U.S. National Guard/Staff Sergeant Christopher Muncy</a>:</p>
<figure>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base.jpg" alt="New York National Guard, on Flickr" width="640" height="808" />
<figcaption>
<a href="https://www.flickr.com/photos/thenationalguard/15941126053">New York National Guard</a> by <a href="https://www.flickr.com/photos/thenationalguard/">U.S. National Guard/Staff Sergeant Christopher Muncy</a> 
on Flickr (<span class='cc'><a href="https://creativecommons.org/licenses/by/2.0/" title="Creative Commons Attribution">cb</a></span>).<br/>
Airman First Class Anthony Pisano, a firefighter with the New York National Guard’s 106th Civil Engineering Squadron, 106th Rescue Wing conducts a daily equipment test during a major snowstorm on February 17, 2015.<br/>
(New York Air National Guard / Staff Sergeant Christopher S Muncy / released)
</figcaption>
</figure>

<p>This is a great image to test the effect, and to hopefully bring out the details and gritty-ness of the portrait.  </p>
<h4 id="1-2-duplicate-background-layer-and-invert-colors">1./2. Duplicate background layer, and invert colors<a href="#1-2-duplicate-background-layer-and-invert-colors" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>So, duplicate your base image layer (Background in my example).  </p>
<p><span class="Cmd">Layer → Duplicate<br> (Shift-Ctrl-D,Shift-⌘-D)
</span></p>
<p>I will usually name the duplicate layer something descriptive, like <strong>“Temp”</strong> ;).  </p>
<p>Next we’ll just invert the colors on this <strong>“Temp”</strong> layer.  </p>
<p><span class="Cmd">Colors → Invert</span></p>
<p>So right now, we should be looking at this on our canvas:  </p>
<figure>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base-Invert.jpg" alt="GIMP Freaky Details Inverted Image" width="640" height="808" />
<figcaption>
The inverted duplicate of the base layer.
</figcaption>
</figure>


<figure>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base-Invert-Layers.png" alt="GIMP Freaky Details Inverted Image Layers" width="249" height="213" />
<figcaption>
What the Layers dialog should look like.
</figcaption>
</figure>

<p>Now that we’ve got our inverted <strong>“Temp”</strong> layer, we just need to apply a little blur.  </p>
<h4 id="3-apply-surface-blur-to-temp-layer">3. Apply “Surface Blur” to Temp Layer<a href="#3-apply-surface-blur-to-temp-layer" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>There are a couple of different ways you could approach this. Calvin Hollywood’s tutorial explicitly calls for a Photoshop <strong>Surface Blur</strong>. I think part of the reason to use a <strong>Surface Blur</strong> vs. a <strong>Gaussian Blur</strong> is to cut down on any halos that will occur along edges of high contrast.  </p>
<p>There are three main methods of blurring this layer that you could use:  </p>
<ol>
<li><p>Straight Gaussian Blur (easiest/fastest, but may halo - worst results)  </p>
<p><span class="Cmd" style="font-size:0.9em;">Filters → Blur → Gaussian Blur</span></p>
</li>
<li><p>Selective Gaussian Blur (closer to true “Surface Blur”)  </p>
<p><span class="Cmd" style="font-size:0.9em;">Filters → Blur → Selective Gaussian Blur</span></p>
</li>
<li><p>G’MIC’s Smooth [bilateral] (closest to true “Surface Blur”)  </p>
<p><span class="Cmd" style="font-size:0.9em;">Filters → G’MIC → Repair → Smooth [bilateral]</span></p>
</li>
</ol>
<p>I’ll leave it as an exercise for the reader to try the different methods and choose one they like. (At this point I pretty much always use G’MIC’s Smooth [bilateral]; it produces the best results by far.)  </p>
<p>For the Gaussian Blurs, I’ve had good luck with radius values around 20% - 30% of an image dimension. As the blur radius increases, you’ll be acting more on larger local contrasts (as opposed to smaller details) and run the risk of halos. So just keep an eye on that.  </p>
<p>So, let’s try applying some G’MIC Bilateral Smoothing to the <strong>“Temp”</strong> layer and see how it looks!  </p>
<p>Run the command:  </p>
<p><span class="Cmd" >Filters → G’MIC → Repair → Smooth [bilateral]</span></p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/GMIC-bilateral.png" alt="GIMP Freaky Details G'MIC Bilateral Filter" width="960" height="735" />
<figcaption>
The values I used in this example for Spatial/Value Variance.
</figcaption>
</figure>

<p>The values you want to fiddle with are the Spatial Variance and Value Variance (25 and 20 respectively in my example). You can see the values I tried for this walkthrough, but I encourage you to <em>experiment a bit on your own as well</em>!  </p>
<p>Now we should see our canvas look like this:  </p>
<figure >
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base-Bilateral.jpg" alt="GIMP Freaky Details G'MIC Bilateral Filter Result" width="640" height="808" />
<figcaption>
Our <strong>“Temp”</strong> layer after applying G’MIC Smoothing [bilateral]
</figcaption>
</figure>


<figure>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base-Invert-Layers.png" alt="GIMP Freaky Details Inverted Image Layers" width="249" height="213" />
<figcaption>
Layers should still look like this.
</figcaption>
</figure>


<p>Now we just need to blend the <strong>“Temp”</strong> layer with the base background layer using a <strong>“Vivid Light”</strong> blending mode…  </p>
<h4 id="4-5-set-temp-layer-blend-mode-to-vivid-light-new-layer">4./5. Set <em>Temp</em> Layer Blend Mode to <em>Vivid Light</em> &amp; New Layer<a href="#4-5-set-temp-layer-blend-mode-to-vivid-light-new-layer" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Now we need to blend the <strong>“Temp”</strong> layer with the Background layer using a <strong>“Vivid Light”</strong> blending mode. Lucky for me, I’m friendly with the G’MIC devs, so I asked nicely, and <a href="https://tschumperle.users.greyc.fr/">David Tschumperlé</a> added this blend mode for me.  </p>
<p>So, again we start up G’MIC:  </p>
<p><span class="Cmd">Filters → G’MIC → Layers → Blend [standard] - Mode: Vivid Light</span></p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/GMIC-Vivid.png" alt="GIMP Freaky Details Vivid Light Blending" width="960" height="735" />
<figcaption>
G’MIC <strong>Vivid Light</strong> blending mode, pay attention to <span style="color:green;">Input/Output!</span>
</figcaption>
</figure>

<p>Pay careful attention to the <span style="color:green;">Input/Output</span> portion of the dialog. You’ll want to set the <strong>Input Layers</strong> to <strong>All visibles</strong> so it picks up the <strong>Temp</strong> and <strong>Background</strong> layers. You’ll also probably want to set the <strong>Output</strong> to <strong>New layer(s)</strong>.  </p>
<p>When it’s done, you’re going to be staring at a very strange looking layer, for sure:  </p>
<figure> 
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Base-Vivid.jpg" alt="GIMP Freaky Details Vivid Light Blend Mode" width="640" height="808" />
<figcaption>
Well, sure it looks weird out of context…
</figcaption>
</figure>


<figure> 
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/GMIC-Vivid-Layers.png" alt="GIMP Freaky Details Vivid Light Blend Mode Layers" width="249" height="258" />
<figcaption>
The layers should now look like this.
</figcaption>
</figure>


<p>Now all that’s left is to hide the <strong>“Temp”</strong> layer, and set the new <strong>Vivid Light</strong> result layer to <strong>Overlay</strong> layer blending mode…  </p>
<h4 id="6-set-vivid-light-result-to-overlay-hide-temp-layer">6. Set Vivid Light Result to Overlay, Hide <em>Temp</em> Layer<a href="#6-set-vivid-light-result-to-overlay-hide-temp-layer" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>We’re just about done. Go ahead and hide the <strong>“Temp”</strong> layer from view (we won’t need it anymore - you could delete it as well if you wanted to).  </p>
<p>Finally, set the G’MIC <strong>Vivid Light</strong> layer output to <strong>Overlay</strong> layer blend mode:  </p>
<figure> 
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/GMIC-Final-Layers.png" alt="GIMP Freaky Details Final Blend Mode Layers" width="249" height="259" />
<figcaption>
Set the resulting G’MIC output layer to <strong>Overlay</strong> blend mode.
</figcaption>
</figure>


<p>The results we should be seeing will have enhanced details and contrasts, and should look like this (mouseover to compare the original image):  </p>
<figure> 
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Final.jpg" alt="GIMP Freaky Details Final" data-swap-src="Base.jpg" width="640" height="808" />
<figcaption>
Our final results (whew!)<br/>
(click to compare to original)
</figcaption>
</figure>


<p>This technique will emphasize any noise in an image, so some masking and selective application may be required for a good final effect.</p>
<h3 id="summary">Summary<a href="#summary" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This is not an effect for everyone. I can’t stress that enough. It’s also not an effect for every image. But if you find an image it works well on, I think it can really do some interesting things. It can definitely bring out a very dramatic, gritty effect (it works well with nice hard rim lighting and textures).  </p>
<p>The original image used for this article is another great example of one that works well with this technique:</p>
<figure> 
<img src="https://pixls.us/articles/freaky-details-calvin-hollywood/Final2-curves.jpg" alt="GIMP Freaky Details Alternate Final" data-swap-src="Base2.jpg" width="640" height="962" />
<figcaption>
<a href="http://www.flickr.com/photos/shakeskc/6519028411/">After a Call</a> by <a href="http://markshaiken.com/">Mark Shaiken</a> on Flickr. (<span class='cc'><a href="https://creativecommons.org/licenses/by-nc-sa/2.0/" title="Creative Commons Attribution Non-Commercial Share-Alike">cbna</a></span>)
</figcaption>
</figure>

<p>I muted the colors in this image before applying some Portra-esque color curves to the final result.</p>
<p>Finally, a <strong>BIG THANK YOU</strong> to <a href="https://tschumperle.users.greyc.fr/">David Tschumperlé</a> for taking the time to add a <strong>Vivid Light</strong> blend mode in G’MIC.  </p>
<p>Try the method out and let me know what you think or how it works out for you! And as always, if you found this useful in any way, please share it, pin it, like it, or whatever you kids do these days…  </p>
<p>This tutorial was originally published <a href="http://blog.patdavid.net/2013/02/calvin-hollywood-freaky-details-in-gimp.html">here</a>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Notes from the dark(table) Side ]]></title>
            <link>https://pixls.us/blog/2015/08/notes-from-the-dark-table-side/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/08/notes-from-the-dark-table-side/</guid>
            <pubDate>Fri, 14 Aug 2015 14:32:34 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/08/notes-from-the-dark-table-side/darktable_2.jpg" /><br/>
                 <h1>Notes from the dark(table) Side</h1>  
                 <h2>A review of the Open Source Photography Course</h2>   
<p>We recently posted about the Open Source Photography Course from photographer Riley Brandt.
Now we have a review of the course as well.</p>
<p>This review is actually by one of the <a href="http://www.darktable.org">darktable</a> developers, <a href="http://houz.org">houz</a>!
He had originally <a href="https://discuss.pixls.us/t/review-of-riley-brandts-open-source-photography-course/344/1">posted it on discuss</a> as a topic but I think it deserves a blog post instead.
(When a developer from a favorite project speaks up, it’s usually worth listening…)</p>
<p>Here is houz’s review:</p>
<hr>
<h2 id="the-open-source-photography-course-review"><a href="#the-open-source-photography-course-review" class="header-link-alt">The Open Source Photography Course Review</a></h2>
<h3 id="by-houz"><a href="#by-houz" class="header-link-alt">by houz</a></h3>
<figure>
<img src="https://pixls.us/blog/2015/08/notes-from-the-dark-table-side/houz.jpg" alt="Author houz headshot" />
</figure>


<p>It seems that there is no topic to discuss <a href="https://discuss.pixls.us/t/the-open-source-photography-course/263">The Open Source Photography Course</a> yet so let’s get started.</p>
<h3 id="disclaimer"><a href="#disclaimer" class="header-link-alt">Disclaimer</a></h3>
<p>First of all, as a darktable developer I am biased so take everything I write with a grain of salt. Second, I didn’t pay for my copy of the videos but Riley was kind enough to provide a free copy for me to review. So add another pinch of salt. I will therefore not tell you if I would encourage you to buy the course. You can have my impressions nevertheless.</p>
<h3 id="review"><a href="#review" class="header-link-alt">Review</a></h3>
<p>I won’t say anything about the GIMP part, not because I wouldn’t know how to use that software, but because it’s relatively short and I just didn’t notice anything to comment on. It covers the solid basics of how to use GIMP, and the emphasis on layer masks is really important in real-world usage.</p>
<!-- more -->
<p>Now for the darktable part, I have to say that I liked it a lot. It showcases a viable workflow and is relatively complete – not by explaining every module and becoming the audio book of the user manual but by showing at least one tool for every task. And as we all know, in darktable there are many ways to skin a cat, so concentrating on your favourites is a good thing.</p>
<p>What I also appreciate is that Riley managed to cut the individual topics into manageable chunks of around 10 minutes or less, so you can easily watch them in your lunch break and have no problem coming back to a topic later to find what you are looking for.</p>
<p>Before this starts to sound like an advertisement, I will just point out some small nitpicks I noticed while watching the videos. Most of these are not errors in the videos but just extra bits of information that might make your workflow even smoother, so it’s more of an addendum than an erratum.</p>
<ul>
<li>When going through your images on lighttable you can either zoom in till you only see a single image (alt-1 is a shortcut for that) or hold the z key pressed. Both are shown in the videos. The latter can quickly become tedious since releasing z just once brings you back to where you were. There are however two more keyboard shortcuts that are not assigned by default under views&gt;lighttable: ‘sticky preview’ and ‘sticky preview with focus detection’. Both work just like normal z and ctrl-z, just without the need to keep the key pressed. You can assign a key to these, for example by reusing z and ctrl-z.</li>
<li>Color labels can be set with F1 .. F5, similar to rating.</li>
<li>Basecurve and tonecurve allow very fine up/down movement of points with the mouse wheel. Hover over a node and scroll.</li>
<li>Gaussian in shadows&amp;highlights tends to give stronger halos than bilateral in normal use, see <a href="http://www.darktable.org/2012/09/edge-aware-image-development/">the darktable blog</a> for an example.</li>
<li>For profiled denoising it’s better to use ‘HSV color’ instead of ‘color’ and ‘HSV lightness’ instead of ‘lightness’, see <a href="http://darktable.org/usermanual/ch03s02s06.html.php">the user manual</a> for details.</li>
<li>When using the mouse wheel to zoom the image you can hold ctrl to get it smaller than fitting to the screen. That’s handy to draw masks over the image border.</li>
<li>When moving the triangles in color zones apart you actually widen the scope of affected values since the curve gets moved off the center line on a wider range.</li>
<li>Also color zones: You can also change reds and greens in the same instance, no need for multiple instances. Riley knows that and used two instances to be able to control the two changes separately.</li>
<li>When loading sidecar files from lighttable, you can even treat a JPEG that was exported from darktable like an XMP file and manually select that since the JPEGs get the processing data embedded. It’s like a backup of the XMP with a preview. <strong>Caveat:</strong> When using LOTS of mask nodes (mostly with the brush mask) the XMP data might get too big so it’s no longer possible to embed in the JPEG, but in general it works.</li>
<li>The collect module allows you to store presets so you can quickly access often-used search rules. And since presets only store the module settings and not the resulting image set, these will be updated when new images are imported.</li>
<li>In neutral density you can draw a line with the right mouse button, similar to rotating images.</li>
<li>Styles can also be created from darkroom, there is a small button next to the history compression button.</li>
</ul>
<p>So, that’s it from me. Did you watch the videos, too? What was your impression? Do you have any remarks?</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Color Curves Matching ]]></title>
            <link>https://pixls.us/articles/color-curves-matching/</link>
            <guid isPermaLink="true">https://pixls.us/articles/color-curves-matching/</guid>
            <pubDate>Tue, 04 Aug 2015 19:10:36 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/color-curves-matching/dorothy.jpg" /><br/>
                 <h1>Color Curves Matching</h1>  
                 <h2>Sample points and matching tones</h2>   
                <p>In my previous post on <a href="https://pixls.us/articles/basic-color-curves/">Color Curves for Toning/Grading</a>, I looked at the basics of what the Curves dialog lets you do in <a href="http://www.gimp.org">GIMP</a>.
I had been meaning to revisit the subject with a little more restraint (the color curve in that post was a little rough and gross, but it was for illustration so I hope it served its purpose).</p>
<p>This time I want to look at the use of curves a little more carefully.
You’d be amazed at the subtlety that gentle curves can produce in toning your images.
Even small changes in your curves can have quite the impact on your final result.
For instance, have a look at the four film emulation curves created by <a href="http://www.prime-junta.net/pont/How_to/100_Curves_and_Films/_Curves_and_films.html">Petteri Sulonen</a> (if you haven’t read his page yet on creating these curves, it’s well worth your time):</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-original.jpg" alt='Dot Original Headshot' width='550' height='469'>
<figcaption>
Original
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-portra.jpg" alt='Dot Portra NC400 Film' width='550' height='469'>
<figcaption>
Portra<em>esque</em> (Kodak Portra NC400 Film)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-provia.jpg" alt='Dot Fuji Provia Film' width='550' height='469'>
<figcaption>
Provia<em>esque</em> (Fujichrome Provia)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-velvia.jpg" alt='Dot Fuji Velvia Film' width='550' height='469'>
<figcaption>
Velvia<em>esque</em> (Fujichrome Velvia)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-xpro.jpg" alt='Dot crossprocessed C41 Film' width='550' height='469'>
<figcaption>
Crossprocess (E6 slide film in C-41 neg. processing)
</figcaption>
</figure>

<p>I can’t thank Petteri enough for releasing these curves for everyone to use (for us GIMP users, there is a .zip file at the bottom of his post that contains these curves packaged up).
Personally I am a huge fan of the Portra<em>esque</em> curve that he has created.
If there is a person in my images, it’s usually my go-to curve as a starting point.
It really does generate some wonderful skin tones overall.</p>
<p>The problem with generating these curves is that you have to be very, very familiar with the characteristics of the film stocks you are trying to emulate.
I never shot Velvia personally, so it is hard for me to have a reference point to start from when attempting to emulate this type of film.</p>
<p>What we can do, however, is to use our personal vision or sense of aesthetic to begin toning our images to something that we like.  GIMP has some great tools for helping us to become more aware of color and the effects of each channel on our final image.  That is what we are going to explore…</p>
<p class='aside'>
<span>Disclaimer</span>

I cannot stress enough that what we are approaching here is an entirely subjective interpretation of what is pleasing to our own eyes.  Color is a very complex subject and deserves study to really understand.  Hopefully some of the things I talk about here will help pique your interest to push further and experiment!
<br/>
There is no right or wrong, but rather what you find pleasing to your own eye.
</p>



<h2 id="approximating-tones">Approximating Tones<a href="#approximating-tones" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>What we will be doing is using <strong>Sample Points</strong> and the <strong>Curves</strong> dialog to modify the color curves in my image above to emulate something else.  It could be another photograph, or even a painting.</p>
<p>I’ll be focusing on the skin tones, but the method can certainly be used for other things as well.</p>
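<p>Under the hood, a curve is just a per-channel mapping of input values to output values. A rough pure-Python stand-in for what the Curves dialog does (GIMP fits a smooth spline through the control points; this sketch uses straight line segments, and the red-channel control points are made up for illustration):</p>

```python
def apply_curve(value, points):
    """Apply a piecewise-linear tone curve to one 0-255 channel value.
    `points` are (input, output) control points sorted by input."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return round(y0 + t * (y1 - y0))
    return value  # outside the defined range: leave the value untouched

# Hypothetical red-channel curve: lift the midtones slightly
red_curve = [(0, 0), (128, 140), (255, 255)]
print(apply_curve(128, red_curve), apply_curve(64, red_curve))  # -> 140 70
```

<p>Matching tones then amounts to reading a sample point’s channel values and placing control points that pull those inputs toward the target’s values.</p>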
<figure>
<img src="https://pixls.us/articles/color-curves-matching/dot-original.jpg" alt='Dot Original Headshot' width='550' height='469'>
<figcaption>
My wonderful model.
</figcaption>
</figure>

<p>With an image of your own, begin considering where you might like to approximate the tones.  For instance, in my image above I want to work on the skin tones to see where it leads me.</p>
<p>Now find an image that you like, and would like to approximate the tones from.  It helps if the image you are targeting already has tones <em>somewhat</em> similar to what you are starting with (for instance, I would look for another subject with skin tones similar to my model’s to start from).  Keeping tones at least similar will reduce the violence you’ll do to your final image.</p>
<p>So for my first example, perhaps I would like to use the knowledge about color that the Old Masters already had, and emulate the skin tones from Vermeer’s <em>Girl with the Pearl Earring</em>.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/537px-Johannes_Vermeer_%281632-1675%29_-_The_Girl_With_The_Pearl_Earring_%281665%29.jpg" alt='Johannes Vermeer Girl with the Pearl Earring' width='537' height='768'>
<figcaption>
<a href="http://en.wikipedia.org/wiki/Johannes_Vermeer">Johannes Vermeer</a> - <a href="http://en.wikipedia.org/wiki/Girl_with_a_Pearl_Earring">The Girl With The Pearl Earring (1665)</a>
</figcaption>
</figure>

<p>In GIMP I will have my original image already opened, and will then open my target image as a new layer.  I’ll pull this layer to one side of my image to give me a view of the areas I am interested in (faces and skin).</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/vermeer-initial.jpg" alt='Vermeer setup GIMP' width='640' height='539'>
</figure>

<p>I will be using <a href="http://docs.gimp.org/en/gimp-sample-point-dialog.html"><strong>Sample Points</strong></a> extensively as I proceed.  Read up on them if you haven’t used them before.  They are basically a means of giving you real-time feedback of the values of a pixel in your image (you can track up to four points at one time).</p>
<p>I will put a first sample point somewhere on the higher skin tones of my base image.  In this case, I will put one on my model’s forehead (we’ll be moving it around shortly, so somewhere in the neighborhood is fine).</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/sample-point-first.png" alt='GIMP first sample point' width='381' height='195'>
</figure>

<p><strong>Ctrl + Left Click</strong> in the ruler area of your main window (shown in <span style="color: #00FF00;">green above</span>), and drag out into your image.  There should be crosshairs across your entire image screen showing you where you are dragging.</p>
<p>When you release the mouse button, you’ve dropped a <strong>Sample Point</strong> onto your image.  You can see it in my image above as a small crosshair with the number <strong>1</strong> next to it.</p>
<p>GIMP <i>should</i> open the sample points dialog for you when you create the first point, but if not you can access it from the image menu under:</p>
<p><span class='Cmd'>Windows → Dockable Dialogs → Sample Points</span></p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/Sample-point-first-dialog.png" alt='Sample points dialog' width='208' height='330'>
</figure>

<p>This is what the dialog looks like.
You can see the RGB pixel data for the first sample point that I have already placed.
As you place more sample points, each will report its data in this dialog.</p>
<p>You can go ahead and place more sample points on your image now.  I’ll place another sample point, but this time I will put it on my target image where the tones seem similar in brightness.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/vermeer-2-points.jpg" alt='Sample point placed' width='550' height='167'>
</figure>

<p>What I’ll then do is change the data being shown in the <strong>Sample Points</strong> dialog to show HSV data instead of Pixel data.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/Sample-point-value-match.png" alt='Sample points dialog with 2 points' width='208' height='330'>
</figure>

<p>Now, I will shoot for around 85% value on my source image, and try to find a similar value level in similar tones in my target image as well.  Once you’ve placed a sample point, you can continue to move it around and see what values it gives you.  (If you use another tool in the meantime and can no longer move the sample points, select the <strong>Color Picker Tool</strong> to be able to move them again.)</p>
<p>Move the points around your skin tones until you get about the same <strong>Value</strong> for both points.</p>
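The <strong>Value</strong> that GIMP reports in HSV mode is just the largest of the three RGB channels, scaled to a percentage.  If you want to sanity-check a reading outside of GIMP, here is a small Python sketch using the standard <code>colorsys</code> module (the sample values are the ones from my highlight point):

```python
import colorsys

def value_percent(r, g, b):
    """GIMP-style HSV Value: the largest RGB channel, as a percent."""
    _h, _s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return round(v * 100)

# The bright skin sample from my image (R=218, G=188, B=171):
print(value_percent(218, 188, 171))  # → 85
```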
<p>Once you have them, make sure your original image layer is active, then start up the curves dialog.</p>
<p><span class='Cmd'>Colors → Curves…</span></p>
<p>Now here is something really handy to know while using the Curves dialog: if you hover your mouse over your image, you’ll notice that the cursor is a dropper - you can click and drag on an area of your image, and the corresponding value will show up in your curves dialog for that pixel (or averaged area of pixels if you turn that on).  </p>
<p>So click and drag to about the same pixel you chose in your original image for the sample point.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/curve-first-point.png" alt='Curve base' width='378' height='521'>
<figcaption>
Curves dialog with a value point (217) for my sampled pixel.
</figcaption>
</figure>

<p>Here is what my working area currently looks like:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/color-curves-matching/workspace-1.jpg" alt='GIMP workspace for sample point color matching' width='960' height='439'>
</figure>

<p>I have my curves dialog open, an area around my sample point selected so that its values are visible in the dialog, my images with their associated sample points, and the sample points dialog showing me the values of those points.</p>
<p>The basic idea now is to adjust my RGB channels to get my original image sample point (#1) to match my target image sample point (#2).</p>
<p>Because I selected an area around my sample point with the curves dialog open, I will know roughly where those values need to be adjusted.  Let’s start with the <b style="color: #FF0000;">Red</b> channel.</p>
<p>First, set the <strong>Sample Points</strong> dialog back to <strong><i>Pixel</i></strong> to see the RGBA data for that pixel.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/Sample-point-rgb-match.png" alt='GIMP Sample point Red Green Blue matching' width='208' height='330'>
</figure>

<p>We can now see that to match the pixel colors we will need to make some adjustments to each channel.  Specifically:</p>
<ul>
<li>the <b style="color: #ff0000">Red</b> channel will have to come down a bit (218 → 216),</li>
<li>the <b style="color: #00ff00">Green</b> down some as well (188 → 178),</li>
<li>and the <b style="color: #0000ff">Blue</b> down much more (171 → 155).</li>
</ul>
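The bookkeeping here is nothing more than a per-channel difference between the two sample points.  A throwaway Python sketch, using the readings from my sample points above:

```python
source = {"R": 218, "G": 188, "B": 171}  # sample point 1 (my image)
target = {"R": 216, "G": 178, "B": 155}  # sample point 2 (the Vermeer)

# Print the adjustment each channel curve needs to make:
for ch in ("R", "G", "B"):
    print(f"{ch}: {source[ch]} -> {target[ch]} ({target[ch] - source[ch]:+d})")
```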
<p>You may want to resize your <strong>Curves</strong> dialog window larger to be able to more finely control the curves.  If we look at the Red channel in my example, we would want to adjust the curve down slightly at the vertical line that shows us where our pixel values are:</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-red.png" alt='Color Curve Adjustment Red' width='370' height='495'>
</figure>

<p>We can adjust the red channel curve along this vertical axis (marked x:217) until our pixel red value matches the target (216).</p>
<p>Then just change over to the green channel and do the same:</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-green.png" alt='Color Curve Adjustment Green' width='370' height='495'>
</figure>

<p>Here we are adjusting the green curve vertically along the axis marked x:190 until our pixel green value matches the target (178).</p>
<p>Finally, follow the same procedure for the blue channel:</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-blue.png" alt='Color Curve Adjustment Blue' width='370' height='495'>
</figure>

<p>As before, we adjust along the vertical axis x:173 until our blue channel matches the target (155).</p>
<p>At this point, our first sample point pixel should be the same color as from our target.</p>
<p>The important thing to take away from this exercise is to watch your image as you adjust these channels, to see what types of effects they produce.  Dropping the green channel should have added a slight magenta cast to your image, and dropping the blue channel a corresponding yellow.</p>
<p>Watch your image as you make these changes.</p>
<p><em><strong>Don’t</strong> hit <em>OK</em> on your curves dialog yet!</em></p>
<p>You’ll want to repeat this procedure, but using some sample points that are darker than the previous ones.  Our first sample points had values of about 85%, so now let’s see if we can match pixels down below 50% as well.</p>
<p><em>Without</em> closing your curves dialog, you should still be able to click and drag your sample points around.  So set your <strong>Sample Points</strong> dialog to show HSV values again, and drag your first point around your image until you find some skin at a darker value, maybe around 40–45%.</p>
<p>Once you do, try to find a corresponding value in your target image (or something close at least).</p>
<p>I managed to find skin tones with values around 45% in both of my images:</p>
<div style='text-align: center; height: 366px;'>
<img style='display: inline; width: initial;' src="https://pixls.us/articles/color-curves-matching/sample-point-45.png" width='208' height='330' alt="Color Curve Skin Dark HSV">
<img style='display: inline; width: initial;' src="https://pixls.us/articles/color-curves-matching/sample-point-45-rgb.png" width='208' height='330' alt="Color Curve Skin Dark RGB">
</div>

<p>In these darker tones, I can see that the adjustments I will have to make are:</p>
<ul>
<li><b style="color: #ff0000">Red</b> down a bit (116 → 114),</li>
<li><b style="color: #00ff00">Green</b> bumped up some (60 → 73),</li>
<li><b style="color: #0000ff">Blue</b> slightly down (55 → 53).</li>
</ul>
<p>With the curves dialog still active, I then click and drag on my original image until I am in the same area as my sample point again.  This gives me my vertical line showing me the value location in my curves dialog, just as before:</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-dark-red.png" alt='Dark tones red' width='370' height='495'>
<figcaption>
<b style="color: #FF0000">Red</b> down to 114.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-dark-green.png" alt='Dark tones green' width='370' height='495'>
<figcaption>
<b style="color: #00FF00;">Green</b> up to 73.
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/earring-dark-blue.png" alt='Dark tones blue' width='370' height='495'>
<figcaption>
<b style="color: #0000FF">Blue</b> down to 53.
</figcaption>
</figure>
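Taken together, the highlight and shadow adjustments amount to a per-channel lookup curve through two control points (plus the pinned black and white endpoints).  Here is a rough Python sketch of that idea; note that GIMP’s smooth curves are splines, so the linear interpolation below is only an approximation, and the control points are the ones from my sample readings above:

```python
def apply_curve(points, x):
    """Map an input level (0-255) through a curve defined by
    (input, output) control points, interpolating linearly between
    them (GIMP actually uses a smooth spline, so this is a sketch)."""
    pts = sorted(points)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return round(y0 + t * (y1 - y0))
    raise ValueError("input outside 0-255")

# Pinned endpoints plus the two matched sample points per channel:
red   = [(0, 0), (116, 114), (218, 216), (255, 255)]
green = [(0, 0), (60, 73),  (188, 178), (255, 255)]
blue  = [(0, 0), (55, 53),  (171, 155), (255, 255)]

print(apply_curve(red, 218), apply_curve(green, 60), apply_curve(blue, 55))  # → 216 73 53
```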

<p>At this point you <i>should</i> have something similar to the tones of your target image.  Here is my image after these adjustments so far:</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/vermeer-final.jpg" data-swap-src='dot-original.jpg' width='550' height='469' alt='Results so far GIMP Matching'>
<figcaption>
Effects of the curves so far (click to compare to original).
</figcaption>
</figure>

<p>Once you’ve got things in a state that you like, it would be a good idea to save your progress.
At the top of the Curves dialog there is a <strong>“+”</strong> symbol.
This will let you add the current settings to your favorites, so you can recall them later and continue working on them.</p>
<p><strong>However</strong>, your results might not quite look right at the moment.  So why not?</p>
<p>Well, the first problem is that <strong>Sample Points</strong> will only allow you to sample a single pixel value.  There’s a chance that the pixels you pick are not truly representative of the correct skin tones in that range (for instance, you may have inadvertently clicked a pixel that represents the oil paint cracks in the image).  It would be nice if Sample Points allowed an adjustable sample radius (if that option exists, I haven’t found it yet).</p>
<p>The second issue is that similar value points might be very different colors overall.  Hopefully your sources will be nice for you to pick in areas that you know are relatively consistent and representative of the tones you want, but it’s not always a guarantee.</p>
<p>If the results are not quite what you want at the moment, you can do what I will sometimes do and go back to the beginning…</p>
<p>While still keeping the curves dialog open you can pull your sample points to another location, and match the target again.  Try choosing another sample point with a similar value as the first one.  This time, instead of adding new points to the curve as you make adjustments, just drag the existing points you previously placed.</p>
<h2 id="it-s-an-iterative-process">It’s an Iterative Process<a href="#it-s-an-iterative-process" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Depending on how interested you are in tweaking your resulting curve, you may find yourself going around a couple of times.  That’s ok.</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/iterate.png" alt='Iterative flowchart' width='550' height='752'>
</figure>

<p>I would recommend limiting your curves to two control points at first.  You want your curves to be smooth across the range (any abrupt changes will do strange things to your final image).</p>
<p>If you are doing a couple of iterations, try modifying existing points on your curves instead of adding new ones.  <b style="font-size:1.3em;"><i>It may not be an exact match</i></b>, but it doesn’t have to be.  It only needs to look nice to your eyes.</p>
<p>There won’t be a perfect solution for a perfect color matching between images, but we can produce pleasing curves that emulate the results we are looking for.</p>
<h2 id="in-conclusion">In Conclusion<a href="#in-conclusion" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>I personally have found the process of doing this with different images to be quite instructive in how the curves will affect my image.
If you try this out and pay careful attention to what is happening while you do it, I’m hopeful you will come away with a similar appreciation of what these curves will do.</p>
<p>Most importantly, don’t be constrained by what you are targeting, but rather use it as a stepping off point for inspiration and experimentation for your own expression!</p>
<p>I’ll finish with a couple of other examples…</p>
<figure>
<img src="https://pixls.us/articles/color-curves-matching/botticelli-final.jpg" data-swap-src='dot-original.jpg' alt='Dot Botticelli Birth of Venus' width="550" height="469" >
<figcaption>
<a href="http://en.wikipedia.org/wiki/Sandro_Botticelli">Sandro Botticelli</a> - <a href="http://en.wikipedia.org/wiki/The_Birth_of_Venus_(Botticelli)"><em>The Birth of Venus</em></a> (click to compare to original)
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/color-curves-matching/stmichael-final.jpg" data-swap-src='dot-original.jpg' width="550" height="469" >
<figcaption>
<a href="http://www.googleartproject.com/collection/gemaldegalerie-staatliche-museen-zu-berlin/artwork/st-michael-fa-presto/320372/">Fa Presto - St. Michael</a> (click to compare to original)
</figcaption>
</figure>

<p>And finally, as promised, here’s the video tutorial that steps through everything I’ve explained above:</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="560" height="315" src="https://www.youtube.com/embed/rVfIuYV5Ghs" frameborder="0" allowfullscreen=""></iframe>
</div>
</div>

<p class='aside'>
From a request, I’ve packaged up some of the curves from this tutorial (Pearl Earring, St. Michael, the previous Orange/Teal Hell, and another I was playing with from a Norman Rockwell painting): 

<span style="font-size: 1.2rem;">
<a href="https://docs.google.com/open?id=0B21lPI7Ov4CVT1gyVlpvc3psWVU">Download the Curves (7zip .7z)</a>
</span>
</p>


  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ New Discuss Categories and Logging In ]]></title>
            <link>https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/</guid>
            <pubDate>Thu, 30 Jul 2015 21:56:42 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/R0001640-carvac-full.jpg" /><br/>
                 <h1>New Discuss Categories and Logging In</h1>  
                 <h2>Software, Showcase, and Critiques. Oh My!</h2>   
<p>Hot on the heels of our <a href="https://pixls.us/blog/2015/07/welcome-g-mic/">last post</a> about welcoming <a href="http://gmic.eu">G’MIC</a> to the forums at <a href="https://discuss.pixls.us">discuss.pixls.us</a>, I thought I should speak briefly about some other additions I’ve recently made.</p>
<p>These were tough for me to finally make a decision about.
I want to be careful and not get crazy with <em>over</em>-categorization.
At the same time, I <em>do</em> want to make good logical breakdowns that are still intuitive for people.</p>
<!-- more -->
<p>Here is what the current category breakdown looks like for discuss:</p>
<ul>
<li><a href="https://discuss.pixls.us/c/pixls-us">PIXLS.US</a><br><small>The comment/posts from articles/blogposts here on the main site.</small></li>
<li><a href="https://discuss.pixls.us/c/processing">Processing</a><br><small>Processing and managing images after they’ve been captured.</small></li>
<li><a href="https://discuss.pixls.us/c/capturing">Capturing</a><br><small>Capturing an image and the ways we go about doing it.</small></li>
<li><a href="https://discuss.pixls.us/c/showcase"><strong>Showcase</strong></a>  </li>
<li><a href="https://discuss.pixls.us/c/critique"><strong>Critique</strong></a>  </li>
<li><a href="https://discuss.pixls.us/c/meta">Meta</a><br><small>Discussions related to the website or the forum itself.</small><ul>
<li><a href="https://discuss.pixls.us/c/meta/help">Help!</a><br><small>Help with the website or forums.</small></li>
</ul>
</li>
<li><a href="https://discuss.pixls.us/c/software">Software</a><br><small>Discussions about various software in general.</small><ul>
<li><a href="https://discuss.pixls.us/c/software/gmic">G’MIC</a><br><small>Topics all about G’MIC.</small></li>
</ul>
</li>
</ul>
<p>Along with the addition of the <a href="https://discuss.pixls.us/c/software">Software</a> category (and the <a href="https://discuss.pixls.us/c/software/gmic">G’MIC subcategory</a>), I decided that the <a href="https://discuss.pixls.us/c/meta/help">Help!</a> category would make more sense under the <a href="https://discuss.pixls.us/c/meta">Meta</a> category.
That is, the Help! section is for website/forum help, which is more of a Meta topic (hence moving it).</p>
<h3 id="-software-https-discuss-pixls-us-c-software-"><a href="#-software-https-discuss-pixls-us-c-software-" class="header-link-alt"><a href="https://discuss.pixls.us/c/software">Software</a></a></h3>
<p>As we’ve already seen, there is now a <a href="https://discuss.pixls.us/c/software">Software</a> category for all discussions about the various software we use.
The first sub-category to this is, of course, the <a href="https://discuss.pixls.us/c/software/gmic">G’MIC subcategory</a>.</p>
<figure>
<img src="https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/projects2.jpg" alt="F/OSS Project Logos" />
</figure>

<p>If there is enough interest in it, I am open to creating more sub-categories as needed to support particular software projects (GIMP, darktable, RawTherapee, etc…).
I will wait until there is some interest before adding more categories here.</p>
<h3 id="-showcase-https-discuss-pixls-us-c-showcase-"><a href="#-showcase-https-discuss-pixls-us-c-showcase-" class="header-link-alt"><a href="https://discuss.pixls.us/c/showcase">Showcase</a></a></h3>
<p>This category had some interest from members and I agree that it’s a good idea.
It’s intended as a place for members to showcase the works they’re proud of and to hopefully serve as a nice example of what we’re capable of producing using F/OSS tools.</p>
<p>A couple of examples from the <em>Showcase</em> category so far:</p>
<figure>
<img src="https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/R0001640-carvac.jpg" alt='Filmulator Output Example, by Carlo Vaccari'>
<figcaption>
<em>New Life</em>, <a href="https://discuss.pixls.us/t/new-life-how-to-get-great-colors-with-filmulator/304">Filmulator Output Sample</a>, by <a href="https://discuss.pixls.us/users/carvac/activity">CarVac</a>
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/Mairi-Troisieme.jpg" alt='Mairi Troisieme, by Pat David'>
<figcaption>
<a href="https://discuss.pixls.us/t/mairi-troisieme/302">Mairi Troisième</a> by <a href="https://www.flickr.com/photos/patdavid">Pat David</a> (<a href='https://creativecommons.org/licenses/by-nc-sa/2.0/' class='cc'>cbna</a>)
</figcaption>
</figure>

<p>There may be a use of this category later for storing submissions for a <a href="https://discuss.pixls.us/t/poll-main-site-frontpage-lede/244/7">rotating lede image</a> on the main page of the site.</p>
<h3 id="-critique-https-discuss-pixls-us-c-critique-"><a href="#-critique-https-discuss-pixls-us-c-critique-" class="header-link-alt"><a href="https://discuss.pixls.us/c/critique">Critique</a></a></h3>
<p>This is intended as a place for members to solicit advice and critiques on their works from others.
It took me a little work to come up with an initial take on the <a href="https://discuss.pixls.us/t/about-the-critique-category/309">overall description</a> for the category.</p>
<p>I can promise that I will do my best to give honest and constructive feedback to anyone that asks in this category.
I also promise to do my best to make sure that no post goes un-answered here (I know how beneficial feedback has been to me in the past, so it’s the least I could do to help others out in return).</p>
<h2 id="discuss-login-options"><a href="#discuss-login-options" class="header-link-alt">Discuss Login Options</a></h2>
<p>I also bit the bullet this week and <em>finally</em> caved to sign up for a Facebook account.
The only reason was because I had to have a personal account to get an API key to allow people to log in using their FB account (with OAuth).</p>
<figure>
<img src="https://pixls.us/blog/2015/07/new-discuss-categories-and-logging-in/discuss-logins.png" alt='discuss.pixls.us login options'>
<figcaption>
We can now use Google, Facebook, Twitter, and Yahoo! to Log In.
</figcaption>
</figure>


<p>On the other hand, we now accept <strong>four</strong> different methods of logging in automatically along with signing up for a normal account.
I have been trying to make it as frictionless as possible to join the conversation and hopefully this most recent addition (FB) will help in some small way.</p>
<p>Oh, and if you want to add me on Facebook, my <a href="https://www.facebook.com/profile.php?id=100009722205862">profile can be found here</a>.
I also took the time to create a page for the site here: <a href="https://www.facebook.com/PIXLSUS">PIXLS.US on Facebook</a>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Basic Color Curves ]]></title>
            <link>https://pixls.us/articles/basic-color-curves/</link>
            <guid isPermaLink="true">https://pixls.us/articles/basic-color-curves/</guid>
            <pubDate>Mon, 27 Jul 2015 15:26:49 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/basic-color-curves/tranquil.jpg" /><br/>
                 <h1>Basic Color Curves</h1>  
                 <h2>An introduction and simple color grading/toning</h2>   
                <p>Color has this amazing ability to evoke emotional responses from us.
From the warm glow of a sunny summer afternoon to a cool refreshing early evening in fall.
We associate colors with certain moods, places, feelings, and memories (consciously or not).</p>
<p>Volumes have been written on color and I am in no way even remotely qualified to speak on it.
So I won’t.</p>
<p>Instead, we are going to take a look at the use of the <strong>Curves</strong> tool in <a href="http://www.gimp.org">GIMP</a>.
Even though GIMP is used to demonstrate these ideas, the principles are generic to just about any RGB curve adjustments.</p>
<h2 id="your-pixels-and-you">Your Pixels and You<a href="#your-pixels-and-you" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>First there’s something you need to consider if you haven’t before, and that’s what goes into representing a colored pixel on your screen.</p>
<figure>
<img height="250" width="250"  src="https://pixls.us/articles/basic-color-curves/curves-house-square-full.jpg" alt="PIXLS.US House Zoom Example"/>
<figcaption>
Open up an image in GIMP.
</figcaption>
</figure>

<figure>
<img height="250" width="250"  src="https://pixls.us/articles/basic-color-curves/curves-house-square-zoom-1.jpg" alt="PIXLS.US House Zoom Example" />
<figcaption>
Now zoom in.
</figcaption>
</figure>

<figure>
<img height="250" width="250"  src="https://pixls.us/articles/basic-color-curves/curves-house-square-zoom-2.jpg" alt="PIXLS.US House Zoom Example" />
<figcaption>
Nope - don’t be shy now, zoom in more!
</figcaption>
</figure>

<figure>
<img height="250" width="250"  src="https://pixls.us/articles/basic-color-curves/curves-house-square-zoom-3.png" alt="PIXLS.US House Zoom Example" />
<figcaption>
Aaand there’s your pixel.
So let’s investigate what goes into making your pixel.
</figcaption>
</figure>

<p>Remember, each pixel is represented by a combination of 3 colors: <b style="color:red">Red</b>, <b style="color: green;">Green</b>, and <b style="color: blue;">Blue</b>.
In GIMP (currently at 8-bit), that means each of the Red, Green, and Blue channels can have a value from <strong>0–255</strong>, and combining the three channels at varying levels will produce all the colors you can see in your image.</p>
<p>If all three channels have a value of 255 - then the resulting color will be pure white.
If all three channels have a value of 0 - then the resulting color will be pure black.</p>
<p>If all three channels have the same value, then you will get a shade of gray (128,128,128 would be a middle gray color for instance).</p>
<p>So now let’s see what goes into making up your pixel:</p>
<figure>
<img height="233" width="256"  src="https://pixls.us/articles/basic-color-curves/curves-your-pixel-info.png" alt="GIMP Color Picker Pixel View" />
<figcaption>
The RGB components that mix into your final <span style="color: #7ba3ce;">blue</span> pixel.
</figcaption>
</figure>

<p>As you can see, there is more blue than anything else (it is a blue-ish pixel after all), followed by green, then a dash of red.
If we were to change the values of each channel but keep the ratio between Red, Green, and Blue the same, we would keep the same color and just lighten or darken the pixel by some amount.</p>
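These two facts — equal channels make a neutral gray, and a fixed channel ratio keeps the color while changing brightness — are easy to play with in a few lines of Python.  This is a sketch only; the pixel value used below is the #7ba3ce blue from the figure above:

```python
def is_gray(r, g, b):
    """Equal contributions in all three channels give a neutral tone."""
    return r == g == b

def scale_brightness(rgb, factor):
    """Lighten or darken while keeping the R:G:B ratio (and so the
    color) the same; channel values clip at 255."""
    return tuple(min(255, round(c * factor)) for c in rgb)

print(is_gray(128, 128, 128))                  # → True (middle gray)
print(scale_brightness((123, 163, 206), 1.2))  # same blue, ~20% lighter
```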
<h2 id="curves-value">Curves: Value<a href="#curves-value" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>So let’s leave your pixel alone for the time being, and actually have a look at the <strong>Curves</strong> dialog.
I’ll be using this wonderful image by <a href="http://www.flickr.com/photos/qsimple/">Eric</a> from <a href="http://www.flickr.com">Flickr</a>.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-original.jpg" width="500" height="750" alt="Hollow Moon by Eric qsimple Flickr" />
<figcaption>
<a href="http://www.flickr.com/photos/qsimple/5636649561/">Hollow Moon</a> by <a href="http://www.flickr.com/photos/qsimple/">qsimple/Eric</a> on <a href="http://www.flickr.com">Flickr</a>. (<a class='cc' href="http://creativecommons.org/licenses/by-nc-sa/2.0/">cbna</a>)
</figcaption>
</figure>

<p>Opening up my <strong>Curves</strong> dialog shows me the following:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-original.png" width="378" height="524" alt="GIMP Base Curves Dialog" />
</figure>

<p>We can see that I start off with the curve for the <strong>Value</strong> of the pixels.
I could also use the drop down for <strong>“Channel”</strong> to change to red, green or blue curves if I wanted to.
For now let’s look at <strong>Value</strong>, though.</p>
<p>In the main area of the dialog I am presented with a linear curve, behind which I will see a histogram of the value data for the entire image (showing the amount of each value across my image).
Notice a spike in the high values on the right, and a small gap at the brightest values.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-original-IO.png" width="378" height="524" alt="GIMP Base Curves Dialog Input Output" />
</figure>

<p>What we can do right now is to adjust the values of each pixel in the image using this curve.
The best way to visualize it is to remember that the bottom range from black to white represents the <span style="color: #0000ff"><strong><i>current</i></strong> value of the pixels</span>, and the left range is the <span style="color: #ff6f00">value to be mapped to</span>.</p>
<p>So to show an example of how this curve will affect your image, suppose I wanted to remap all the values in the image that were in the midtones, and to make them all lighter.
I can do this by clicking on the curve near the midtones, and dragging the curve higher in the Y direction:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-midtones.png" width="378" height="524" alt="GIMP Base Curves Dialog Push Midtones" />
</figure>

<p>What this curve does is takes the values around the midtones, and pushes their values to be much lighter than they were.
In this case, values around <span style="color: #0000ff">128</span> were re-mapped to now be closer to <span style="color: #ff6f00">192</span>.</p>
<p>Because the curve is set to <strong>Smooth</strong>, there will be a gradual transition for all the tones surrounding my point to be pulled in the same direction (this makes for a smoother fall-off as opposed to an abrupt change at one value).
Because there is only a single point in the curve right now, this means that all values will be pulled higher.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-mid-boostl.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' width="500" height="750" alt='Hollow Moon Example Pushed Midtones'>
<figcaption>
The results of pushing the midtones of the value curve higher (click to compare to original).
</figcaption>
</figure>
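A single smooth point that lifts the midtones behaves much like a gamma adjustment: pinned at black and white, with everything in between pulled toward the new midpoint.  Here is a small Python approximation (GIMP’s spline is not exactly a power curve, so treat this as illustrative only):

```python
import math

def push_midtones(v, mid_in=128, mid_out=192):
    """Remap a 0-255 value with a power (gamma) curve that sends
    mid_in to mid_out while keeping 0 and 255 fixed."""
    gamma = math.log(mid_out / 255) / math.log(mid_in / 255)
    return round(255 * (v / 255) ** gamma)

print(push_midtones(128))                    # → 192 (the remapped midtone)
print(push_midtones(0), push_midtones(255))  # → 0 255 (endpoints pinned)
```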

<p>Care should be taken when fiddling with these curves to not blow things out or destroy detail, of course.
I only push the curves here to illustrate what they do.</p>
<p>A very common curve adjustment you may hear about is to apply a slight “S” curve to your values.
The effect of this curve would be to darken the dark tones, and to lighten the light tones - in effect increasing global contrast on your image.
For instance, if I click on another point in the curves, and adjust the points to form a shape like so:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-slight-s.png" width="378" height="524" alt="GIMP Base Curves Dialog S shaped curve" />
<figcaption>
A slight “S” curve
</figcaption>
</figure>

<p>This will now cause dark values to become even darker, while the light values get a small boost.
The curve still passes through the midpoint, so middle tones will stay closer to what they were.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-slight-s.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' width="500" height="750" alt='Hollow Moon Example S curve applied'>
<figcaption>
Slight “S” curve increases global contrast (click for original).
</figcaption>
</figure>
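One convenient closed form for a gentle “S” is the smoothstep polynomial, which holds the endpoints and midpoint in place while darkening the darks and lightening the lights.  Again, this is a Python sketch rather than what GIMP computes internally:

```python
def s_curve(v):
    """Smoothstep (3x^2 - 2x^3) over 0-255: darks get darker,
    lights get lighter, black/white/middle stay put."""
    x = v / 255
    return round(255 * (3 * x**2 - 2 * x**3))

print(s_curve(64), s_curve(128), s_curve(192))  # darks down, mids held, lights up
```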

<p>In general, I find it easiest to visualize in terms of which regions of the curve will affect different tones in your image.
Here is a quick way to visualize it (that is true for value as well as RGB curves):</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-darksmidslights.png" width="378" height="524" alt="GIMP Base Curves darks mids lights zones"  />
</figure>

<p>If there is one thing you take away from reading this, let it be the image above.</p>
<h2 id="curves-span-style-color-red-co-span-span-style-color-green-lo-span-span-style-color-blue-rs-span-">Curves: <span style="color:red;">Co</span><span style="color:green;">lo</span><span style="color:blue;">rs</span><a href="#curves-span-style-color-red-co-span-span-style-color-green-lo-span-span-style-color-blue-rs-span-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>So how does this apply to other channels?  Let’s have a look.</p>
<p>The exact same theory applies in the RGB channels as it did with values.
The relative positions of the darks, midtones, and lights are still the same in the curve dialog.
The primary difference now is that you can control the contribution of color in specific tonal regions of your image.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-value-rgb-select.png" width="378" height="523"/>
<figcaption>
Value, Red, Green, Blue channel picker.
</figcaption>
</figure>

<p>You choose which channel you want to adjust from the <strong>“Channel”</strong> drop-down.</p>
<p>To begin demonstrating what happens here, it helps to have a general idea of what effect you would like to apply to your image.
This is often the hardest part of adjusting the color tones if you don’t start with a clear idea.</p>
<p>For example, perhaps we wanted to “cool” down the shadows of our image.
“Cool” shadows are commonly seen during the day in shadows out of direct sunlight.
The light that does fall in shadows is mostly reflected light from a blue-ish sky, so the shadows will trend slightly more blue.  </p>
<p>To try this, let’s adjust the <b style="color: blue;">Blue</b> channel to be a little more prominent in the darker tones of our image, but to get back to normal around the midtones and lighter.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-darks-blue-boost.png"  width="378" height="524"/>
<figcaption>
Boosting blues in darker tones
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-dark-blue-boost.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' alt='' width='500' height='750'>
<figcaption>
Pushing up blues in darker tones (click for original).
</figcaption>
</figure>

<p>Now, here’s a question: we “cooled” the darker tones with more blue, but what if I also wanted to “warm” the lighter tones by adding a little yellow?</p>
<p>Well, there’s no “Yellow” curve to modify, so how to approach that?  Have a look at this HSV color wheel below:</p>
<figure>
<img height="400" width="400"  src="https://pixls.us/articles/basic-color-curves/Color_circle_%2528hue-sat%2529_trans.png" />
</figure>

<p>The thing to look out for here is that opposite your blue tones on this wheel, you’ll find yellow.
In fact, for each of the Red, Green, and Blue channels, the opposite colors on the color wheel will show you what an absence of that color will do to your image.
So remember:</p>
<p class='aside'>
<span><span style="color: red;">Red</span> &rarr; <span style="color: cyan;">Cyan</span></span>
<span><span style="color: green;">Green</span> &rarr; <span style="color: magenta;">Magenta</span></span>
<span><span style="color: blue;">Blue</span> &rarr; <span style="color: yellow;">Yellow</span></span>
</p>
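A quick numeric sanity check of these pairs: inverting each channel of a pure RGB primary yields its complement. This throwaway sketch is just to fix the idea, not part of any GIMP workflow:

```python
def complement(rgb):
    """The complement of an 8-bit RGB color: invert each channel."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

print(complement((255, 0, 0)))  # red   -> (0, 255, 255), cyan
print(complement((0, 255, 0)))  # green -> (255, 0, 255), magenta
print(complement((0, 0, 255)))  # blue  -> (255, 255, 0), yellow
```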

<p>What this means to you while manipulating curves is that if you drag a curve for blue up, you will boost the blue in that region of your image.
If instead you drag the curve for blue down, you will be <strong><i>removing</i></strong> blues (or boosting the <strong>Yellows</strong> in that region of your image).</p>
<p>So to boost the blues in the dark tones, but increase the yellow in the lighter tones, you could create a sort of “reverse” S-curve in the blue channel:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-darks-blue-boost-add-yellow.png"  width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-dark-blue-boost-add-yellow.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' alt='' width='500' height='750'>
<figcaption>
Boost blues in darks, boost yellow in high tones (click for original).
</figcaption>
</figure>
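The “reverse S” idea can be sketched numerically as well - lifting blue in the darks and dropping it in the lights while leaving the other channels alone. The control points below are invented for illustration, not read from the screenshots:

```python
import numpy as np

def apply_curve(channel, points_x, points_y):
    """Apply a curve defined by control points (0-255) to a uint8 channel."""
    lut = np.interp(np.arange(256), points_x, points_y)  # 256-entry lookup table
    return lut[channel].astype(np.uint8)

# "Reverse S" on blue: lifted in the darks (cool shadows), pulled down in the
# lights (warm/yellow highlights), anchored at the midpoint.
rev_x = [0, 64, 128, 192, 255]
rev_y = [24, 80, 128, 176, 232]

# Three gray pixels (rows are RGB): dark, mid, light.
grays = np.array([[20, 20, 20], [128, 128, 128], [235, 235, 235]], dtype=np.uint8)
graded = grays.copy()
graded[:, 2] = apply_curve(grays[:, 2], rev_x, rev_y)  # only blue moves
print(graded)  # dark pixel gains blue; light pixel loses blue (gains yellow)
```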

<p>In the green channel for instance, you can begin to introduce more magenta into the tones by decreasing the curve.
So dropping the green curve in the dark tones, and letting it settle back to normal towards the high tones will produce results like this:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-darks-green-suppress.png"  width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-dark-green-suppresst.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' alt='' width='500' height='750'>
<figcaption>
Suppressing the <b style="color: green;">green</b> channel in darks/mids adds a bit of <b style="color: magenta;">magenta</b>
<br>(click for original).
</figcaption>
</figure>

<p>In isolation, these curves are fun to play with, but I think that perhaps walking through some actual examples of color toning/grading would help to illustrate what I’m talking about here.
I’ll choose a couple of common toning examples to show what happens when you begin mixing all three channels up.</p>
<h2 id="color-toning-grading">Color Toning/Grading<a href="#color-toning-grading" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="-b-style-color-orange-orange-b-and-b-style-color-teal-teal-b-hell"><b style="color: orange;">Orange</b> and <b style="color: teal;">Teal</b> Hell<a href="#-b-style-color-orange-orange-b-and-b-style-color-teal-teal-b-hell" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I use the (<em>cinema film</em>) term <em>color grading</em> here because the first adjustment we will look at to illustrate curves is a horrible Hollywood trend that is best described by <a href="http://theabyssgazes.blogspot.com/2010/03/teal-and-orange-hollywood-please-stop.html" target="_blank">Todd Miro on his blog</a>.</p>
<p><em>Grading</em> is a term for color toning on film, and Todd’s post is a funny look at the prevalence of orange and teal in modern film palettes.
So it’s worth a look just to see how silly this is (and hopefully to raise awareness of the obnoxiousness of this practice).</p>
<p>The general thought here is that Caucasian skin tones trend towards orange, and if you look at the color wheel, you’ll notice that the complementary color directly opposite orange is teal.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/Kuler_orange_teal.jpg" width='600' height='322'/>
<figcaption>
Screenshot from <a href="https://color.adobe.com">Kuler</a> borrowed from Todd.
</figcaption>
</figure>

<p class='aside'>
If you don’t already know about it, Adobe has a fantastic online tool for color visualization and palette creation called <a href="http://kuler.adobe.com"><del>Kuler</del></a> <a href="https://color.adobe.com"><strong>Adobe Color CC</strong></a>.
It lets you work with colors based on some classic rules, or even generate a color palette from images.
Well worth a visit and a fantastic bookmark for fiddling with color.
</p>

<p>So a quick look at the desired effect would be to keep/boost the skin tones into a sort of orange-y pinkish color, and to push the darker tones into a teal/cyan combination.
(Colorists on films tend to use a Lift, Gamma, Gain model, but we’ll just try this out with our curves here).</p>
<p class='aside'>
Quick disclaimer - I am purposefully exaggerating these modifications to illustrate what they do.
Like most things, moderation and restraint will go a long way towards not causing your viewers’ eyeballs to bleed.
<em>Remember - <strong>light touch!</strong></em>
</p>

<p>So I know that I want to see my skin tones head into an orange-ish color.
In my image the skin tones are in the upper mids/low highs range of values, so I will start around there.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-red-high.png" width="378" height="524"/>
</figure>

<p>What I’ve done is put a point around the low midtones to anchor the curve closer to normal for those tones.
This lets me fiddle with the red channel and isolate the changes roughly to the mid and high tones only.
The skin tones in this image in the red channel will fall toward the upper end of the mids, so I’ve boosted the reds there.
Things may look a little weird at first:</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-red-highs.jpg"  width="500" height="750"/>
</figure>

<p>If you look back at the color wheel again, you’ll notice that between red and green there is yellow, and a bit closer toward red the yellow turns to orange.
What this means is that if we add some more green to those same tones, the overall color will start to shift towards orange.</p>
<p>So we can switch to the green channel now, put a point in the lower midtones again to hold things around normal, and slightly boost the green.
Don’t boost it all the way to the reds, but about 2/3<sup>rds</sup> or so to taste.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-green-high.png" width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-green-highs.jpg" width="500" height="750"/>
</figure>

<p>This puts a little more red/orange-y color into the tones around the skin.
You could further adjust this by perhaps including a bit more yellow as well.
To do this, I would again put an anchor point in the low mid tones on the blue channel, then slightly drop the blue curve in the upper tones to introduce a bit of yellow.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-blue-high.png" width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-blue-highs.jpg" width="500" height="750"/>
</figure>

<p>Remember, we’re experimenting here so feel free to try things out as we move along.
I may consider the upper tones to be finished at the moment, and now I would want to look at introducing a more blue/teal color into the darker tones.</p>
<p>I can start by boosting a bit of blues in the dark tones.
I’m going to use the anchor point I already created, and just push things up a bit.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-blue-low.png" width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-blue-lows.jpg" width="500" height="750"/>
</figure>

<p>Now I want to make the darker tones a bit more teal in color.
Remember the color wheel - <b style="color: teal;">teal</b> is the absence of red - so we will drop down the red channel in the lower tones as well.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-red-low.png" width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-red-lows.jpg" width="500" height="750"/>
</figure>

<p>And finally to push a very slight magenta into the dark tones as well, I’ll push down the green channel a bit.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-green-low.png" width="378" height="524"/>
</figure>

<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-green-lows.jpg" width="500" height="750"/>
</figure>

<p>If I wanted to go a step further, I could also put an anchor point up close to the highest values to keep the brightest parts of the image closer to a white instead of carrying over a color cast from our previous operations.  </p>
<p>If your previous operations also darkened the image a bit, you could also now revisit the <strong>Value</strong> channel, and make modifications there as well.
In my case I bumped the midtones of the image just a bit to brighten things up slightly.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/curves-dialog-orangeteal-value-final.png" width="378" height="524"/>
</figure>

<p>Finally, we end up with something like this.</p>
<figure>
<img src="https://pixls.us/articles/basic-color-curves/flickr-qsimple-5636649561-orangeteal-value-final.jpg" data-swap-src='flickr-qsimple-5636649561-original.jpg' alt='' width="500" height="750">
<figcaption>
After fooling around a bit - disgusting, isn’t it?
<br>(click for original).
</figcaption>
</figure>

<p>I am exaggerating things here to illustrate a point.
Please don’t do this to your photos. :)</p>
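For the curious, the whole walkthrough above amounts to three per-channel lookup tables. Here is a rough sketch, with control points invented to echo (not reproduce) the screenshots: red and green anchored in the low mids and boosted above for the orange skin tones, blue lifted in the darks for the teal shadows:

```python
import numpy as np

def lut_from_points(points_x, points_y):
    """Build a 256-entry lookup table from curve control points (0-255)."""
    return np.interp(np.arange(256), points_x, points_y)

# Illustrative curves only - not the exact points used in the article.
red_lut   = lut_from_points([0, 64, 128, 200, 255], [0, 48, 128, 225, 255])
green_lut = lut_from_points([0, 64, 128, 200, 255], [0, 52, 128, 214, 255])
blue_lut  = lut_from_points([0, 64, 128, 200, 255], [24, 84, 128, 188, 250])

def grade(img):
    """Apply the three channel curves to an RGB uint8 image (H x W x 3)."""
    out = np.empty_like(img)
    out[..., 0] = red_lut[img[..., 0]].astype(np.uint8)
    out[..., 1] = green_lut[img[..., 1]].astype(np.uint8)
    out[..., 2] = blue_lut[img[..., 2]].astype(np.uint8)
    return out

shadow = np.full((1, 1, 3), 40, dtype=np.uint8)        # a dark gray pixel
skin = np.array([[[210, 160, 130]]], dtype=np.uint8)   # a skin-tone-ish pixel
print(grade(shadow)[0, 0])  # blue now dominates: cooler, teal-leaning shadow
print(grade(skin)[0, 0])    # red boosted: warmer, more orange skin tone
```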
<p class='aside'>
If you’d like to download the curves file of the results we reached above, get it here:<br><a href="https://docs.google.com/open?id=0B21lPI7Ov4CVdmJnOXpkQjN4aWc">Orange Teal Hell Color Curves</a>
</p>


<h2 id="conclusion">Conclusion<a href="#conclusion" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Remember, think about what the color curves represent in your image to help you achieve your final results.
Begin looking at the different tonalities in your image and how you’d like them to appear as part of your final vision.</p>
<p>For even more fun - realize that the colors in your images can help to evoke emotional responses in the viewer, and adjust things accordingly.
I’ll leave it as an exercise for the reader to determine some of the associations between colors and different emotions.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Welcome G'MIC ]]></title>
            <link>https://pixls.us/blog/2015/07/welcome-g-mic/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/welcome-g-mic/</guid>
            <pubDate>Wed, 22 Jul 2015 21:49:52 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/07/welcome-g-mic/gmic-logo.jpg" /><br/>
                 <h1>Welcome G'MIC</h1>  
                 <h2>Moving G'MIC to a modern forum</h2>   
                <p>Anyone who’s followed me for a while likely knows that I’m friends with <a href="http://gmic.eu">G’MIC</a> (GREYC’s Magic for Image Computing) creator <a href="https://plus.google.com/100527311518040751439/about">David Tschumperlé</a>.
With David’s help I was also able to release all of my film <a href="http://blog.patdavid.net/2013/08/film-emulation-presets-in-gmic-gimp.html">emulation</a> <a href="http://blog.patdavid.net/2013/09/film-emulation-presets-in-gmic-gimp.html">presets</a> in G’MIC for everyone to use, and we collaborated on a bunch of fun processing filters for photographers in G’MIC (split details/wavelet decompose, <a href="http://blog.patdavid.net/2013/02/calvin-hollywood-freaky-details-in-gimp.html">freaky details</a>, <a href="http://blog.patdavid.net/2013/09/film-emulation-presets-in-gmic-gimp.html">film emulation</a>, <a href="http://blog.patdavid.net/2013/12/mean-averaged-music-videos-g.html">mean/median averaging</a>, and more).</p>
<!-- more -->
<figure>
<img src="https://pixls.us/blog/2015/07/welcome-g-mic/David-and-the-Beauty-Dish.jpg" alt='David Tschumperle beauty dish GMIC'>
<figcaption>
<a href="https://www.flickr.com/photos/patdavid/13898506065/in/dateposted-public/">David</a>, by Me (at <a href="http://libregraphicsmeeting.org/2014/">LGM2014</a>)
</figcaption>
</figure>

<p>It was also David who helped me by writing a G’MIC script to <a href="http://blog.patdavid.net/2013/12/mean-averaged-music-videos-g.html">mean average images</a> for me when I started making my amalgamations
(thus moving me away from my previous method of using <a href="http://imagemagick.org/script/index.php">ImageMagick</a>):</p>
<figure>
<a data-flickr-embed="true" href="https://www.flickr.com/photos/patdavid/17247263555/in/dateposted-public/" title="Mad Max Fury Road Trailer 2 - Amalgamation">
<img src="https://pixls.us/blog/2015/07/welcome-g-mic/max-max-fury-road.jpg" width="640" height="360" alt="Mad Max Fury Road Trailer 2 - Amalgamation"></a>
<figcaption>
<a href="https://www.flickr.com/photos/patdavid/17247263555/in/dateposted-public/">Mad Max Fury Road Trailer 2 - Amalgamation</a>
</figcaption>
</figure>

<p>So when the forums here on <a href="https://discuss.pixls.us">discuss.pixls.us</a> were finally up and running, it only made sense to offer G’MIC its own part of the forums.
They had previously been using a combination of <a href="https://www.flickr.com/groups/gmic">Flickr groups</a> and <a href="http://gimpchat.com/viewforum.php?f=28">gimpchat.com</a>.
These are great forums; they were just a little cumbersome to use.</p>
<p><strong>You can find the new <a href="https://discuss.pixls.us/t/release-of-gmic-1-6-5-1/284">G’MIC category here</a>.</strong>
Stop in and say hello!</p>
<p>I’ll also be porting over the tutorials and articles on work we’ve collaborated on soon (freaky details, film emulation).</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Congratulations ]]></title>
            <link>https://pixls.us/blog/2015/07/congratulations/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/congratulations/</guid>
            <pubDate>Wed, 22 Jul 2015 18:40:41 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/07/congratulations/riley-brandt-course-2x.png" /><br/>
                 <h1>Congratulations</h1>  
                 <h2>To the winners of the Open Source Photography Course Giveaway</h2>   
                <p>I compiled the list of entries this afternoon across the various social networks and let <a href="http://random.org">random.org</a> pick an integer in the domain of all of the entries…</p>
<p>So a big congratulations goes out to:</p>
<p><a href="http://dennyweinmann.com/"><strong> Denny Weinmann </strong></a> (<small><a href="https://www.facebook.com/dennyweinmannphotography">Facebook</a>, <a href="https://twitter.com/dennyweinmann">@dennyweinmann</a>, <a href="https://plus.google.com/+DennyWeinmann/posts">Google+</a> </small>)<br>and<br><a href="http://www.nhaines.com/"><strong> Nathan Haines </strong></a> (<small><a href="https://twitter.com/nhaines">@nhaines</a>, <a href="https://plus.google.com/+thenathanhaines">Google+</a></small>)</p>
<p>I’ll be contacting you shortly (assuming you don’t read this announcement here first…)!
I will need a valid email address from you both in order to send your download links.
You can reach me at <a href="mailto:pixlsus@pixls.us">pixlsus@pixls.us</a>.</p>
<!-- more -->
<p>Thank you to everyone who shared the post to help raise awareness!
The lessons are still on sale until August 1<sup>st</sup> for $35<small>USD</small> over on <a href="http://www.rileybrandt.com/lessons/">Riley’s site</a>.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ The Open Source Photography Course ]]></title>
            <link>https://pixls.us/blog/2015/07/the-open-source-photography-course/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/the-open-source-photography-course/</guid>
            <pubDate>Wed, 15 Jul 2015 17:12:35 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/07/the-open-source-photography-course/riley-brandt-course-2x.png" /><br/>
                 <h1>The Open Source Photography Course</h1>  
                 <h2>A chance to win a free copy</h2>   
                <p>Photographer <a href="http://www.rileybrandt.com/">Riley Brandt</a> recently released his <a href="http://www.rileybrandt.com/lessons/"><em>Open Source Photography Course</em></a>.
I managed to get a little bit of his time to answer some questions for us about his photography and the course itself.
You can read the full interview <a href="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/">right here</a>:</p>
<p><a href="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/"><strong>A Q&amp;A with Photographer Riley Brandt</strong></a></p>
<p>As an added bonus just for <a href="https://pixls.us">PIXLS.US</a> readers, he has gifted us a nice surprise!</p>
<h2 id="did-someone-say-free-stuff-"><a href="#did-someone-say-free-stuff-" class="header-link-alt">Did Someone Say Free Stuff?</a></h2>
<p>Riley went above and beyond for us.
He has graciously offered us an opportunity for 2 readers to win a <em>free</em> copy of the course (one in an open format like WebM/VP8, and another in a popular format like MP4/H.264)!</p>
<!-- more -->
<p>For a chance to win, I’m asking you to share a link to this post on:</p>
<ul>
<li><a href="https://twitter.com/intent/tweet?hashtags=PIXLSGiveAway&amp;url=https://pixls.us/blog/2015/07/the-open-source-photography-course/">Twitter</a> </li>
<li><a href="https://plus.google.com/share?url=https://pixls.us/blog/2015/07/the-open-source-photography-course/">Google+</a> </li>
<li><a href="https://www.facebook.com/sharer/sharer.php?u=https://pixls.us/blog/2015/07/the-open-source-photography-course/">Facebook</a> </li>
</ul>
<p>with the hashtag <strong>#PIXLSGiveAway</strong> (you can click those links to share to those networks).
Each social network counts as one entry, so you can triple your chances by posting across all three.</p>
<p>Next week (<del>Monday, 2015-07-20</del> Wednesday, 2015-07-22 to give folks a full week), I will search those networks for all the posts and compile a list of people, from which I’ll pick the winners (using random.org).
Make sure you get that hashtag right! :)</p>
<h2 id="some-previews"><a href="#some-previews" class="header-link-alt">Some Previews</a></h2>
<p>Riley has released three nice preview videos to give a taste of what’s in the courses:</p>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/TGwuMYsuAuY?list=PL33t7emXCBHkg6a6Ao_ULh7fsgWXg5ua9" frameborder="0" allowfullscreen></iframe>
</div>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ A Q&A with Photographer Riley Brandt ]]></title>
            <link>https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/</link>
            <guid isPermaLink="true">https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/</guid>
            <pubDate>Wed, 15 Jul 2015 13:47:30 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/riley-brandt-lede.jpg" /><br/>
                 <h1>A Q&A with Photographer Riley Brandt</h1>  
                 <h2>On creating a F/OSS photography course</h2>   
                <p><a href="http://www.rileybrandt.com/">Riley Brandt</a> is a full-time photographer (<em>and sometimes videographer</em>) at the <a href="http://www.ucalgary.ca/">University of Calgary</a>.
He previously worked for the weekly (Calgary) local magazine <a href="http://www.ffwdweekly.com/">Fast Forward Weekly (FFWD)</a> as well as <a href="http://www.sophiamodels.com/">Sophia Models International</a>, 
and his work has been published in many places from the <em>Wall Street Journal</em> to <em>Der Spiegel</em> (and <a href="http://www.rileybrandt.com/about/">more</a>).</p>
<figure>
<a href='http://www.rileybrandt.com/'>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/rb-logo.png" alt='Riley Brandt Logo' width='244' height='46'>
</a>
</figure>

<p>He recently announced the availability of <a href="http://www.rileybrandt.com/lessons/"><em>The Open Source Photography Course</em></a>.
It’s a full photographic workflow course using only free, open source software that he has spent the last <em>ten months</em> putting together.</p>
<p class='aside'>
Riley has graciously offered two free copies for us to give away!<br>For a chance to win, see <a href="https://pixls.us/blog/2015/07/the-open-source-photography-course/">this blog post</a>.
</p>

<figure class='big-vid'>
<a href="http://www.rileybrandt.com/lessons/">
    <img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/riley-brandt-course.png" alt='Riley Brandt Photography Course Banner' width='940' height='345'>
</a>
</figure>

<p>I was lucky enough to get a few minutes of Riley’s time to ask him a few questions about his photography and this course.</p>
<h2 id="a-chat-with-riley-brandt">A Chat with Riley Brandt<a href="#a-chat-with-riley-brandt" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="tell-us-a-bit-about-yourself-">Tell us a bit about yourself!<a href="#tell-us-a-bit-about-yourself-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Hello, my name is Riley Brandt and I am a professional photographer at the University of Calgary. </p>
<p>At work, I get to spend my days running around a university campus taking pictures of everything from a rooster with prosthetic legs made in a 3D printer, to wild students dressed in costumes jumping into freezing cold water for charity. It can be pretty awesome.</p>
<p>Outside of work, I am a supporter of Linux and open source software. I am also a bit of a film geek.</p>
<figure>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Pixls-Interview-Riley-B_Web_10.jpg" alt='Univ. Calgary Prosthetic Rooster' width='640' height='419' title='Gentlemen, we can rebuild him.  We have the technology.'>
<figcaption>
<small>[<em>ed. note: He’s not kidding - That’s a rooster with prosthetic legs…</em>]</small>
</figcaption>
</figure>


<h3 id="i-see-you-were-trained-in-photojournalism-is-this-still-your-primary-photographic-focus-">I see you were trained in photojournalism.  Is this still your primary photographic focus?<a href="#i-see-you-were-trained-in-photojournalism-is-this-still-your-primary-photographic-focus-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Though I definitely enjoy portraits, fashion and lifestyle photography, my day to day work as a photographer at a university is very similar to my photojournalism days.</p>
<p>I have to work with whatever poor lighting conditions I am given, and I have to turn around those photos quickly to meet deadlines.</p>
<p>However, I recently became an uncle for the first time to a baby boy, so I imagine I will be expanding into newborn and toddler photography very soon :)</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Pixls-Interview-Riley-B_Web_07.jpg" alt='Riley Brandt Environment Portrait Sample' width='960' height='592'>
<figcaption>
<a href="http://www.rileybrandt.com/project/enviro-portraits/">Environmental Portrait</a> by Riley Brandt 
</figcaption>
</figure>


<h3 id="how-long-have-you-been-a-photographer-">How long have you been a photographer?<a href="#how-long-have-you-been-a-photographer-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Photography started as a hobby for me when I was living in the Czech Republic in the late 90s and early 2000s. My first SLR camera was the classic Canon AE-1 (which I still have).</p>
<p>I didn’t start to work as a full time professional photographer until I graduated from the Journalism program at SAIT Polytechnic in 2008.</p>
<h3 id="what-type-of-photography-do-you-enjoy-doing-the-most-">What type of photography do you enjoy doing the most?<a href="#what-type-of-photography-do-you-enjoy-doing-the-most-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>In a nutshell, I enjoy photographing people. This includes both portraits and candid moments at events.</p>
<p>I love meeting someone with an interesting story, and then trying to capture some of their personality in an image.</p>
<p>At events, I’ve witnessed everything from the joy of someone meeting an astronaut they idolize, to the anguish of a parent at graduation collecting a degree instead of their child who was killed. Capturing genuine emotion at events is challenging, and overwhelming at times, but is also very gratifying.</p>
<p>It would be hard for me to choose between candids or portraits. I enjoy them both.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Project_Portraits_Update_0003.jpg" alt='Riley Brandt Portraits' width='940' height='715'>
<figcaption>
<a href="http://www.rileybrandt.com/project/portraits/">Portraits</a> by Riley Brandt
</figcaption>
</figure>


<h3 id="how-would-you-describe-your-personal-style-">How would you describe your personal style?<a href="#how-would-you-describe-your-personal-style-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I’ve been told several times that my images are very “clean”, which I think means I limit the image to only a few key elements, and remove any major distractions.</p>
<h3 id="if-you-had-to-choose-your-favorite-image-from-your-portfolio-what-would-it-be-">If you had to choose your favorite image from your portfolio, what would it be?<a href="#if-you-had-to-choose-your-favorite-image-from-your-portfolio-what-would-it-be-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I don’t have a favorite image in my collection.</p>
<p>However, at the end of a work week, I usually have at least one image that I am really happy with. A photo that I will look at again when I get home from work. An image that I look forward to seeing published. Those are my favorites.</p>
<h3 id="has-free-software-always-been-the-foundation-of-your-workflow-">Has free-software always been the foundation of your workflow?<a href="#has-free-software-always-been-the-foundation-of-your-workflow-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Definitely not. I started with Adobe software, and still use it (and other non-free software) at work. Though hopefully that will change.</p>
<p>I switched to free software for all my personal work at home, because all my computers at home run Linux.</p>
<p>I also dislike a lot of Adobe’s actions as a company, e.g. horrible security and switching to a “cloud” version of their software, which is really just a DRM scheme. </p>
<p>There are many significant reasons not to run non-free software, but what really motivated my switch initially was simply that Adobe never released a Linux version of their software.</p>
<h3 id="what-is-your-normal-os-platform-">What is your normal OS/platform?<a href="#what-is-your-normal-os-platform-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I guess I am transitioning from Ubuntu to Fedora (both GNU/Linux). My main desktop is still running Ubuntu Gnome 14.04. But my laptop is running Fedora 21.</p>
<p>Ubuntu doesn’t offer an up-to-date version of the Gnome desktop environment. It also doesn’t use the Gnome Software Centre or many Gnome apps. Fedora does. So my desktop will be running Fedora in the near future as well.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Pixls-Interview-Riley-B_Web_02.jpg" alt='Riley Brandt Summer Days' width='960' height='470' >
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Pixls-Interview-Riley-B_Web_03.jpg" alt='Riley Brandt Summer Days' width='960' height='598' >
<figcaption>
<a href="http://www.rileybrandt.com/project/lifestyle/">Lifestyle</a> by Riley Brandt
</figcaption>
</figure>



<h3 id="what-drove-you-to-consider-creating-a-free-software-centric-course-">What drove you to consider creating a free-software centric course?<a href="#what-drove-you-to-consider-creating-a-free-software-centric-course-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Because it was so difficult for me to transition from Adobe software to free software, I wanted to provide an easier option for others trying to do the same thing.</p>
<p>Instead of spending weeks or months searching through all the different manuals, tutorials and websites, someone can spend a weekend watching my course and be up and running quickly.</p>
<p>Also, it was just a great project to work on. I got to combine two of my passions, Linux and photography.</p>
<h3 id="is-the-course-the-same-as-your-own-approach-">Is the course the same as your own approach?<a href="#is-the-course-the-same-as-your-own-approach-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Yes, it’s the same way I work. </p>
<p>I start with fundamentals like monitor calibration and file management. Then I move on to basics like correcting exposure, color, contrast and noise. After that, I cover less frequently used tools.</p>
<h3 id="the-course-focuses-heavily-on-darktable-for-raw-processing-have-you-also-tried-any-of-the-other-options-such-as-rawtherapee-">The course focuses heavily on <a href="http://www.darktable.org">darktable</a> for RAW processing - have you also tried any of the other options such as RawTherapee?<a href="#the-course-focuses-heavily-on-darktable-for-raw-processing-have-you-also-tried-any-of-the-other-options-such-as-rawtherapee-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I originally tried <a href="https://www.digikam.org/">digiKam</a> because it looked like it had most of the features I needed. However, KDE and I are like oil and water. The user interface felt impenetrable to me, so I moved on.</p>
<p>I also tried <a href="http://rawtherapee.com/">RawTherapee</a>, but only briefly. I got some bad results in the beginning, but that was probably due to my lack of familiarity with the software. I might give it another go one day.</p>
<p>Once <a href="http://www.darktable.org">darktable</a> added advanced selective editing with masks, I was all in. I like the photo management element as well.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/Pixls-Interview-Riley-B_Web_09.jpg" alt='Riley Brandt Portraits' width='960' height='470'>
</figure>

<h3 id="have-you-considered-expanding-your-course-offerings-to-include-other-aspects-of-photography-">Have you considered expanding your (course) offerings to include other aspects of photography?<a href="#have-you-considered-expanding-your-course-offerings-to-include-other-aspects-of-photography-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Umm… not just yet. I first need to rest :)</p>
<h3 id="if-you-were-to-expand-the-current-course-what-would-you-like-to-focus-on-next-">If you were to expand the current course, what would you like to focus on next?<a href="#if-you-were-to-expand-the-current-course-what-would-you-like-to-focus-on-next-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>It’s hard to say right now. Possibly a more in-depth look at GIMP. Or a series where viewers watch me edit photos from start to finish.</p>
<h3 id="it-took-10-months-to-create-this-course-will-you-be-taking-a-break-or-starting-right-away-on-the-next-installment-">It took 10 months to create this course, will you be taking a break or starting right away on the next installment? :)<a href="#it-took-10-months-to-create-this-course-will-you-be-taking-a-break-or-starting-right-away-on-the-next-installment-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>A break for sure :) I spent most of my weekends preparing and recording a lesson for the past year. So yes, first a break.</p>
<h3 id="some-parting-words-">Some parting words?<a href="#some-parting-words-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p> I would like to recommend the <a href="http://gimpmagazine.org/courses/">Desktop Publishing course</a> created by <a href="http://gimpmagazine.org/">GIMP Magazine</a> editor Steve Czajka for anyone who is trying to transition from Adobe InDesign to Scribus.</p>
<p>I would also love to see someone create a similar course for <a href="https://inkscape.org">Inkscape</a>.</p>
<h2 id="the-course">The Course<a href="#the-course" class="header-link"><i class="fa fa-link"></i></a></h2>
<figure> 
<a href="http://www.rileybrandt.com/lessons/">
    <img src="https://pixls.us/articles/a-q-a-with-photographer-riley-brandt/riley-brandt-course.png" alt='Riley Brandt Photography Course Banner' width='640' height='235'>
</a>
</figure>

<p><a href="http://www.rileybrandt.com/lessons/"><em>The Open Source Photography Course</em></a> is available for order now at <a href="http://www.rileybrandt.com/">Riley’s website</a>.
The course is:</p>
<ul>
<li>Over 5 <em>hours</em> of video material</li>
<li>DRM free</li>
<li>10% of net profits donated back to FOSS projects</li>
<li>Available in an open format (WebM/VP8) or a popular one (H.264), all in 1080p</li>
<li>$50 USD</li>
</ul>
<p>He has also released some preview videos of the course:</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/TGwuMYsuAuY?list=PL33t7emXCBHkg6a6Ao_ULh7fsgWXg5ua9" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>From his site, here is a nice course outline to give a feel for what is covered:</p>
<h2 id="course-outline">Course Outline<a href="#course-outline" class="header-link"><i class="fa fa-link"></i></a></h2>
<h4 id="chapter-1-getting-started">Chapter 1. Getting Started<a href="#chapter-1-getting-started" class="header-link"><i class="fa fa-link"></i></a></h4>
<ol>
<li>Course Introduction<br><small>Welcome to The Open Source Photography Course</small></li>
<li>Calibrate Your Monitor<br><small>Start your photography workflow the right way by calibrating your monitor with dispcalGUI</small></li>
<li>File Management<br><small>Make archiving and searching for photos easier by using naming conventions and folder organization</small></li>
<li>Download and Rename<br><small>Use Rapid Photo Downloader to rename all your photos during the download process</small></li>
</ol>
<h4 id="chapter-2-raw-editing-in-darktable">Chapter 2. Raw Editing in darktable<a href="#chapter-2-raw-editing-in-darktable" class="header-link"><i class="fa fa-link"></i></a></h4>
<ol>
<li>Introduction to darktable, Part One<br><small>Get to know darktable’s user interface</small></li>
<li>Introduction to darktable, Part Two<br><small>Take a quick look at the slideshow view in darktable</small></li>
<li>Import and Tag<br><small>Import photos into darktable and tag them with keywords, copyright information and descriptions</small></li>
<li>Rating Images<br><small>Learn an efficient way to cull, rate, add color labels and filter photos in lighttable</small></li>
<li>Darkroom Overview<br><small>Learn the basics of the darkroom view including basic module adjustments and creating favorites</small></li>
<li>Correcting Exposure, Part 1<br><small>Correct exposure with the base curves, levels, exposure, and curves modules</small></li>
<li>Correcting Exposure, Part 2<br><small>See several examples of combining modules to correct an image’s exposure</small></li>
<li>Correct White Balance<br><small>Use presets and make manual changes in the white balance module to color correct your images</small></li>
<li>Crop and Rotate<br><small>Navigate through the many crop and rotate options including guides and automatic cropping</small></li>
<li>Highlights and Shadows<br><small>Recover details lost in the shadows and highlights of your photos</small></li>
<li>Adding Contrast<br><small>Make your images stand out by adding contrast with the levels, tone curve and contrast modules</small></li>
<li>Sharpening<br><small>Fix those soft images with the sharpen, equalizer and local contrast modules</small></li>
<li>Clarity<br><small>Sharpen up your midtones by utilizing the local contrast and equalizer modules</small></li>
<li>Lens Correction<br><small>Learn how to fix lens distortion, vignetting and chromatic aberrations</small></li>
<li>Noise Reduction<br><small>Learn the fastest, easiest and best way to clean up grainy images taken in low light</small></li>
<li>Masks, Part One<br><small>Discover the possibilities of selective editing with the shape, gradient and path tools</small></li>
<li>Masks, Part Two<br><small>Take your knowledge of masks further in this lesson about parametric masks</small></li>
<li>Color Zones<br><small>Learn how to limit your adjustments to a specific color’s hue, saturation or brightness</small></li>
<li>Spot Removal<br><small>Save time by making simple corrections in darktable, instead of opening up GIMP</small></li>
<li>Snapshots<br><small>Quickly compare different points in your editing history with snapshots</small></li>
<li>Presets and Styles<br><small>Save your favorite adjustments for later with presets and styles</small></li>
<li>Batch Editing<br><small>Save time by editing one image, then quickly applying those same edits to hundreds of images</small></li>
<li>Searching for Images<br><small>Learn how to sort and search through a large collection of images in Lighttable</small></li>
<li>Adding Effects<br><small>Get creative in the effects group with vignetting, framing, split toning and more</small></li>
<li>Exporting Photos<br><small>Learn how to rename, resize and convert your RAW photos to JPEG, TIFF and other formats</small></li>
</ol>
<h4 id="chapter-3-touch-ups-in-gimp">Chapter 3. Touch Ups in GIMP<a href="#chapter-3-touch-ups-in-gimp" class="header-link"><i class="fa fa-link"></i></a></h4>
<ol>
<li>Introduction to GIMP<br><small>Install GIMP, then get to know your way around the user interface</small></li>
<li>Setting Up GIMP, Part 1<br><small>Customize the user interface, adjust a few tools and install color profiles</small></li>
<li>Setting Up GIMP, Part 2<br><small>Set keyboard shortcuts that mimic Photoshop’s and install a couple of plugins</small></li>
<li>Touch Ups<br><small>Use the heal tool and the clone tool to clean up your photos</small></li>
<li>Layer Masks<br><small>Learn how to make selective edits and non-destructive edits using layer masks</small></li>
<li>Removing Distractions<br><small>Combine layers, a helpful plugin and layer masks to remove distractions from your photos</small></li>
<li>Preparing Images for the Web<br><small>Reduce file size while retaining quality before you upload your photos to the web</small></li>
<li>Getting Help and Finding the Community<br><small>Find out which websites, mailing lists and forums to go to for help and friendly discussions</small></li>
</ol>
<hr>
<div class='center'><small>All the images in this post &copy; <a href="http://www.rileybrandt.com/">Riley Brandt</a>.</small></div>

  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ darktable on Windows ]]></title>
            <link>https://pixls.us/blog/2015/07/darktable-on-windows/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/darktable-on-windows/</guid>
            <pubDate>Mon, 13 Jul 2015 21:54:23 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/07/darktable-on-windows/three-windows.jpg" /><br/>
                 <h1>darktable on Windows</h1>  
                 <h2>Why don't you provide a Windows build?</h2>   
                <p>Due to the heated debate lately, a short foreword:</p>
<p>We do not want to harass, insult or criticize anyone due to his or her choice of operating system. Still, from time to time we encounter comments from people accusing us of ignorance or even disrespect towards Windows users. If any of our statements can be interpreted as such, we want to apologize for that – and once more give the full explanation of our lacking Windows support.</p>
<h2 id="the-darktable-project"><a href="#the-darktable-project" class="header-link-alt">The darktable project</a></h2>
<p>darktable is developed and maintained by a small group of people in their spare time, just for fun. We do not have any funds, do not provide travel reimbursements for conferences or meetings, and don’t even have a legal entity at the moment. In other words: none of the developers has ever seen (and most likely never will see) a single $(INSERT YOUR CURRENCY) for the development of darktable, which is thus a project purely driven by enthusiasm and curiosity.</p>
<!-- more -->
<h2 id="the-development-environment"><a href="#the-development-environment" class="header-link-alt">The development environment</a></h2>
<p>The team is quite mixed: some have a professional background in computing, others don’t. But all love photography and like exploring the full information recorded by the camera themselves. Most new features are added to darktable when an expert in, let’s say, GPU computing steps up and is willing to provide and maintain code for the new feature.</p>
<p>Up until now, one technical thing has united all the developers: none of them uses Windows as their operating system. Some use Mac OS X, Solaris, etc., but most run some Linux distribution. New flavors of operating systems kept being added to our list as people willing to support their favorite system joined the team.</p>
<p>Also (since it stands out a bit as a “commercial operating system”), Mac OS X support arrived in exactly this way. Someone (parafin!) popped up and said: “I like this software, and I want to run darktable on my Mac.” He compiled it on OS X and has since done testing and package building for the Mac OS X operating system. And this is not an easy job. Initially there were just snapshot builds from git, no official releases, not even release candidates, yet the first complaints about quality had already arrived. In the end, a lot of time was invested in working around specific peculiarities of this operating system to make it work and to provide builds for every new version of darktable released.</p>
<p>This nicely shows one of the consequences of the project’s organizational (non-)structure and development approach: first and foremost, every developer cares about darktable running on his or her personal system.</p>
<h2 id="code-contributions-and-feature-requests"><a href="#code-contributions-and-feature-requests" class="header-link-alt">Code contributions and feature requests</a></h2>
<p>Usually, feature requests from users or from the community are treated like a brainstorming session. Someone proposes a new feature, people think about and discuss it, and if someone likes the idea and has time to code it, it might eventually come – if the team agrees on including the feature.</p>
<p>But life is not a picnic. You probably wouldn’t drop by your neighbor’s and demand that he repair your broken car – just because you know he loves to tinker with his vintage car collection at home.<br> The same applies here. No one feels comfortable if requests are suddenly made that would require a non-negligible amount of work – but with no return for the person carrying out the work, neither moneywise nor intellectually.</p>
<p>This is the feeling created every time someone just passes by, leaving only the statement: “Why isn’t there a Windows build (yet)?”.</p>
<h2 id="providing-a-windows-build-for-darktable"><a href="#providing-a-windows-build-for-darktable" class="header-link-alt">Providing a Windows build for darktable</a></h2>
<p>The answer has always been the same: because no one has stepped up to do it. None of the passers-by requesting a Windows build actually took the initiative, downloaded the source code and started the compilation. No one approached the development team with actual build errors and problems encountered during a compilation using MinGW or another toolchain on Windows. The only things ever aired were requests for ready-made binaries.</p>
<p>As stated earlier here, the development of darktable is totally about one’s own initiative. This project (as many others) is not about ordering things and getting them delivered. It’s about starting things, participating and contributing. It’s about trying things out yourself. It’s FLOSS.</p>
<p>One argument that pops up from time to time is: “darktable’s user base would grow immensely with a Windows build!”. This might be true. But what is the benefit of this? Why should a developer care how many people are using the software if his or her sole motivation was producing a nice piece of software to process raw files with?</p>
<p>On the contrary: more users usually means more support, more bug tracker tickets, more work. And this work usually isn’t the pleasing sort; hunting rare bugs that occur with some uncommon camera’s files on some other operating system is not exactly what people love to spend their Saturday afternoons on.</p>
<p>This argumentation would make total sense if darktable were sold, the developers were paid, and the overall profit depended on the number of people using the software. No one can be blamed for sending such requests to a company selling their software or service (for your money or your data, whatever), and it is up to them to make an economic decision on whether it makes sense to invest the time and manpower or not.</p>
<p>But this is different.</p>
<p>Not building darktable on Windows is not a technical issue after all. There certainly are problems of portability, and code changes would be necessary, but in the end it would probably work out. The real problem is (as has been pointed out by the darktable development team many times in the past) the maintenance of the build as well as all the dependencies that the package requires.</p>
<p>The darktable team is trying to deliver high-quality, reliable software. Photographers rely on being able to re-process their old developments with recent versions of darktable and obtain exactly the same result – and that on many platforms, be it CPUs or GPUs with OpenCL. Satisfying this objective requires quite some testing, thinking and maintenance work.</p>
<p>Spawning another build on a platform that not a single developer is using would mean lots and lots of testing – in unfamiliar terrain, and with no fun attached at all. Releasing a half-working, barely tested build for Windows would harm the project’s reputation and diminish confidence that the software treats your photographs carefully.</p>
<p>We hope that this reasoning is comprehensible and that no one feels disrespected due to the choice of operating system.</p>
<h2 id="references"><a href="#references" class="header-link-alt">References</a></h2>
<p><a href="http://www.darktable.org/2011/07/that-other-os/">That other OS</a></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ PhotoFlow Blended Panorama Tutorial ]]></title>
            <link>https://pixls.us/blog/2015/07/photoflow-blended-panorama-tutorial/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/07/photoflow-blended-panorama-tutorial/</guid>
            <pubDate>Tue, 07 Jul 2015 14:29:45 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/07/photoflow-blended-panorama-tutorial/pano_final2.jpg" /><br/>
                 <h1>PhotoFlow Blended Panorama Tutorial</h1>  
                 <h2>Andrea Ferrero has been busy!</h2>   
<p>After quite a bit of back and forth, I am happy to announce that the latest tutorial is up: <a href="https://pixls.us/articles/a-blended-panorama-with-photoflow/">A Blended Panorama with PhotoFlow</a>!
This contribution comes from <a href="http://photoflowblog.blogspot.fr/">Andrea Ferrero</a>, the creator of a new project: <a href="http://aferrero2707.github.io/PhotoFlow/">PhotoFlow</a>.</p>
<p>In it, he walks through a process of stitching a panorama together using Hugin and blending multiple exposure options through masking in PhotoFlow (see lede image).
The results are quite nice and natural looking!</p>
<!-- more -->
<h2 id="local-contrast-enhancement-gaussian-vs-bilateral"><a href="#local-contrast-enhancement-gaussian-vs-bilateral" class="header-link-alt">Local Contrast Enhancement: Gaussian vs. Bilateral</a></h2>
<p>Andrea also runs through a quick video comparison of doing LCE using both a Gaussian and Bilateral blur, in case you ever wanted to see them compared side-by-side:</p>
<div class='fluid-vid'>
<iframe width="640" height="480" src="https://www.youtube-nocookie.com/embed/Uj4cmXlezVc?rel=0" frameborder="0" allowfullscreen></iframe>
</div>

<p>He <a href="https://discuss.pixls.us/t/local-contrast-enhancement-gaussian-vs-bilateral-blurring/241">started a topic post</a> about it in the forums as well.</p>
<h2 id="thoughts-on-the-main-page"><a href="#thoughts-on-the-main-page" class="header-link-alt">Thoughts on the Main Page</a></h2>
<p>Over on <a href="https://discuss.pixls.us">discuss</a> I started a thread to <a href="https://discuss.pixls.us/t/main-site-frontpage-lede/244/4">talk about some possible changes</a> to the main page of the site.</p>
<p>Specifically I’m talking about the background lede image at the very top of the main page:</p>
<figure>
<img src='https://discuss.pixls.us/uploads/default/optimized/1X/ef803873985000ea678778d99362ad0666dd7c49_1_690x437.png'>
</figure>

<p>I had originally created that image as a placeholder in <a href="https://www.blender.org">Blender</a>.
The site is intended to be photography-centric, so the natural thought was: why not use photos as a background instead?</p>
<p>The thought is to rotate through images as provided by the community.
I’ve also mocked up two versions of using an image as a background.</p>
<p><a href="https://pixls.us/lede-image.html"><strong>Simple replacement of the image</strong></a> with photos from the community.
This is the most popular in the poll on the forum at the moment.
The image will be rotated amongst images provided by community members.
I just need to make sure that the text shown is legible over whatever the image may be…</p>
<p><a href="https://pixls.us/lede-image-full.html"><strong>Full viewport splash</strong></a> version, where the image fills the viewport.
This is not very popular from the feedback I received (thank you akk, ankh, muks, DrSlony, LebedevRI, and others on irc!). 
I personally like the idea but I can understand why others may not like it.</p>
<p>If anyone wants to chime in (or vote in the poll) then head <a href="https://discuss.pixls.us/t/main-site-frontpage-lede/244/4">over to the forum topic</a> and let us know your thoughts!</p>
<p>Also, a big <strong>thank you</strong> to <a href="http://londonlight.org/zp/">Morgan Hardwood</a> for allowing us to use that image as a background example.
If you want a nice way to support F/OSS development, it just so happens that Morgan is a developer for <a href="http://www.rawtherapee.com">RawTherapee</a>, and a print of that image is available for purchase.
<a href="mailto:photography2015@londonlight.org">Contact him</a> for details.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ A Blended Panorama with PhotoFlow ]]></title>
            <link>https://pixls.us/articles/a-blended-panorama-with-photoflow/</link>
            <guid isPermaLink="true">https://pixls.us/articles/a-blended-panorama-with-photoflow/</guid>
            <pubDate>Fri, 26 Jun 2015 16:31:39 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_lede.jpg" /><br/>
                 <h1>A Blended Panorama with PhotoFlow</h1>  
                 <h2>Creating panoramas with Hugin and PhotoFlow</h2>   
                <p>The goal of this tutorial is to show how to create a sort-of-HDR panoramic image using only Free and Open Source tools.
To explain my workflow I will use the image below as an example.</p>
<p>This panorama was obtained from the combination of six views, each consisting of three bracketed shots at -1EV, 0EV and +1EV exposure.
The three exposures are stitched together with the <a href="http://hugin.sourceforge.net/">Hugin</a> suite, and then exposure-blended with <a href="http://enblend.sourceforge.net/">enfuse</a>.
The <a href="https://github.com/aferrero2707/PhotoFlow">PhotoFlow RAW editor</a> is used to prepare the initial images and to finalize the processing of the assembled panorama.
The final result of the post-processing is below:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_final2.jpg" data-swap-src="pano_+1EV.jpg" alt="Final result" width="960" height="457"> 
<figcaption>
Final result of the panorama editing (click to compare to simple +1EV exposure) 
</figcaption>
</figure>

<p>In this case I have used the brightest image for the foreground, the darkest one for the sky and clouds, and an exposure-fused one for a seamless transition between the two.</p>
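<p>For the curious: the exposure fusion that enfuse performs is, at its core, a per-pixel weighted average that favors well-exposed pixels (it follows the Mertens et al. approach, with multiresolution blending on top to avoid seams). The following toy sketch in Python is a deliberate simplification of that idea, not enfuse’s actual code; the function names are my own.</p>

```python
import math

def well_exposedness(v, sigma=0.2):
    # Gaussian weight peaking at mid-grey (0.5); near-black and
    # near-white pixels receive a low weight
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Fuse aligned exposures (equal-length lists of luminance values
    in [0, 1]) by a per-pixel well-exposedness-weighted average."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights)  # always > 0, since exp() is positive
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

# Three renderings of the same three pixels, as in the -1EV / 0EV / +1EV brackets
dark  = [0.05, 0.20, 0.45]
mid   = [0.10, 0.40, 0.90]
light = [0.20, 0.80, 0.99]
fused = fuse([dark, mid, light])  # each output pixel leans toward the best-exposed input
```

<p>A blown-out pixel in the bright frame gets almost no weight, so the dark frame dominates there, and vice versa, which is exactly the behavior exploited for the sky/foreground transition above.</p>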
<p>The rest of the post will show how to get there…</p>
<p>Before we continue, let me advise you that I’m not a pro, and that the tips and “recommendations” that I’ll be giving in this post are mostly derived from trial-and-error and common sense.
Feel free to correct/add/suggest anything… <strong>we are all here to learn</strong>! </p>
<h2 id="taking-the-shots">Taking the shots<a href="#taking-the-shots" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Shooting a panorama requires a bit of preparation and planning to make sure that one can get the best out of Hugin when stitching the shots together. Here is my personal “checklist”:</p>
<ul>
<li><strong>Manual Focus</strong> - set the camera to manual focus, so that the focus plane is the same for all shots</li>
<li><strong>Overlap Shots</strong> - make sure that each frame has sufficient overlap with the previous one (something between 1/3 and 1/2 of the total area), so that Hugin can find enough control points to align the images and determine the lens correction parameters</li>
<li><strong>Follow A Straight Line</strong> - when taking the shots, try to follow as much as possible a straight line (keeping for example the horizon at the same height in your viewfinder); if you have a tripod, use it!</li>
<li><strong>Frame Appropriately</strong> - to maximize the angle of view, frame vertically for a horizontal panorama (and vice-versa for a vertical one)</li>
<li><strong>Leave Some Room</strong> - frame the shots a bit wider than needed, to avoid bad surprises when cropping the stitched panorama</li>
<li><strong>Fixed Exposure</strong> - take all shots with a fixed exposure (manual or locked) to avoid luminance variations that might not be fully compensated by Hugin</li>
<li><strong>Bracket if Needed</strong> - if you shoot during a sunny day, the brightness might vary significantly across the whole panorama; in this case, take three or more bracketed exposures for each view (we will see later how to blend them in the post-processing)</li>
</ul>
<h2 id="processing-the-raw-files">Processing the RAW files<a href="#processing-the-raw-files" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>If you plan to create the panorama starting from the in-camera JPEG images, you can safely skip this section. On the other hand, if you are shooting RAW you will need to process and prepare all the input images for Hugin. In this case it is important to make sure that the RAW processing parameters are exactly the same for all the shots. It is best to adjust the parameters on one reference image, and then batch-process the rest of the images using those settings.</p>
<h3 id="using-photoflow">Using PhotoFlow<a href="#using-photoflow" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Loading and processing a RAW file is rather easy:</p>
<ol>
<li><p>Click the “Open” button and choose the appropriate RAW file from your hard disk; at this point the image preview area will show a grey and rather dark image</p>
</li>
<li><p>Add a “RAW developer” layer; a configuration dialog will show up which allows you to access and modify all the typical RAW processing parameters (white balance, exposure, color conversion, etc… see screenshots below).</p>
</li>
</ol>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_raw_wb2.png" width="380" height="409">
</figure>

<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_raw_exposure.png" width="380" height="243" > 
</figure>

<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_raw_demo.png" width="380" height="243" > 
</figure>

<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_raw_output.png" width="380" height="243" > 
</figure>

<p>More details on the RAW processing in PhotoFlow can be found in <a href="http://photoflowblog.blogspot.fr/2014/09/tutorial-how-to-process-raw-image-in.html">this tutorial</a>.</p>
<p>Once the result is OK, the RAW processing parameters need to be saved into a preset. This can be done in a couple of simple steps:</p>
<ol>
<li><p>Select the “RAW developer” layer and click on the “Save” button below the layers list widget (at the bottom-right of the PhotoFlow window)</p>
</li>
<li><p>A file chooser dialog will pop up; choose an appropriate file name and location for the preset and then click “Save”;<br><strong>the preset file name must have a “.pfp” extension</strong></p>
</li>
</ol>
<p>The saved preset then needs to be applied to all the RAW files in the set. Under Linux, PhotoFlow comes with a handy script that automates the process. The script is called <em>pfconv</em> and can be found <a href="https://github.com/aferrero2707/PhotoFlow/blob/master/tools/pfconv">here</a>. It is a wrapper around the <em>pfbatch</em> and <em>exiftool</em> commands, and is used to process and convert a bunch of files to TIFF format. Save the script in one of the folders included in your <code>PATH</code> environment variable (for example <code>/usr/local/bin</code>) and make it executable:</p>
<pre><code>sudo chmod u+x /usr/local/bin/pfconv
</code></pre><p>Processing all RAW files in a given folder is quite easy. Assuming that the RAW processing preset is stored in the same folder under the name <code>raw_params.pfp</code>, run these commands in your preferred terminal application:</p>
<pre><code>cd panorama_dir
pfconv -p raw_params.pfp *.NEF
</code></pre><p>Of course, you have to change <code>panorama_dir</code> to your actual folder and the <code>.NEF</code> extension to that of your RAW files.</p>
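<p>If you prefer to drive the batch step from a script of your own, the same per-folder conversion can be sketched in Python. This is only a hypothetical convenience wrapper around the <code>pfconv -p</code> invocation shown above (the helper names are mine, not part of PhotoFlow); it processes the files one at a time and stops on the first failure.</p>

```python
import subprocess
from pathlib import Path

def pfconv_cmd(preset, raw_file):
    # Build the same invocation as "pfconv -p raw_params.pfp file.NEF" above
    return ["pfconv", "-p", str(preset), str(raw_file)]

def batch_convert(folder, preset_name="raw_params.pfp", pattern="*.NEF"):
    """Run pfconv on every matching RAW file in `folder`; raises on the
    first conversion error instead of silently continuing."""
    folder = Path(folder)
    preset = folder / preset_name
    for raw in sorted(folder.glob(pattern)):
        subprocess.run(pfconv_cmd(preset, raw), check=True)
```

<p>Calling <code>batch_convert("panorama_dir")</code> is then equivalent to the shell commands above, with the small benefit that a failed conversion aborts the run rather than being buried in the output.</p>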
<p>Now go for a cup of coffee, and be patient… a panorama with three or five bracketed shots per view can easily comprise more than 50 files, and the processing can take half an hour or more. Once the processing has completed, there will be one TIFF file for each RAW image, and the fun with Hugin can start!</p>
<h2 id="assembling-the-shots">Assembling the shots<a href="#assembling-the-shots" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Hugin is a powerful and free software suite for stitching multiple shots into a seamless panorama, and more. Under Linux, Hugin can usually be installed through the package manager of your distribution. In the case of Ubuntu-based distros it can be installed with:</p>
<pre><code>sudo apt-get install hugin
</code></pre><p>If you are running Hugin for the first time, I suggest switching the interface type to <strong>Advanced</strong> in order to have full control over the available parameters. </p>
<p>The first steps have to be done in the <em>Photos</em> tab:</p>
<p><img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/hugin_1.png" width="667" height="500"> </p>
<ol>
<li><p>Click on <em>Add images</em> and load all the TIFF files included in your panorama. Hugin should automatically determine the lens focal length and the exposure values from the EXIF data embedded in the TIFF files. </p>
</li>
<li><p>Click on <em>Create control points</em> to let Hugin determine the anchor points that will be used to align the images and to determine the lens correction parameters so that all shots overlap perfectly. If the scene contains a large amount of clouds that have likely moved during the shooting, you can try setting the feature matching algorithm to <em>cpfind+celeste</em> to automatically exclude unreliable control points in the clouds.</p>
</li>
<li><p>Set the geometric parameters to <em>Positions and Barrel Distortion</em> and hit the <em>Calculate</em> button.</p>
</li>
<li><p>Set the photometric parameters to <em>High dynamic range, fixed exposure</em> (since we are going to stitch bracketed shots that have been taken with fixed exposures), and hit the <em>Calculate</em> button again.</p>
</li>
</ol>
<p>At this point we can have a first look at the assembled panorama. Hugin provides an OpenGL-based previewer that can be opened by clicking on the <em>GL</em> icon in the top toolbar (marked with the arrow in the above screenshot). This will open a window like this:</p>
<p><img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/hugin_2.png" width="689" height="417"> </p>
<p>If the shots have been taken handheld and are not perfectly aligned, the panorama will probably look a bit “wavy” like in my example. This can be easily fixed by clicking on the <em>Straighten</em> button (at the top of the <em>Move/Drag</em> tab). Next, the image can be centered in the preview area with the <em>Center</em> and <em>Fit</em> buttons.</p>
<p>If the horizon is still not straight, you can further correct it by dragging the center of the image up or down:</p>
<p><img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/hugin_3.png" width="690" height="417"> </p>
<p>At this point, one can switch to the <em>Projection</em> tab and play with the different options. I usually find the <em>Cylindrical</em> projection better than the <em>Equirectangular</em> that is proposed by default (the vertical dimension is less “compressed”). For architectural panoramas that are not too wide, the <em>Rectilinear</em> projection can be a good option since vertical lines are kept straight.</p>
<p><img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/hugin_4.png" width="690" height="398"> </p>
<p>If the projection type is changed, one has to click once more on the <em>Center</em> and <em>Fit</em> buttons.</p>
<p>Finally, you can switch to the <em>Crop</em> tab and click on the <em>HDR Autocrop</em> button to determine the limits of the area containing only valid pixels.</p>
<p>We are now done with the preview window; it can be closed and we can go back to the main window, in the <em>Stitcher</em> tab. Here we have to set the options to produce the output images the way we want. The idea is to blend each bracketed exposure into a separate panorama, and then use <strong>enfuse</strong> to create the final exposure-blended version. The intermediate panoramas, which will be saved along with the enfuse output, are already aligned with respect to each other and can be combined using different types of masks (luminosity, gradients, freehand, etc…).</p>
<p>The <em>Stitcher</em> tab has to be configured as in the image below, selecting <em>Exposure fused from any arrangement</em> and <em>Blended layers of similar exposure, without exposure correction</em>. I usually set the output format to <em>TIFF</em> to avoid compression artifacts.</p>
<p><img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/hugin_5.png" width="592" height="500"> </p>
<p>The final act starts by clicking on the <em>Stitch!</em> button. The input images will be distorted, corrected for the lens vignetting and blended into seamless panoramas. The whole process is likely to take quite long, so it is probably a good opportunity for taking a pause…</p>
<p>At the end of the processing, a few new images should appear in the output directory: one with a “_blended_fused.tif” suffix containing the output of the final enfuse step, and a few with an “_exposure_????.tif” suffix that contain the intermediate panoramas for each exposure value.</p>
<h2 id="blending-the-exposures">Blending the exposures<a href="#blending-the-exposures" class="header-link"><i class="fa fa-link"></i></a></h2>
<blockquote>
<p><em>Very often, photo editing is all about getting <strong>what your eyes have seen</strong> out of <strong>what your camera has captured</strong>.</em> </p>
</blockquote>
<p>The image that will be edited through this tutorial is no exception: the human visual system can “compensate” for large luminosity variations and can “record” scenes with a wider dynamic range than your camera sensor. In the following I will attempt to restore such a large dynamic range by combining under- and over-exposed shots together, in a way that does not produce unpleasant halos or artifacts. Nevertheless, I have intentionally pushed the edit a bit “over the top” in order to better show how far one can go with such a technique. </p>
<p>This second part introduces a certain number of quite general editing ideas, mixed with details specific to their realization in PhotoFlow. Most of what is described here can be reproduced in GIMP with little extra effort, but without the ease of non-destructive editing.</p>
<p>The steps that I followed to go from one to the other can be more or less outlined as follows:</p>
<ol>
<li><p>take the foreground from the +1EV version and the clouds from the -1EV version; use the exposure-blended Hugin output to improve the transition between the two exposures</p>
</li>
<li><p>apply an S-shaped tonal curve to increase the overall brightness and add contrast. </p>
</li>
<li><p>apply a combination of the <em>a</em> and <em>b</em> channels of the CIE-Lab colorspace in <strong>overlay</strong> blend mode to give more “pop” to the green and yellow regions in the foreground</p>
</li>
</ol>
<p>The image below shows side-by-side three of the output images produced with Hugin at the end of the first part. The left part contains the brightest panorama, obtained by blending the shots taken at +1EV. The right part contains the darkest version, obtained from the shots taken at -1EV. Finally, the central part shows the result of running the <strong>enfuse</strong> program to combine the -1EV, 0EV and +1EV panoramas. </p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_exp_comp.jpg" width="640" height="299">
<figcaption> Comparison between the +1EV exposure (left), the enfuse output (center) and the -1EV exposure (right) 
</figcaption> </figure>




<h3 id="exposure-blending-in-general">Exposure blending in general<a href="#exposure-blending-in-general" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>In scenes that exhibit strong brightness variations, one often needs to combine different exposures in order to compress the dynamic range so that the overall contrast can be further tweaked without the risk of losing details in the shadows or highlights.</p>
<p>In this case, the name of the game is “seamless blending”, i.e. combining the exposures in a way that looks natural, without visible transitions or halos.
In our specific case, the easiest thing would be to simply combine the +1EV and -1EV images through some smooth transition, like in the example below.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_+1EV_-1EV_blend.jpg" width="925" height="433" style="width: initial;"> 
<figcaption>
Simple blending of the +1EV and -1EV exposures 
</figcaption>
</figure>

<p>The result is not too bad; however, it is very difficult to avoid some brightening of the bottom part of the clouds (or alternatively some darkening of the hills), something that will most likely look artificial even if the effect is subtle (our brain will recognize that something is wrong, even if one cannot clearly explain the reason…). We need something to “bridge” the two images, so that the transition looks more natural. </p>
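In code, this naive blend is nothing more than a per-pixel linear interpolation driven by the mask. A minimal sketch in plain Python, with made-up single-channel values normalized to [0, 1]:

```python
def gradient_blend(bright, dark, mask):
    """Linear per-pixel blend: mask=0 keeps `bright`, mask=1 keeps `dark`.

    `bright`, `dark`: rows of pixel values in [0, 1] (e.g. the +1EV and
    -1EV panoramas), `mask`: transition weights, typically a smooth
    vertical gradient crossing the horizon.
    """
    return [(1.0 - m) * b + m * d for b, d, m in zip(bright, dark, mask)]

# Toy example: a 5-pixel column crossing the horizon, mask ramping 0 -> 1.
bright = [0.9, 0.8, 0.7, 0.6, 0.5]   # +1EV panorama (hills well exposed)
dark   = [0.5, 0.4, 0.3, 0.2, 0.1]   # -1EV panorama (sky well exposed)
mask   = [0.0, 0.25, 0.5, 0.75, 1.0]
print(gradient_blend(bright, dark, mask))
```

However smooth the ramp, every in-between pixel is a compromise between the two exposures, which is exactly the brightening/darkening problem described above.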
<p>At this point it is good to recall that the last step performed by Hugin was to call the <strong>enfuse</strong> program to blend the three bracketed exposures. The enfuse output is intermediate between the -1EV and +1EV versions; however, a side-by-side comparison with the 0EV image reveals the subtle and sophisticated work done by the program: the foreground hill is brighter and the clouds are darker than in the 0EV version. And even more importantly, this job is done without triggering any alarm in your brain! Hence, the enfuse output is a perfect candidate to improve the transition between the hill and the sky.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_enfuse.jpg" data-swap-src="pano_0EV.jpg" alt="Final result" width="960" height="449"> 
<figcaption> Enfuse output (click to see 0EV version) 
</figcaption> </figure>
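Under the hood, enfuse implements the exposure-fusion algorithm of Mertens, Kautz and Van Reeth. The sketch below is a drastic simplification using only the “well-exposedness” weight (the real program also weights by contrast and saturation, and blends across a multiresolution pyramid), but it shows the basic idea: each exposure contributes most where its pixels sit near mid-gray.

```python
import math

def well_exposedness(v, sigma=0.2):
    """Gaussian weight peaking at mid-gray: pixels near 0.5 count as
    well exposed, blown or crushed pixels get almost no weight."""
    return math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Per-pixel weighted average of several exposures of the same scene.

    `exposures`: list of equal-length pixel rows, values in [0, 1].
    """
    fused = []
    for pix in zip(*exposures):
        w = [well_exposedness(v) for v in pix]
        total = sum(w)
        fused.append(sum(wi * v for wi, v in zip(w, pix)) / total)
    return fused

# Three bracketed values of the same pixel: -1EV, 0EV, +1EV.
print(fuse([[0.15], [0.45], [0.95]]))  # pulled towards the well-exposed 0.45
```

This is why the enfuse result stays believable: no hand-drawn transition decides which exposure wins, the pixel values themselves do.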




<h3 id="exposure-blending-in-photoflow">Exposure blending in PhotoFlow<a href="#exposure-blending-in-photoflow" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>It is time to put everything together.
First of all, open <strong>PhotoFlow</strong> and load the +1EV image.
Next we need to add the enfuse output on top of it: for that you first need to add a new layer (<strong>1</strong>) and choose the <em>Open image</em> tool from the dialog that will open up (<strong>2</strong>) (see below).</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_add_layer_edit.png" width="960" height="578"> 
<figcaption> Inserting an image from disk as a layer
</figcaption> </figure>

<p>After clicking the “OK” button, a new layer will be added and the corresponding configuration dialog will be shown. There you can choose the name of the file to be added; in this case, choose the one ending with “_blended_fused.tif” among those created by Hugin:</p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_open_image_edit.png" width="469" height="235"> 
<figcaption> “Open image” tool dialog
</figcaption> </figure>



<h4 id="layer-masks-theory-a-bit-and-practice-a-lot-">Layer masks: theory (a bit) and practice (a lot)<a href="#layer-masks-theory-a-bit-and-practice-a-lot-" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>For the moment, the new layer completely replaces the background image. This is not the desired result: instead, we want to keep the hills from the background layer and only take the clouds from the “_blended_fused.tif” version. In other words, we need a <strong>layer mask</strong>.</p>
<p>To access the mask associated with the “enfuse” layer, double-click on the small gradient icon next to the name of the layer itself. This will open a new tab with an initially empty stack, where we can start adding layers to generate the desired mask.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_enfuse_before_blend_edit.png" width="960" height="581"> 
<figcaption>
How to access the grayscale mask associated to a layer
</figcaption>
</figure>

<p>In PhotoFlow, masks are edited the same way as the rest of the image: through a stack of layers that can be associated to most of the available tools. In this specific case, we are going to use a combination of gradients and curves to create a smooth transition that follows the shape of the edge between the hills and the clouds. The technique is explained in detail in <a href="https://www.youtube.com/watch?v=kapppq-PbTk">this screencast</a>.</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="960" height="540" src="https://www.youtube.com/embed/kapppq-PbTk?rel=0" frameborder="0" allowfullscreen></iframe>
</div>
</div>


<p>To avoid the boring and lengthy procedure of creating all the necessary layers, you can download  <a href="http://aferrero2707.github.io/PhotoFlow/data/presets/gradient_modulation.pfp">this preset file</a> and load it as shown below:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_enfuse_mask_initial.png" width="960" height="456"> 
</figure>

<p>The mask is initially a simple vertical linear gradient. At the bottom (where the mask is black) the associated layer is completely transparent and therefore hidden, while at the top (where the mask is white) the layer is completely opaque and therefore replaces anything below it. Everywhere in between, the layer has a degree of transparency equal to the shade of gray in the mask.</p>
<p>In order to show the mask, activate the “show active layer” radio button below the preview area, and then select the layer to be visualized. In the example above, I am showing the output of the topmost layer in the mask, the one called “transition”. Double-clicking on the name of the “transition” layer opens the corresponding configuration dialog, where the parameters of the layer (a <a href="http://photoflowblog.blogspot.fr/2014/09/tutorial-using-curves-tool-in-photoflow.html"><strong>curves</strong> adjustment</a> in this case) can be modified. The curve is initially a simple diagonal: output values exactly match input ones.</p>
<p>If the rightmost point in the curve is moved to the left, and the leftmost to the right, it is possible to modify the vertical gradient and reduce the width of the transition between pure black and pure white, as shown below:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_transition_example.jpg" width="960" height="581"> 
</figure>

<p>We are getting closer to our goal of revealing the hills from the background layer, by making the corresponding portion of the mask purely black. However, the transition we have obtained so far is straight, while the contour of the hills has a quite complex curvy shape… this is where the second <strong>curves</strong> adjustment, associated with the “modulation” layer, comes into play.</p>
<p>As one can see from the screenshot above, between the bottom gradient and the “transition” curve there is a group of three layers: a <strong>horizontal</strong> gradient, a modulation curve and an <strong>invert</strong> operation. Moreover, the group itself is combined with the bottom vertical gradient in <a href="http://docs.gimp.org/en/gimp-concepts-layer-modes.html"><strong>grain merge</strong></a> blending mode.</p>
<p>Double-clicking on the “modulation” layer reveals a tone curve which is initially flat: output values are always 50%, independent of the input. Since the output of this “modulation” curve is combined with the bottom gradient in <strong>grain merge</strong> mode, nothing happens for the moment. However, something interesting happens when a new point is added and dragged in the curve: the shape of the mask exactly matches the curve, like in the example below.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_modulation_example.jpg" width="960" height="581"> 
</figure>
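The arithmetic behind this mask can be sketched in a simplified scalar model. Grain merge on a 0..1 scale is bottom + top - 0.5 (so a constant 50% top layer is neutral, exactly as described above); the flat and step curves below are hypothetical stand-ins for the modulation and transition curves:

```python
def clip01(v):
    return max(0.0, min(1.0, v))

def grain_merge(bottom, top):
    """Grain-merge blend on a 0..1 scale: result = bottom + top - 0.5."""
    return clip01(bottom + top - 0.5)

def mask_value(x, y, modulation, transition):
    """Mask construction described in the text (coordinates in 0..1).

    `modulation(x)`: curve applied to a horizontal gradient; its inverted
    output shifts the vertical gradient up or down at each column.
    `transition(v)`: curve that steepens the final black-to-white ramp.
    """
    vgrad = y                           # vertical gradient: black bottom, white top
    shifted = grain_merge(vgrad, 1.0 - modulation(x))
    return transition(shifted)

# Hypothetical curves: a flat 50% modulation leaves the gradient untouched,
# and a hard step as transition turns the ramp into a sharp edge at y = 0.5.
flat = lambda x: 0.5
step = lambda v: 0.0 if v < 0.5 else 1.0
print(mask_value(0.3, 0.7, flat, step))  # above the edge -> white
print(mask_value(0.3, 0.3, flat, step))  # below the edge -> black
```

Dragging points on the modulation curve simply makes `modulation(x)` deviate from 50%, which raises or lowers the edge column by column.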




<h3 id="the-sky-hills-transition">The sky/hills transition<a href="#the-sky-hills-transition" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The technique introduced above is used here to create a precise and smooth transition between the sky and the hills. As you can see, with a sufficiently large number of points in the modulation curve one can precisely follow the shape of the hills:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_enfuse_mask.png" width="960" height="433"> 
</figure>

<p>The result of the blending looks like this (click the image to see the initial +1EV version):</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_enfuse_blended.jpg" data-swap-src="pano_+1EV.jpg" alt="Final result" width="690" height="328"> 
<figcaption>
Enfuse output blended with the +1EV image (click to see the initial +1EV version) 
</figcaption>
</figure>

<p>The sky already looks much denser and more saturated in this version, and the clouds have gained in volume and tonal variations. However, the -1EV image looks even better, therefore we are going to take the sky and clouds from it. </p>
<p><a name="sky_blend"></a>
To include the -1EV image we are going to follow the same procedure used for the enfuse output:</p>
<ol>
<li><p>add a new layer of type “Open image” and load the -1EV Hugin output (I’ve named this new layer “sky”)</p>
</li>
<li><p>open the mask of the newly created layer and add a transition that reveals only the upper portion of the image</p>
</li>
</ol>
<p>Fortunately we are not obliged to recreate the mask from scratch. PhotoFlow includes a feature called <strong>layer cloning</strong>, which allows you to <strong>dynamically</strong> copy the content of one layer into another one: the pixel data gets copied <em>on the fly</em>, such that the destination always reflects the most recent state of the source layer.</p>
<p>After activating the mask of the “sky” layer, add a new layer inside it and choose the “clone layer” tool (see screenshot below).</p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_clone_layer.png" width="640" height="487"> 
<figcaption>
Cloning a layer from one mask to another
</figcaption>
</figure>

<p>In the tool configuration dialog that will pop up, one has to choose the desired source layer among those proposed in the list under the label “Layer name”. The generic naming scheme of the layers in the list is “[root group name]/root layer name/OMap/[mask group name]/[mask layer name]”, where the items inside square brackets are optional. </p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_sky_mask_clone_layer.png" width="470" height="398"> 
<figcaption>
Choice of the clone source layer 
</figcaption>
</figure>

<p>In this specific case, I want to apply a smoother transition curve to the same base gradient already used in the mask of the “enfuse” layer. For that we need to choose “enfuse/OMap/gradient modulation (blended)” in order to clone the output of the “gradient modulation” group <strong>after the <em>grain merge</em> blend</strong>, and then add a new <strong>curves</strong> tool above the cloned layer:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_sky_mask.jpg" width="960" height="413"> 
<figcaption>The final transition mask between the hills and the sky
</figcaption>
</figure>

<p>The result of all the efforts up to now is shown below; it can be compared with the initial starting point by clicking on the image itself:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_sky_blended.jpg" data-swap-src="pano_+1EV.jpg" alt="Final result" width="690" height="322"> 
<figcaption>
Edited image after blending the upper portion of the -1EV version through a layer mask. Click to see the initial +1EV image.
</figcaption>
</figure>

<h2 id="contrast-and-saturation">Contrast and saturation<a href="#contrast-and-saturation" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>We are not quite done yet, as the image is still a bit too dark and flat; however, this version will “tolerate” a contrast and luminance boost much better than a single exposure. In this case I’ve added a <strong>curves</strong> adjustment at the top of the layer stack, and I’ve drawn an S-shaped RGB tone curve as shown below:</p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_tone_curve_edit.png" width="468" height="672"> 
</figure>

<p>The effect of this tone curve is to increase the overall brightness of the image (the middle point is moved to the left) and to compress the shadows and highlights without modifying the black and white points (i.e. the extremes of the curve). This curve definitely gives “pop” to the image (click to see the version before the tone adjustment):</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_contrast.jpg" data-swap-src="pano_sky_blended.jpg" alt="Final result" width="960" height="457"> 
<figcaption>
Result of the S-shaped tonal adjustment (click the image to see the version before the adjustment).
</figcaption>
</figure>

<p>However, this comes at the expense of an overall increase in color saturation, which is a typical side effect of RGB curves.
While this saturation boost looks quite nice in the hills, the effect is rather disastrous in the sky.
The blue has turned electric, and is far from what a nice, saturated blue sky should look like!</p>
<p>However, there is a simple fix to this problem: change the blend mode of the <strong>curves</strong> layer from <strong>Normal</strong> to <strong>Luminosity</strong>. 
The tone curve in this case only modifies the luminosity of the image, but preserves the original colors as much as possible.
The difference between normal and luminosity blending is shown below (click to see the <strong>Normal</strong> blending).
As one can see, the <strong>Luminosity</strong> blend tends to produce a duller image, therefore we will need to fix the overall saturation in the next step.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_contrast_lumi.jpg" data-swap-src="pano_contrast.jpg" alt="Luminosity blend" width="960" height="457"> 
<figcaption>
S-shaped tonal adjustment with <strong>Luminosity</strong> blend mode (click the image to see the version with <strong>Normal</strong> blend mode).
</figcaption>
</figure>
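The behaviour of the Luminosity blend can be modeled very compactly in a Lab-like representation: the result takes its lightness from the top (tone-curved) layer and its color from the base layer. The exact definition varies between editors, and the pixel values below are hypothetical, but the sketch captures why the electric blue disappears:

```python
def luminosity_blend(base_lab, top_lab):
    """Luminosity blend in a Lab-like model: take lightness (L) from the
    top layer and keep the base layer's color channels (a, b)."""
    L_base, a_base, b_base = base_lab
    L_top, _, _ = top_lab
    return (L_top, a_base, b_base)

base = (55.0, -5.0, -35.0)    # hypothetical original sky-blue pixel
curved = (68.0, -9.0, -52.0)  # after the RGB S-curve: brighter but "electric"
print(luminosity_blend(base, curved))  # -> (68.0, -5.0, -35.0)
```

The brightness boost of the S-curve survives, while the saturation push (the inflated a/b values) is discarded.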

<p>To adjust the overall saturation of the image, let’s now add a <strong>Hue/Saturation</strong> layer above the tone curve and set the saturation value to <strong>+50</strong>.
The result is shown below (click to see the <strong>Luminosity</strong> blend output).</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_saturation.jpg" data-swap-src="pano_contrast_lumi.jpg" alt="Saturation boost" width="960" height="457"> 
<figcaption>
Saturation set to <strong>+50</strong> (click the image to see the <strong>Luminosity</strong> blend output).
</figcaption>
</figure>

<p>This definitely looks better on the hills, however the sky is again “too blue”.
The solution is to decrease the saturation of the top part through an opacity mask.
In this case I have followed the same steps as for the mask of the <a href="#sky_blend">sky blend</a>, but I’ve changed the transition curve to the one shown here:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_saturation_mask.jpg" alt="Saturation mask" width="960" height="488">
</figure>

<p>In the bottom part the mask is pure white, and therefore the full <strong>+50</strong> saturation boost is applied. At the top the mask is instead only about 30%, and therefore the saturation is increased by only about <strong>+15</strong>. This gives a better overall color balance to the whole image:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_saturation_masked.jpg" data-swap-src="pano_contrast_lumi.jpg" alt="Saturation boost after mask" width="960" height="457"> 
<figcaption>Saturation set to <strong>+50</strong> through a transition mask (click the image to see the <strong>Luminosity</strong> blend output).
</figcaption>
</figure>




<h3 id="lab-blending">Lab blending<a href="#lab-blending" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The image is already quite OK, but I would still like to add some more tonal variations in the hills.
This could be done with lots of different techniques, but in this case I will use one that is very simple and straightforward, and that does not require any complex curve or mask since it uses the image data itself.
The basic idea is to take the <strong>a</strong> and/or <strong>b</strong> channels of the <a href="https://en.wikipedia.org/wiki/Lab_color_space"><strong>Lab</strong></a> colorspace, and combine them with the image itself in <strong>Overlay</strong> blend mode.
This will introduce <strong>tonal</strong> variations depending on the <strong>color</strong> of the pixels (since the <strong>a</strong> and <strong>b</strong> channels only encode the color information).
Here I will assume you are quite familiar with the Lab colorspace.
Otherwise, <a href="https://en.wikipedia.org/wiki/Lab_color_space">here</a> is the link to the Wikipedia page that should give you enough information to follow the rest of the tutorial.</p>
<p>Looking at the image, one can already guess that most of the areas in the hills have a yellow component, and will therefore be positive in the <strong>b</strong> channel, while the sky and clouds are neutral or strongly blue, and therefore have <strong>b</strong> values that are negative or close to zero. The grass is obviously green and therefore <strong>negative</strong> in the <strong>a</strong> channel, while the vineyards are brownish and therefore most likely have positive <strong>a</strong> values. In PhotoFlow the <strong>a</strong> and <strong>b</strong> values are re-mapped to a range between 0 and 100%, so that for example <strong>a=0</strong> corresponds to 50%. You will see that this is very convenient for channel blending.</p>
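The remapping mentioned above can be sketched like this. The nominal ±128 span for a/b is an assumption (PhotoFlow's exact internal scaling may differ); the essential property, stated in the text, is that a neutral color (a = 0) lands exactly at 50%:

```python
def remap_ab(v, ab_range=128.0):
    """Map a Lab a/b value (assumed to span roughly -128..+128) to the
    0..1 scale used for blending, with v = 0 (neutral) landing at 0.5."""
    return 0.5 + v / (2.0 * ab_range)

print(remap_ab(0.0))    # neutral color -> 0.5 (i.e. 50%)
print(remap_ab(-64.0))  # green/blue-leaning -> darker than 50%
print(remap_ab(64.0))   # red/yellow-leaning -> lighter than 50%
```

This is what makes the channels directly usable as blend layers: neutral areas sit at mid-gray, so they are inert in modes like Overlay.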
<p>My goal is to lighten the green and yellow tones, to create better contrast around the vineyards and add some “volume” to the grass and trees. Let’s first of all inspect the <strong>a</strong> channel: for that, we’ll need to add a group layer on top of everything (I’ve called it “ab overlay”) and then add a <strong>clone</strong> layer inside this group. The source of the clone layer is set to the <strong>a</strong> channel of the “background” layer, as shown in this screenshot:</p>
<figure>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_a_channel_clone.png" alt="a channel clone" width="470" height="263"> 
<figcaption>
Cloning of the Lab “a” channel of the background layer
</figcaption>
</figure>

<p>A copy of the <strong>a</strong> channel is shown below, with the contrast enhanced to better see the tonal variations (click to see the original versions):</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_a_contrast.jpg" data-swap-src="pano_a_channel.jpg" alt="Saturation boost after mask" width="960" height="457"> 
<figcaption>
The Lab <strong>a</strong> channel (boosted contrast)
</figcaption>
</figure>

<p>As we have already seen, in the <strong>a</strong> channel the grass is negative and therefore looks dark in the image above. If we want to lighten the grass we therefore need to invert it, to obtain this:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_a_invert_contrast.jpg" alt="Saturation boost after mask" width="960" height="457"> 
<figcaption> The inverted Lab <strong>a</strong> channel (boosted contrast)
</figcaption> </figure>

<p>Let’s now consider the <strong>b</strong> channel: as surprising as it might seem, the grass is actually more yellow than green, or at least the <strong>b</strong> channel values in the grass are higher than the inverted <strong>a</strong> values. In addition, the trees at the top of the hill stick out nicely from the clouds, much more than in the <strong>a</strong> channel. All in all, a combination of the two Lab channels seems best for what we want to achieve.</p>
<p>With one exception: the blue sky is very dark in the <strong>b</strong> channel, while the goal is to leave the sky almost unchanged. The solution is to blend the <strong>b</strong> channel into the <strong>a</strong> channel in <strong>Lighten</strong> mode, so that only the <strong>b</strong> pixels that are lighter than the corresponding <strong>a</strong> ones end up in the blended image. The result is shown below (click on the image to see the <strong>b</strong> channel).</p>
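The inversion and the Lighten blend amount to a simple per-pixel maximum. A sketch with hypothetical remapped channel values (0..1 scale, 0.5 = neutral color):

```python
def lighten(bottom, top):
    """Lighten blend: per pixel, keep whichever layer is lighter."""
    return [max(b, t) for b, t in zip(bottom, top)]

def invert(channel):
    """Invert a 0..1 channel, turning e.g. dark (negative-a) grass light."""
    return [1.0 - v for v in channel]

# Hypothetical pixels: grass, blue sky, vineyard.
a = [0.30, 0.45, 0.55]   # grass is negative in a (dark), sky near neutral
b = [0.70, 0.20, 0.60]   # grass is yellow, sky strongly blue (dark in b)
print(lighten(invert(a), b))
```

Note how the very dark sky value from the b channel is rejected (the inverted a value wins), while the lighter grass and vineyard values from b come through, which is exactly the behaviour the text relies on.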
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_b_lighten_contrast.jpg" data-swap-src="pano_b_contrast.jpg" alt="b channel lighten blend" width="960" height="457"> 
<figcaption>
<strong>b</strong> channel blended in <strong>Lighten</strong> mode (boosted contrast, click the image to see the <strong>b</strong> channel itself).
</figcaption>
</figure>

<p>And these are the blended <strong>a</strong> and <strong>b</strong> channels with the original contrast:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_b_lighten.jpg" alt="b channel lighten blend" width="960" height="457"> 
<figcaption>
The final <strong>a</strong> and <strong>b</strong> mask, without contrast correction
</figcaption>
</figure>

<p>The last act is to change the blending mode of the “ab overlay” group to <strong>Overlay</strong>: the grass and trees get some nice “pop”, while the sky remains basically unchanged:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_ab_overlay.jpg" data-swap-src="pano_saturation_masked.jpg" alt="ab overlay" width="960" height="457"> 
<figcaption> Lab channels overlay (click to see the image after the saturation adjustment).
</figcaption> </figure>
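The standard Overlay formula (as defined in, e.g., the W3C compositing specification; individual editors differ in details) explains why the sky is left alone: a mid-gray (50%) top value is exactly neutral, while light top values lighten already-light base pixels and dark ones darken dark pixels.

```python
def overlay(base, top):
    """Overlay blend on a 0..1 scale: multiplies in the shadows,
    screens in the highlights; top = 0.5 leaves the base unchanged."""
    if base < 0.5:
        return 2.0 * base * top
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - top)

print(overlay(0.6, 0.7))   # light grass pixel, high a/b value -> lightened
print(overlay(0.6, 0.5))   # neutral a/b value (e.g. sky) -> unchanged
print(overlay(0.3, 0.2))   # dark pixel, dark blend value -> darkened
```

Since neutral colors were remapped to 50%, the nearly-neutral sky acts as a 50% gray top layer and passes the image through untouched.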

<p>I’m now almost satisfied with the result, except for one thing: the Lab overlay makes the yellow area on the left of the image way too bright. The solution is a gradient mask (horizontal this time) associated with the “ab overlay” group, to exclude the left part of the image as shown below:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_ab_overlay_mask.jpg" alt="overlay blend mask" width="960" height="491">
</figure>

<p>The final, masked image is shown here, to be compared with the initial starting point:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_ab_overlay_masked.jpg" data-swap-src="pano_+1EV.jpg" alt="final result" width="960" height="457"> 
<figcaption> The image after the masked Lab overlay blend (click to see the initial +1EV version).
</figcaption> </figure>




<h2 id="the-final-touch">The Final Touch<a href="#the-final-touch" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Throughout the tutorial I have intentionally pushed the editing well beyond what I would personally find acceptable. The idea was to show how far one can go with the techniques I have described; fortunately, non-destructive editing allows us to retrace our steps and reduce the strength of the various effects until the result looks right.</p>
<p>In this specific case, I have lowered the opacity of the <strong>“contrast”</strong> layer to <strong>90%</strong>, that of the <strong>“saturation”</strong> layer to <strong>80%</strong>, and that of the <strong>“ab overlay”</strong> group to <strong>40%</strong>. Then, feeling that the <strong>“b channel”</strong> blend was still brightening the yellow areas too much, I reduced the opacity of the <strong>“b channel”</strong> layer to <strong>70%</strong>.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_adjusted_opacity.jpg" data-swap-src="pano_ab_overlay_masked.jpg" alt="opacity adjustment" width="960" height="457"> 
<figcaption> Opacities adjusted for a “softer” edit (click on the image to see the previous version).
</figcaption> </figure>

<p>Another thing I still did not like in the image was the overall color balance: the grass in the foreground looked a bit too <strong>“emerald”</strong> instead of <strong>“yellowish green”</strong>, therefore I thought that the image could profit from a general warming up of the colors. For that I have added a curves layer at the top of the editing stack, and brought down the middle of the curve in both the <strong>green</strong> and <strong>blue</strong> channels. The move needs to be quite subtle: I brought the middle point down from <strong>50%</strong> to <strong>47%</strong> in the greens and <strong>45%</strong> in the blues, and then I further reduced the opacity of the adjustment to <strong>50%</strong>. Here is the warmed-up version, compared with the image before:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_warmer.jpg" data-swap-src="pano_adjusted_opacity.jpg" alt="opacity adjustment" width="960" height="457"> 
<figcaption> “Warmer” version (click to see the previous version)
</figcaption> </figure>
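Bringing a channel’s midpoint down from 50% to 47% is a tiny shift. One simple way to model such a curve is a gamma function chosen so that an input of 0.5 maps to the new midpoint, with 0 and 1 left fixed (a rough stand-in for the actual spline curves a curves tool uses):

```python
import math

def midpoint_gamma(new_mid):
    """Gamma exponent that maps an input of 0.5 to new_mid (0 and 1 stay fixed)."""
    return math.log(new_mid) / math.log(0.5)

def apply_curve(value, new_mid):
    """Apply the midpoint-shifting gamma curve to one channel value in [0, 1]."""
    return value ** midpoint_gamma(new_mid)

# Green midpoint 50% -> 47%, blue 50% -> 45%; red is left alone,
# so the image drifts slightly warm.
g = apply_curve(0.5, 0.47)
b = apply_curve(0.5, 0.45)
```

Halving the opacity of the adjustment layer afterwards, as in the text, roughly halves the shift again, which is how such a small move stays subtle.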

<p>At this point I was almost satisfied. However, I still found that the green stuff at the bottom-right of the image drew too much attention and distracted the eye. Therefore I darkened the bottom of the image with a slightly curved gradient applied in <strong>“soft light”</strong> blend mode. The gradient was created with the same technique used for blending the various exposures. The transition curve is shown below: in this case, the top part was set to <strong>50% gray</strong> (remember that we blend the gradient in <strong>“soft light”</strong> mode) and the bottom part was moved a bit below 50% to obtain a slight darkening effect:</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pf_vignetting.png" alt="vignetting gradient" width="960" height="415"> 
<figcaption>
Gradient used for darkening the bottom of the image.
</figcaption>
</figure>
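The reason 50% gray is the “neutral” value here is visible in the soft-light formula itself. This sketch uses the simple Pegtop variant of soft light (blend modes differ between programs, so PhotoFlow’s exact formula may not match):

```python
def soft_light(base, blend):
    """Pegtop soft-light blend: blend = 0.5 leaves the base unchanged,
    below 0.5 darkens, above 0.5 lightens (all values in [0, 1])."""
    return (1 - 2 * blend) * base ** 2 + 2 * blend * base

# 50% gray at the top of the gradient is a no-op; a value a bit
# below 0.5 at the bottom gently darkens the foreground.
top = soft_light(0.6, 0.5)     # unchanged
bottom = soft_light(0.6, 0.4)  # slightly darker
```

With `blend = 0.5` the first term vanishes and the base passes through untouched, which is exactly why the top of the gradient was set to 50% gray.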

<p><strong>It’s done!</strong> If you managed to follow me to the end, you are now rewarded with the final image in all its glory, which you can once again compare with the initial starting point.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/a-blended-panorama-with-photoflow/pano_final2.jpg" data-swap-src="pano_+1EV.jpg" alt="final result" width="960" height="457"> 
<figcaption> 
The final image (click to see the initial +1EV version).
</figcaption>
</figure>

<p>It has been quite a long journey to get here… and I hope I haven’t lost too many followers along the way!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Basic Landscape Exposure Blending with GIMP and G'MIC ]]></title>
            <link>https://pixls.us/articles/basic-landscape-exposure-blending-with-gimp-and-g-mic/</link>
            <guid isPermaLink="true">https://pixls.us/articles/basic-landscape-exposure-blending-with-gimp-and-g-mic/</guid>
            <pubDate>Tue, 09 Jun 2015 15:34:49 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/basic-landscape-exposure-blending-with-gimp-and-g-mic/basic landscape exposure blend lede.jpg" /><br/>
                 <h1>Basic Landscape Exposure Blending with GIMP and G'MIC</h1>  
                 <h2>Exploring exposure blending entirely in GIMP</h2>   
                <p>Photographer <a href="http://lightsweep.co.uk/">Ian Hex</a> had previously explored the topic of exposure blending with us by <a href="https://pixls.us/articles/luminosity-masking-in-darktable/">using luminosity masks in darktable</a>.
For his first <em>video</em> tutorial he’s revisiting the subject entirely in <a href="http://www.gimp.org">GIMP</a> and <a href="http://gmic.eu">G’MIC</a>.</p>
<!-- more -->
<div class="big-vid">
<div class="fluid-vid">
<iframe width="1280" height="720" src="https://www.youtube-nocookie.com/embed/OmwnHoIP2vE?rel=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>Have a look and let him know what you think in the forum.
He’s promised more if he gets a good response from people - so let’s give him some encouragement!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Interesting Usertest and Incoming ]]></title>
            <link>https://pixls.us/blog/2015/06/interesting-usertest-and-incoming/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/06/interesting-usertest-and-incoming/</guid>
            <pubDate>Sat, 06 Jun 2015 01:00:37 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/06/interesting-usertest-and-incoming/pano_heading.jpg" /><br/>
                 <h1>Interesting Usertest and Incoming</h1>  
                 <h2>A view of someone using the site and contributing</h2>   
                <p>I ran across a neat website the other day for getting actual user feedback when viewing your website: <a href="http://www.usertesting.com/">UserTesting</a>.
They have a free option called <a href="http://peek.usertesting.com/">peek</a> that records a short (~5 min.) screencast of a user visiting the site and narrating their impressions.</p>
<figure>
<img src="https://pixls.us/blog/2015/06/interesting-usertest-and-incoming/peeklogo.png" alt="Peek Logo" >
</figure>

<p>You can imagine this to be quite interesting to someone building a site.</p>
<!-- more -->
<p>It appears the service asks its testers to answer three specific questions (I am assuming this is for the free service mainly):</p>
<ul>
<li>What is your first impression of this web page? What is this page for?</li>
<li>What is the first thing you would like to do on this page?
Please go ahead and try to do that now.
Please describe your experience.</li>
<li>What stood out to you on this website?
What, if anything, frustrated you about this site?
Please summarize your thoughts regarding this website.</li>
</ul>
<p>Here’s the actual video they sent me (can also be found <a href="http://peek.usertesting.com/result/40917409038587">on their website</a>):</p>
<div class="fluid-vid">
<iframe width="640" height="360" src="https://www.youtube-nocookie.com/embed/p3CBdw6E9bc?rel=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>
</div>

<p>I don’t have much to say about the testing.
It was very insightful and helpful to hear someone’s view when coming to the site fresh.
I’m glad that my focus on simplicity is appreciated!</p>
<p>It was interesting that the navigation drawer wasn’t used, or found, until the very end of the session.
It was also interesting to hear the tester’s thoughts about scrolling down the main page (is it so rare these days for content to be longer than a single screen - above the fold?).
<h2 id="exposure-blended-panorama-coming-soon"><a href="#exposure-blended-panorama-coming-soon" class="header-link-alt">Exposure Blended Panorama Coming Soon</a></h2>
<p>The creator of new processing project <a href="http://photoflowblog.blogspot.com/">PhotoFlow</a>, Andrea Ferrero, is being kind enough to take a break from coding to write a new tutorial for us: <em>“Exposure Blended Panoramas with Hugin and Photoflow”</em>!</p>
<p>I’ve been collaborating with him on getting things in order to publish and this looks like it’s going to be a fun tutorial!</p>
<h2 id="submitting"><a href="#submitting" class="header-link-alt">Submitting</a></h2>
<p>We’ve been talking back and forth trying to find a good workflow for contributors to be able to provide submissions as easily as possible.
At the moment I translate any submissions into <a href="http://daringfireball.net/projects/markdown/syntax">Markdown</a>/<a href="https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/HTML5">HTML</a> as needed from whatever source the author decides to throw at me.  This is less than ideal (but at least it’s nice and easy for authors - which is more important to me than having to port them manually).</p>
<h3 id="github-submissions"><a href="#github-submissions" class="header-link-alt">Github Submissions</a></h3>
<p>For those comfortable with <a href="https://git-scm.com/">Git</a> and <a href="https://github.com">Github</a> I have created a neat option to submit posts.
You can fork my <a href="https://github.com/patdavid/PIXLSUS">PIXLS.US repository</a> from here:</p>
<p><a href="https://github.com/patdavid/PIXLSUS">https://github.com/patdavid/PIXLSUS</a></p>
<p>Just follow the instructions on that page, and issue a pull request when you’re done.
Simple! :)
You may want to communicate with me to let me know the status of the submission, in case you’re still working on it, or it’s ready to be published.</p>
<h3 id="any-old-files"><a href="#any-old-files" class="header-link-alt">Any Old Files</a></h3>
<p>Of course, if you want to submit some content, please don’t feel you have to use Github if you’re not comfortable with it.
Feel free to write it any way that works best for you (as I said, my native build files are usually simple Markdown).
You can also reach out to me and let me know what you may be thinking ahead of time, as I might be able to help out.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ A New (Old) Tutorial ]]></title>
            <link>https://pixls.us/blog/2015/05/a-new-old-tutorial/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/05/a-new-old-tutorial/</guid>
            <pubDate>Wed, 27 May 2015 18:32:07 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/05/a-new-old-tutorial/Mairi Deux 3.jpg" /><br/>
                 <h1>A New (Old) Tutorial</h1>  
                 <h2>Revisiting an Open Source Portrait (Mairi)</h2>   
                <p>A little while back I had attempted to document a shoot with my friend and model, Mairi.
In particular I wanted to capture a start-to-finish workflow for processing a portrait using free software.
There are often many tutorials for individual portions of a retouching process but rarely do they get seen in the context of a full workflow.</p>
<p>The results became a <a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-equipment.html" title="An Open Source Portrait (Equipment)">two</a>-<a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-postprocessing.html" title="An Open Source Portrait (Postprocessing)">part</a> post on my blog.
For posterity (as well as for those who may have missed it the first time around) I am republishing the second part of the tutorial <a href="https://pixls.us/articles/an-open-source-portrait-mairi/"><em>Postprocessing</em></a> here.</p>
<!-- more -->
<p>Though the post was originally published in 2013 the process it describes is still quite current (and mostly still my same personal workflow).
This tutorial covers the retouching in post while the <a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-equipment.html" title="An Open Source Portrait (Equipment)">original article</a> about setting up and conducting the shoot is still over on my personal blog.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Sharpen-Wavelet-2.jpg" alt="Mairi Portrait Final"/>
<figcaption>
The finished result from the tutorial.<br>by Pat David (<a class='cc' href='https://creativecommons.org/licenses/by-sa/2.0/'>cba</a>).
</figcaption>
</figure>

<p>The tutorial may read a little long but the process is relatively quick once it’s been done a few times.
Hopefully it proves to be helpful to others as a workflow to use or tweak for their own process!</p>
<h2 id="coming-soon"><a href="#coming-soon" class="header-link-alt">Coming Soon</a></h2>
<p>I am still working on getting some sample shots to demonstrate the previously mentioned <a href="https://discuss.pixls.us/t/noise-free-shadows-dual-exposure/204">noise free shadows</a> idea using dual exposures.
I just need to find some sample shots that will be instructive while still at least being something nice to look at…</p>
<p>Also, another guest post is coming down the pipes from the creator of <a href="http://photoflowblog.blogspot.com/">PhotoFlow</a>, Andrea Ferrero!
He’ll be talking about creating blended panorama images using <a href="http://hugin.sourceforge.net/">Hugin</a> and PhotoFlow.
Judging by the results on his sample image, this will be a fun tutorial to look out for!</p>
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/05/a-new-old-tutorial/pano-sample.jpg">
</figure>



  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ An Open Source Portrait (Mairi) ]]></title>
            <link>https://pixls.us/articles/an-open-source-portrait-mairi/</link>
            <guid isPermaLink="true">https://pixls.us/articles/an-open-source-portrait-mairi/</guid>
            <pubDate>Mon, 18 May 2015 17:04:49 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi.jpg" /><br/>
                 <h1>An Open Source Portrait (Mairi)</h1>  
                 <h2>Processing a portrait session</h2>   
                <p>This is an article I had written long ago (<a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-postprocessing.html">originally published</a> in 2013).
The material is still quite relevant and the workflow hasn’t really changed, so I am republishing it here for posterity and those that may have missed it the first time around.</p>
<p><a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-equipment.html">The previous post</a> for this article went over the shoot that led to this image.</p>
<ul>
<li><a href="#picking-your-image">Picking Your Image</a></li>
<li><a href="#raw-processing">RAW Processing</a><ul>
<li><a href="#adjust-exposure">Adjust Exposure</a><ul>
<li><a href="#exposure-compensation">Exposure Compensation</a></li>
<li><a href="#black-point">Black Point</a></li>
</ul>
</li>
<li><a href="#white-balance">White Balance</a></li>
<li><a href="#noise-reduction-amp-sharpening">Noise Reduction</a></li>
<li><a href="#in-summary">In Summary</a></li>
</ul>
</li>
<li><a href="#gimp-retouching">GIMP Retouching</a><ul>
<li><a href="#touchup-flyaway-hairs">Touchup Hair</a></li>
<li><a href="#fixing-the-background-amp-cropping">Fixing the Background/Cropping</a></li>
<li><a href="#skin-retouching-with-wavelet-decompose">Skin Retouching &amp; Wavelet Decompose</a></li>
<li><a href="#contour-painting-highlights">Contour Painting Highlights</a></li>
<li><a href="#color-curves">Color Curves</a></li>
<li><a href="#sharpening">Sharpening</a></li>
</ul>
</li>
<li><a href="#finally-at-the-end">The End</a></li>
</ul>
<p>If you’d like to follow along with the image of Mairi, you can download the files from the links below.</p>
<p class="aside" style="font-size: 1rem;">
<a href="https://docs.google.com/uc?export=download&amp;id=0B21lPI7Ov4CVNUk1Y01HQUNPckk">Download the .ORF RAW file [Google Drive]</a><br><a href="Mairi-RAW-Final.jpg">Download the full resolution .JPG output from RawTherapee.</a><br><a href="https://docs.google.com/uc?export=download&amp;id=0B21lPI7Ov4CVMl9lZFJWb1Rxa3c">Download the Full Resolution .XCF file [.7zip - 265MB]</a><br>If you want to use the .XCF file just to see what I did, I recommend the ½ resolution file, as it’s smaller: 
<a href="https://docs.google.com/uc?export=download&amp;id=0B21lPI7Ov4CVaXA4bkNJdDhGRkU">Download the ½ Resolution .XCF file [.7zip - 60MB]</a><br><small><em>These files are being made available under a <a href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution, Non-Commercial, Share Alike</a> license (<a href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">CC BY-NC-SA</a>).</em></small>
</p>


<p>To whet your appetite, here is the final result of all of the postprocessing done in this tutorial (click to compare it to no retouching):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Sharpen-Wavelet-2.jpg" data-swap-src="Mairi-RAW-Final.jpg" alt="Mairi Final Result" width="598" height="800" />
<figcaption>
The final result I’m aiming for.<br>Click to compare to original.
</figcaption>
</figure>

<hr>
<h2 id="picking-your-image">Picking Your Image<a href="#picking-your-image" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>This is a hard thing to quantify, as each of us is driven by our own vision and style.
In my case, I wanted something a little more somber looking with a focus on her eyes (<em>they are the window to the soul,</em> right?).
There’s just something I like about big, bright eyes in a portrait, particularly in women.</p>
<p>I also personally liked the grey sweater against the grey background.
I felt that it put more focus on the colors of her skin, hair, and eyes.
That pretty much narrowed me down to this contact sheet:</p>
<figure class="big-vid">
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/contact-grey.jpg" alt="Mairi contact sheet" width="960" height="902">
<figcaption>
    Narrowing it down to this set.
</figcaption>
</figure>

<p>Looking over the shots, I decided I liked the images with the hood up, but her hair down and flowing around her.
This puts me in the top two rows, with only a few left to decide upon.
At this point I narrowed it down to one that I liked best - grey sweater, hood up but not pulled back against her head, hair flowing out of it, and big eyes.</p>
<p>This is pretty common, I’d imagine.
You can grab several frames, but in the end hopefully just the right amount of small details will come together and you’ll find something that you really like.
In my case it was this one:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/P2160427.jpg" alt="Mairi Raw" width="600" height="800">
<figcaption>
    I finally decided on this shot based on the color, hair, eyes, and slight smile.
</figcaption>
</figure>

<p><strong>Now hold on a minute</strong>. The image above is the JPG straight out of the camera.
As you can see, I’ve underexposed this one a little bit, and the colors are not anywhere near where I’d like them to be.
If you’re following along <em>don’t download this version of the image</em>.
I’ll have a much better starting JPG after we run it through some RAW development first!</p>
<p>If you’re impatient, <a href="#raw-summary">jump to that section</a> and get the image there.</p>
<h2 id="raw-processing">Raw Processing<a href="#raw-processing" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are a few RAW conversion options out there in the land of F/OSS.
Here’s a small list of popular ones to peruse:</p>
<ul>
<li><a href="http://www.rawtherapee.com">RawTherapee</a></li>
<li><a href="http://www.darktable.org/">darktable</a></li>
<li><a href="http://ufraw.sourceforge.net/">UFRaw</a></li>
<li><a href="http://photivo.org/">Photivo</a></li>
<li><a href="http://aferrero2707.github.io/PhotoFlow/">PhotoFlow</a></li>
</ul>
<p>One of the reasons I love using F/OSS is that the software is (usually) available across all of my operating systems.
In my case I went with RawTherapee a while back and liked it, so I’ve stuck with it so far (even though I had to build my own OS X versions).</p>
<p>So, my workflow includes RawTherapee at this point.
You should be able to follow along in other converters, but I’m going to focus on RT because that’s what I’m using.
If you shoot only in JPG (seriously, use RAW if you can), you can skip this section and head directly down to <a href="#GIMP">GIMP Retouching</a>.</p>
<h3 id="load-it-up">Load it up<a href="#load-it-up" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>After starting up RawTherapee, you’ll be in the <strong>File Browser</strong> interface, waiting for you to select a folder of images.
You can navigate to your folder of images through the file browser on the left side of the window.
It may take a bit while RawTherapee generates thumbnails of all the images in your directory.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-file-browser.png" alt="RawTherapee File Browser" width="600" height="369">
<figcaption>
RawTherapee file browser view.<br>(Navigate folders on the left pane)
</figcaption>
</figure>

<p>Once you’ve located your image, double clicking it in the main window will open it up for editing.
If you’re using a default install/options on RT, chances are a “Default” profile will be applied to your image that has <strong>Auto Levels</strong> turned on.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-RT-Default.jpg" alt="Mairi RawTherapee Default" width="598" height="800">
<figcaption>
The base image with “Default” profile applied (auto levels).
</figcaption>
</figure>

<p>Chances are that <strong>Auto Levels</strong> will not look very good.
My <strong>Default</strong> processing profile usually does not look so hot (no noise reduction, auto levels, etc.).
That’s ok, because we are going to fix this right up in the next few sections.</p>
<h3 id="adjust-exposure">Adjust Exposure<a href="#adjust-exposure" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I like to control the exposure and processing on my RAW images.
Auto Levels may work for some, but once you get used to a few basic corrections and how to use them, it’s relatively quick and painless to dial in something you like.</p>
<p class="aside">Again - much of what I’m going to describe is subjective, and will depend on personal taste and vision.
This just happens to be how I work; adjust as needed for your own workflow. :)</p>

<p>To give me a good starting point I will usually remove all adjustments to the image, and reset everything back to zero.
This is easy to do as my <strong>Default</strong> profile has nothing done to it other than <strong>Auto Levels</strong>.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Exposure-Default.png" alt="RawTherapee Default Exposure Values" width="284" height="845">
<figcaption>
Auto Levels values on the Exposure panel.
</figcaption>
</figure>

<p>A quick and easy way to reset the <strong>Exposure</strong> values on the <strong>Exposure</strong> panel is to use the <b style="color:#20a020;">Neutral button</b> on that panel (I’ve outlined it in <b style="color:#20A020;">green</b> above).
You can also hit the small “undo” arrows next to each slider to set that slider back to zero as well.</p>
<p>At this point the image exposure is set to a baseline we can begin working on.
For reference, here is my image after zeroing out all of the exposure sliders and the saturation:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-RT-Zeroed.jpg" alt="Mairi RawTherapee Zero Values" width="598" height="800">
<figcaption>
With all exposure adjustments (and saturation) set to zero.
</figcaption>
</figure>




<h4 id="exposure-compensation">Exposure Compensation<a href="#exposure-compensation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The first thing I’ll begin adjusting is the <em>Exposure Compensation</em> for the image.
Pay careful attention to the histogram so you know what your <em>Exposure Compensation</em> adjustments are doing, and to keep from blowing things out.</p>
<p>I personally begin pushing the <em>Exposure Compensation</em> until one of the RGB channels just begins butting up against the right side of the histogram.
Here is what the histogram looks like for the neutral exposure:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Histogram-Neutral.png" alt="RawTherapee Neutral Histogram" width="282" height="155">
<figcaption>
Neutral exposure histogram.
</figcaption>
</figure>

<p>After adjusting <em>Exposure Compensation</em> I get the Red channel snug up against the right side of the histogram:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Histogram-Exp-Comp.png" alt="RawTherapee Histogram Exposure Compensation" width="282" height="155">
<figcaption>
<em>Exposure Compensation</em> until the values just touch the right side.
</figcaption>
</figure>

<p>If you go a little too far, you’ll notice one of the channels will spike against the side, and if you really go too far, you’ll get a small colored box in the upper right corner indicating that channel has gone out of range (is blown out).</p>
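In a linear working space, exposure compensation amounts to multiplying each value by 2 raised to the EV amount; push it too far and a channel runs past the top of the range, which is what the histogram spike is warning about. A rough sketch of the idea (not RawTherapee’s actual pipeline, which also handles highlight recovery and tone curves):

```python
def exposure_compensate(linear_value, ev):
    """Scale a linear-light value in [0, 1] by 2**ev, clipping at the top."""
    return min(linear_value * 2 ** ev, 1.0)

underexposed = 0.18                          # a midtone left too dark
pushed = exposure_compensate(underexposed, 2.4)
blown = exposure_compensate(0.30, 2.4)       # this one clips to 1.0
```

Each +1 EV doubles the values, so a +2.4 EV push lifts them by a factor of about 5.3; anything that started above roughly 0.19 gets clipped.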
<p>So here is what my image looks like now with only the <em>Exposure Compensation</em> adjusted to a better range:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-RT-Exp-Comp.jpg" alt="Mairi RawTherapee Exposure Compensation" width="598" height="800">
<figcaption>
<em>Exposure Compensation</em> adjusted to 2.40.
</figcaption>
</figure>

<p>The <strong>Exposure</strong> panel in RT now looks like this (only the <em>Exposure Compensation</em> has been adjusted):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Exposure-Exp-Comp.png" alt="RawTherapee Exposure Compensation Panel" width="286" height="630">
<figcaption>
<em>Exposure Compensation</em> set to 2.40 for this image.
</figcaption>
</figure>

<p>If the highlights in your image begin to get slightly out of range, you may need to make adjustments to the <strong>Highlight recovery amount/threshold</strong>, but in my case the image was slightly under-exposed, so I kept it zero.</p>
<p>There is also a great visual method of seeing where the exposure for each channel sits, and of avoiding highlight/shadow clipping.
Along the top of your main image window, to the right, there are some icons that look like this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Clipping-Channels.png" alt="RawTherapee Clipping Channels" width="326" height="40">
<figcaption>
<i style="color:rgb(0,255,255); background-color: gray;">Channel previews</i>, <i style="color:rgb(255,0,255); background-color: gray;">Highlight</i> &amp; <i style="color:rgb(255,255,0); background-color: gray;">Shadow</i> clipping indicators
</figcaption>
</figure>

<p>The <i style="color:rgb(0,255,255); background-color: gray;">Channel previews</i> let’s you individually toggle each of the R,G,B, and Luminosity previews for the image.
You can use these with the <i style="color:rgb(255,0,255); background-color: gray;">Highlight</i> and <i style="color:rgb(255,255,0); background-color: gray;">Shadow</i> clipping indicators to see which channels are clipping and where.</p>
<p><i style="color:rgb(255,0,255); background-color: gray;">Highlight</i> and <i style="color:rgb(255,255,0); background-color: gray;">Shadow</i> clipping indicators will visually show you on your image where the values go beyond the threshold for each.
For highlights, it’s any values that are greater than <strong>253</strong>, and for shadows it’s any values that are lower than <strong>8</strong>.</p>
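The indicators are, in effect, simple threshold tests on 8-bit values. A hypothetical sketch of that check, using the 253 and 8 thresholds given above:

```python
HIGHLIGHT_CLIP = 253  # 8-bit values above this count as blown highlights
SHADOW_CLIP = 8       # 8-bit values below this count as crushed shadows

def clipping_flags(pixel_rgb):
    """Return (highlight_clipped, shadow_clipped) for one 8-bit RGB pixel."""
    highlight = any(v > HIGHLIGHT_CLIP for v in pixel_rgb)
    shadow = any(v < SHADOW_CLIP for v in pixel_rgb)
    return highlight, shadow

bright_skin = clipping_flags((254, 200, 180))   # highlight flag trips
dark_sweater = clipping_flags((5, 10, 12))      # shadow flag trips
```

A single channel going out of range is enough to trip the flag, which is why the overlays can light up on areas that still look fine overall.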
<p>To illustrate, here is what my image looks like in RT with the <em>Exposure Compensation</em> set to 2.40 from above:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-RT-Clipping.jpg" alt="Mairi RawTherapee Clipping Channels" width="598" height="800">
<figcaption>
With Highlight &amp; Shadow clipping turned on.
</figcaption>
</figure>

<p>I don’t mind the shadows clipping in the dark regions of the image, though I can make adjustments to the <strong>Black Point</strong> (below) to modify that.
The highlight clipping on her face is of more concern to me.
I certainly don’t want that!</p>
<p>At this point I can dial in my <em>Exposure Compensation</em> for the highlights by backing it down slightly.
As I ease it off, I should see the dark patch of <em>Highlight Clipping</em> shrink.
I’ll stop when it’s either all gone, or just about gone.</p>
<p>I wasn’t too far off in my initial adjustment, and only had to back the <em>Exposure Compensation</em> off to <strong>2.30</strong> to remove most of the highlight clipping.</p>
<p>Settings so far (everything else zero)…</p>
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Exposure Compensation</td>
<td>2.30</td>
</tr>
</tbody>
</table>
<hr>
<h4 id="black-point">Black Point<a href="#black-point" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>At this point I will usually zoom in a bit on a shadow area of my image that includes dark/black tones.
The blacks feel a little flat to me, and I’m going to increase the black level just a bit to darken them up.</p>
<p>I want to be zoomed in a bit so I can tell at which point the black point starts crushing details that I still want visible.
You want your blacks to be dark, but you also want to keep detail in the shadows (exactly where that point lies is really, really subjective, but I’ll err on the conservative side since I am still going to adjust colors a little in GIMP later).</p>
<p>Starting with a <strong>Black</strong> point of zero:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-Detail-Black-0.jpg" alt="Mairi Detail Black 0" width="600" height="600">
</figure>

<p>I will increase the <strong>Black</strong> point while keeping an eye on those shadow details, increasing it until I like how the blacks look and I haven’t destroyed detail in the dark tones.
I finally settled on a <strong>Black</strong> value of 150 as seen here:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-Detail-Black-150.jpg" data-swap-src='Mairi-Detail-Black-0.jpg' alt="Mairi Detail Black 150" width="600" height="600">
<figcaption>
Black value set at 150 (still keeping sweater details in the shadows).<br>Click to compare to previous.
</figcaption>
</figure>
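RawTherapee’s <strong>Black</strong> slider uses its own internal scale (the 150 above is not an 8-bit pixel value), but the underlying idea of raising the black point can be sketched as a linear remap where everything at or below the new black point gets crushed to zero:

```python
def raise_black_point(value, black):
    """Linearly remap [black, 255] to [0, 255]; anything at or below
    the new black point is crushed to pure black (8-bit values)."""
    if value <= black:
        return 0
    return round((value - black) * 255 / (255 - black))

# Deep sweater detail at level 30 survives a modest black point,
# but is crushed once the black point passes it.
kept = raise_black_point(30, 20)
crushed = raise_black_point(30, 40)
```

This is exactly the trade-off described above: the higher the black point, the richer the blacks, but every shadow tone below it collapses into them.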

<p>Watch out for <em>Shadow Recovery</em> when you first start adjusting the <em>Black Point</em>.
Its default may be a value other than zero (mine is at 50), and the <strong>Neutral</strong> button won’t set it back to zero (resetting the slider returns it to its default value of 50).
You may want to push it manually to zero, and if you feel you want to bump shadow details a bit, <em>then</em> start pushing it up.</p>
<p>I know things look noisy at the moment, but we’ll deal with that in the next section (there is no noise reduction being applied at this point).</p>
<p>Settings so far (everything else zero)…</p>
<table>
<thead>
<tr>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Exposure Compensation</td>
<td>2.30</td>
</tr>
<tr>
<td>Black</td>
<td>150</td>
</tr>
</tbody>
</table>
<hr>
<h4 id="brightness-contrast-and-saturation">Brightness, Contrast, and Saturation<a href="#brightness-contrast-and-saturation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>For this image I didn’t feel the need to modify these values, but this is purely subjective (<em>again</em>).
If you do modify these values, keep an eye on the histogram and what it’s doing to keep things from getting out of range/whack again.</p>
<h3 id="white-balance">White Balance<a href="#white-balance" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Hopefully you had the right <strong>White Balance</strong> set during your shoot in camera.
If not, it’s ok - we’re shooting in RAW so we can just set it as needed now.</p>
<p>I happen to have had my in-camera WB set to <em>Flash</em>, so the embedded WB settings in my RAW file metadata are pretty close.
In my shot, however, you’ll notice that there is a bit of a white window visible in the left of the frame.
I happen to know that the window is quite white, and should be rendered as such in my image.</p>
<p>As a side note, what I <em>really</em> should have done was to get a good neutral reference for setting the white balance, and to shoot it as part of my setup.
Something like the <a href="http://www.amazon.com/gp/product/B000JLO31C/ref=as_li_ss_tl?ie=UTF8&amp;camp=1789&amp;creative=390957&amp;creativeASIN=B000JLO31C&amp;linkCode=as2&amp;tag=httpblogpatda-20">X-Rite MSCCC ColorChecker Classic</a>, or even a <a href="http://www.amazon.com/gp/product/B000ARHJPW/ref=as_li_ss_tl?ie=UTF8&amp;camp=1789&amp;creative=390957&amp;creativeASIN=B000ARHJPW&amp;linkCode=as2&amp;tag=httpblogpatda-20">WhiBal G7 Certified Neutral White Balance Card</a>.
These are a little pricey, but any good 18% grey card will do, really.
I just happen to know that my window borders are a pure white, so I’m cheating a bit here…</p>
<p>So here is what our image looks like at the moment:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-WB-Camera.jpg" alt="Mairi White Balance Camera" width="598" height="800">
<figcaption>
Image so far, with <strong>White Balance</strong> set to <em>Camera</em> (Default).
</figcaption>
</figure>

<p>The <strong>White Balance</strong> for your image can be adjusted from the <strong>Color</strong> panel:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Color-Default.png" alt="RawTherapee Default Color" width="288" height="422">
<figcaption>
Default Color panel showing <em>Camera</em> white balance.
</figcaption>
</figure>

<p>You can try out some of the presets in the <em>Method</em> drop-down - it includes the typical settings for Sunny, Shade, Flash, etc…
In my case I am going to use the <strong>Spot WB</strong> option.
Clicking that button will let me pick a section of my image that should be color neutral.</p>
<p>In my case, I know that the window border should be white (and color neutral), so I will pick from that area on my image.
Doing so will shift my WB, and will produce a result that looks like this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-WB-window.jpg" data-swap-src="Mairi-WB-Camera.jpg" alt="Mairi White Balance Window" width="598" height="800">
<figcaption>
WB based on white window border.<br>Click to compare to the <em>Camera</em>-based version.
</figcaption>
</figure>
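<p>Under the hood, picking a neutral spot amounts to computing per-channel multipliers that make the sampled patch grey. Here is a minimal sketch of that idea with a made-up patch value (this is not RawTherapee's actual code):</p>

```python
# Hypothetical spot-WB sketch: scale red and blue so a known-neutral
# patch (like the white window border) comes out grey. The patch value
# below is illustrative, not taken from the actual raw file.

def spot_wb_gains(patch_rgb):
    """Per-channel multipliers that neutralize a sampled patch,
    using green as the reference channel (common in raw processing)."""
    r, g, b = patch_rgb
    return (g / r, 1.0, g / b)

def apply_gains(pixel, gains):
    return tuple(c * k for c, k in zip(pixel, gains))

patch = (230.0, 210.0, 180.0)         # a warm-cast sample of the "white" border
gains = spot_wb_gains(patch)
balanced = apply_gains(patch, gains)  # now equal in all three channels
```

<p>Every pixel in the frame gets the same multipliers, which is why one neutral patch is enough to shift the balance of the whole image.</p>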

<p>I also happen to know that the grey-colored walls in the background are close to neutral, but with the slightest hint of blue in them.
If I used the grey wall instead of the white window, I would introduce the slightest warm cast to the image.
I tried it (choosing a section of the grey wall on the right side of the background), and actually prefer the slightly warmer color, personally:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-WB-Wall.jpg" data-swap-src="Mairi-WB-window.jpg" alt="Mairi White Balance Wall" width="598" height="800">
<figcaption>
WB based on the grey wall background (right side of image).<br/>
Click to compare to window WB.
</figcaption>
</figure>

<p>The difference is ever so slight, but it is there.
In my original final image, I went with the balance pulled from the wall, so I will continue with that version here.
If you’re curious, here is what my WB values look like:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/RT-Color-SpotWB-Window.png" alt="RawTherapee Spot White Balance Window" width="288" height="420">
<figcaption>
After setting <strong>Spot WB</strong> to the window.
</figcaption>
</figure>

<p>Seriously, though, don’t rely on luck.
Get a grey/color card to correct color casts if you can…</p>
<p>Settings so far (everything else zero)…</p>
<table>
<thead>
<tr>
<th>Setting</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Exposure Compensation</td>
<td>2.30</td>
</tr>
<tr>
<td>Black</td>
<td>150</td>
</tr>
<tr>
<td>WB Temperature</td>
<td>7300</td>
</tr>
<tr>
<td>WB Tint</td>
<td>0.545</td>
</tr>
</tbody>
</table>
<hr>
<h3 id="noise-reduction-amp-sharpening">Noise Reduction &amp; Sharpening<a href="#noise-reduction-amp-sharpening" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Chances are the RAW image is going to look pretty noisy zoomed in a bit.
This isn’t unusual since we are dealing with RAW data.
There are two noise reduction (NR) options in RT, and we are going to want to use both.</p>
<h4 id="impulse-noise-reduction">Impulse Noise Reduction<a href="#impulse-noise-reduction" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This NR removes pixels that deviate sharply from their surrounding pixels -
basically the “salt and pepper” noise you may notice in your images, where individual pixels are oddly brighter or darker than their neighbors.</p>
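<p>Conceptually, this kind of filter compares each pixel to a local median and replaces only the strong outliers, leaving everything else untouched. A toy 1-D sketch of the idea (not RawTherapee's implementation; the threshold here just stands in for the slider's sensitivity):</p>

```python
# Toy impulse noise reduction: replace a pixel with its neighbourhood
# median only when it deviates from that median by more than a threshold.

def impulse_nr(row, threshold):
    out = list(row)
    for i in range(1, len(row) - 1):
        median = sorted(row[i - 1:i + 2])[1]  # median of 3 neighbours
        if abs(row[i] - median) > threshold:
            out[i] = median                   # an impulse: suppress it
    return out

noisy = [100, 100, 255, 100, 100, 0, 100]    # salt (255) and pepper (0)
clean = impulse_nr(noisy, threshold=50)      # outliers pulled to the median
```

<p>Making the filter more aggressive starts eating genuine fine detail along with the speckles, which is the same trade-off the slider presents.</p>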
<p>If I zoom into a portion of my image (not far from where I was looking at shadows for setting a black point), I’ll see this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/NR-Impulse-Crop-None.png" alt="Noise Reduction Crop None" height="600" width="600">
<figcaption>
Closeup crop with no <strong>Impulse Noise Reduction</strong>.
</figcaption>
</figure>

<p>I’ll normally play a bit with the <strong>Impulse NR</strong> to alleviate the specks while still retaining details.
As with most NR methods - going a bit too far will obliterate some details with the noise.
The trick is to find a happy medium between the two.
In my case, I settled on a value of 55 (the default is 50):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/NR-Impulse-Crop-55.png" data-swap-src="NR-Impulse-Crop-None.png" alt="Impulse Noise Reduction 55" width="600" height="600">
<figcaption>
<strong>Impulse NR</strong> set to a value of 55.<br>Click to compare to no NR.
</figcaption>
</figure>

<p>I could have gone a bit further (and have in other images from this series), and pushed it up to the 60-70 range, but it’s a matter of taste and weighing the tradeoffs.</p>
<h4 id="luminance-chrominance-noise-reduction">Luminance/Chrominance Noise Reduction<a href="#luminance-chrominance-noise-reduction" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>These two NR methods will suppress noise in the luminance channel (brightness), and the blue/red chrominances.</p>
<p>I will use a light hand with these NR values.
The default of 5 for each should already make a noticeable difference.
If you push the <strong>Luminance</strong> NR too far, you’ll smear fine details right off your image.
If you push the <strong>Chrominance</strong> NR too far, you’ll suck the life out of the colors in your image.</p>
<p>Not surprisingly, it’s another trade off.
In my case, I pushed the L/C NR just a tiny bit past the default to 6 and 6 respectively.</p>
<p>You’ll be able to see the effect of chrominance NR by looking at the flat colored grey wall in the background.
Just don’t forget to check other areas of your image with the settings you choose.
For me it was a close look at her iris, where pushing the chrominance NR too far lost some of the beautiful colors in her eye.</p>
<p>Compare the same crop from above with and without Luminance/Chrominance noise reduction applied:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/NR-LC-Crop-6-6.png" data-swap-src="NR-Impulse-Crop-55.png" alt="Noise Reduction Luminance Chrominance 6 6" width="600" height="600">
<figcaption>
With Luminance &amp; Chrominance NR set to 6.<br>Click to compare without. 
</figcaption>
</figure>

<p>If you’ve read my previous article on B&amp;W conversion, you’ll know that I don’t mind a little noise/grain in my images at all, so this level doesn’t bother me in the least.
I could chase the noise even further if I really wanted to, but always remember that doing so is going to be at the expense of detail/color in your final result.
As with most things in life, moderation is key!</p>
<h4 id="sharpening">Sharpening<a href="#sharpening" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>If you are going to sharpen your image a bit, this is probably the best time to do so.
The problem is that <em>usually</em> sharpening is the last bit of post-processing you should do to your image, due to its destructive nature.
Plus, lately I’ve grown accustomed to sharpening by using an extra wavelet scale during my skin retouching in GIMP (you’ll see below in a bit).</p>
<p>So, I’ll avoid sharpening at this stage.
If I was going to use it here at all, it would be just very, very light.
Also, if you do any sharpening at this stage, try to make sure that it happens <em>after</em> any noise reduction in the pipeline.</p>
<p>Settings so far (everything else zero)…</p>
<table>
<thead>
<tr>
<th>Setting</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Exposure Compensation</td>
<td>2.30</td>
</tr>
<tr>
<td>Black</td>
<td>150</td>
</tr>
<tr>
<td>WB Temperature</td>
<td>7300</td>
</tr>
<tr>
<td>WB Tint</td>
<td>0.545</td>
</tr>
<tr>
<td>Impulse NR</td>
<td>55</td>
</tr>
<tr>
<td>Luminance NR</td>
<td>6</td>
</tr>
<tr>
<td>Chrominance NR</td>
<td>6</td>
</tr>
</tbody>
</table>
<hr>
<h3 id="lens-correction">Lens Correction<a href="#lens-correction" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This is actually a section that deserves its own post, detailing methods for correcting for lens barrel distortion with Hugin.
RawTherapee actually has an “Automatic Distortion Correction” that will correct barrel/pincushion distortion in your images.</p>
<p>In my case, I was shooting at the long end of the lens at 50mm, and the distortion is minimal.
So I didn’t bother with correcting this (it might have been needed at a shorter focal length, and being closer to the subject, though).</p>
<h3 id="in-summary">In Summary<a href="#in-summary" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>That about wraps up the RAW “development” I’m going to do on this image.
I try to keep things minimal where possible, though I could have gone further and made color tone and LAB adjustments here as well.
In fact, with the exception of Wavelet Decompose for skin retouching, and some other masking/painting operations, I could do most of what I want for this portrait entirely in RawTherapee.</p>
<p>I know that this reads really long, but the truth is that once I am accustomed to a workflow, this takes less than 5 minutes from start to finish (faster if I’ve already fiddled with other images from the same set).
All I really modified here was <strong>Exposure</strong>, <strong>White Balance</strong>, and <strong>Noise Reduction</strong>.</p>
<p>Finally, as I hinted at earlier, here is the final version after doing all of these RAW edits, as we get ready to bring the image into GIMP for further processing:</p>
<figure>
<a href="Mairi-RAW-Final.jpg" target="_blank">
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Mairi-RAW-Final.jpg" alt="Mairi Final Version from RawTherapee" width="598" height="800">
</a>
<figcaption>
<strong>This</strong> is the one to download if you want to follow along in GIMP below.<br>Just click the image to open in a new window, then save it from there.
</figcaption>
</figure>




<h2 id="gimp-retouching">GIMP Retouching<a href="#gimp-retouching" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Well, here we are.
Finally.
It’s the home stretch now, so don’t give up just yet!</p>
<p>If you didn’t follow along with the RAW processing earlier, you can download the full resolution JPG output from RawTherapee by clicking here:</p>
<p class="aside">
<a href="Mairi-RAW-Final.jpg">Download the full resolution JPG output from RawTherapee</a>
</p>

<p>Armed with our final results from RawTherapee, we’re now ready to do a little retouching to the image.</p>
<p>The overall workflow and the order in which I approach them is dependent on my mood mostly.
Most times, I enjoy doing skin retouching, so I’ll often jump right in with <strong>Wavelet Decompose</strong> and play around.
Really, though, I should start shifting Wavelet Decompose to a later part of my workflow, and fix other things like removing objects from the background and fixing flyaway hairs first.</p>
<p>This way, I can directly re-use wavelet scales for a slight wavelet sharpening while I have them.</p>
<p>Looking at this image so far, I can spot a few broad things that I want to correct, and I’m going to address them in this order:</p>
<ol>
<li>Touchup flyaway hairs</li>
<li>Crop &amp; remove distracting background elements</li>
<li>Skin retouching with Wavelet Decompose</li>
<li>Contour paint highlights</li>
<li>Apply some color curves</li>
</ol>
<hr>
<h3 id="touchup-flyaway-hairs">Touchup Flyaway Hairs<a href="#touchup-flyaway-hairs" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>If you can have the model bring a hairbrush with them to a shoot - DO IT.
Seriously.
Your eyes and carpal tunnel will thank me later.</p>
<p>Even with a brush or hairstylist/make-up artist the occasional hair will decide to rebel and do its own thing.
This will require us to get down to the details and fix those hairs up.</p>
<p>Luckily for me, Mairi’s hair mostly cooperated with us during the shoot (and where it didn’t I kind of liked it).
To illustrate this step, though, I’m going to clean up some of the stray hairs on the left side of the image (the right side of her face).</p>
<p>Also in my favor, the background is a consistent color/texture.
This means cloning out these hairs shouldn’t be too much of a problem, but there are still some things you should keep in mind while doing this.</p>
<p>Here is the area that I’d like to clean up a little bit:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Hair-Left-Original.jpg" alt="Mairi Hair Left Original" width="600" height="1256">
<figcaption>
Sometimes you just have to work one strand of hair at a time…
</figcaption>
</figure>

<figure style="float:right; margin: 0 0 1rem 1rem;">
<img border="0" src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Hair-Clone-Tool.png" alt="GIMP Clone Tool Hair" width="165" height="587">
</figure>

<p>I will usually use a hard-edged brush because a soft-edge will smear details on its edges, and can often be spotted pretty easily by the eye.
This works because the background is relatively constant in grain and color.</p>
<p>I’ll sample from an area near the hair I want to remove, and set the brush to be <strong>“Aligned”</strong>.
I also try to keep the brush size as small as I can and still remove the hair.</p>
<p>The thing to keep in mind is how the hair is actually <em>flowing</em>, and to follow that.
I will often follow outlying strands of hair back to where they start from the head, and begin cloning them out from there.</p>
<p>I also try not to get too ambitious (some stray hairs are sometimes fine).
Removing too many at once can lead to unrealistic results, so I try to be conservative, and to constantly zoom out and check my work visually.</p>
<p>Try not to leave hairs prematurely cut off in space if possible; it tends to look a bit distracting.
If you want to remove a hair that crosses over another strand that you may want to keep, make sure to adjust the source of the clone brush so you can do it without leaving a gap in the leftover strand.</p>
<p>Here is a quick 5 minute touchup of some of the stray hairs (click to compare to the original):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Hair-Left-Clean.jpg" alt="GIMP Hair Clean Clone" data-swap-src="GIMP-Hair-Left-Original.jpg" width="600" height="1256">
<figcaption>
Click to compare.
</figcaption>
</figure>

<p>Occasionally, you’ll need to fix hairs that are crossing over other hair (sort of like a virtual “brushing” of the hair).
In these cases, you really have to pay careful attention to <em>how the hair flows</em> and to use that as a guide when choosing a sample point with either the clone or heal brush.</p>
<p>If this sounds like a lot of work - it is.
Thankfully, once you’ve become accustomed to doing it, and doing it well, you’ll find yourself picking up a lot of speed.
It’s one of those things that’s worth learning to do right, and to let practice speed it up for you.</p>
<p>I actually like the cascading hair around her face opening up to a pretty color, so that’s about as far as I’m going to go with stray hairs on this image.</p>
<h3 id="fixing-the-background-amp-cropping">Fixing the Background &amp; Cropping<a href="#fixing-the-background-amp-cropping" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>With the limited space I had to shoot this portrait, it’s no surprise that I had gotten some undesirable background elements, like the window edges.</p>
<p>There’s a couple of ways I could go about fixing these - I could fix the background in place, or I can crop out the elements I don’t want.</p>
<p>In my final version shown in the previous post, I wanted to crop tighter, so it worked out well to remove the window on the left.
To illustrate how we can remove the window, I’m going to leave the aspect ratio as it is, and walk through removing the distracting background elements.</p>
<h4 id="removing-background-elements">Removing Background Elements<a href="#removing-background-elements" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Because most of the background is already a (relatively) solid color, this isn’t too hard.
There’s just a couple of simple things to keep in mind.</p>
<p>The way I’m going to approach this is to make a duplicate of my current layer, and to move the duplicate into place such that the background will cover up parts of the window I want to remove.
Then I’ll mask the duplicate layer to hide the window.</p>
<p>I start by choosing an area of the background that’s similar in color/tone:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Start.jpg" alt="GIMP Mairi Background Fix Start" width="598" height="800">
<figcaption>
Thankfully the background is relatively consistent.
</figcaption>
</figure>

<p>I’ll then move the duplicate layer so that the green area covers up the window to the left:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-End.jpg" alt="GIMP Mairi Background Fix End" width="598" height="800">
<figcaption>
Position the duplicate layer so the green area now covers up the window.
</figcaption>
</figure>

<p>Here is what this looks like in GIMP, with the duplicate layer set to 90% opacity over the base layer (so you can see where the window edge is):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Shifted.jpg" alt="GIMP Mairi Background Shift" width="600" height="720">
<figcaption>
Moving the duplicate layer over to cover the window.
</figcaption>
</figure>

<p>Now I’ll add a black (fully transparent) layer mask over the duplicate layer, and I’ll paint white on the mask to cover up the window edge (with a soft-edged brush).
This gives me results that look like this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Shifted-Masked.jpg" alt="Mairi GIMP background shift masked" width="600" height="720">
<figcaption>
After applying a transparent mask, and painting white over the window edge.
</figcaption>
</figure>

<p>The problem is that the background area from the duplicate is a bit darker than the base layer background, and the seam is visible where they are masked.
To fix this, I can just adjust the lightness of the duplicate layer until I get a good match.</p>
<p>I used Hue-Saturation to adjust the lightness (because I wasn’t sure if I would need to adjust the hue slightly as well - turns out I didn’t).
I found that increasing the <em>Lightness</em> value to 3 got me reasonably close:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Shifted-Masked-Lightened.jpg" alt="GIMP Mairi Background lightened" width="600" height="720">
<figcaption>
After increasing duplicate layer <em>Lightness</em> to 3.
</figcaption>
</figure>

<p>To further fix the lower part of the window, I just repeated all the steps above with another duplicate of the base layer, just shifted to cover the lower part of the window.
I had to mask along her sweater.
Here is the result after repeating the above steps:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Masked.jpg" alt="GIMP Mairi background masked finished" width="598" height="800">
<figcaption>
After repeating above steps for the lower left corner.
</figcaption>
</figure>

<p>The results are ok, but could be just a little bit better.
Visually, the falloff of light on the background doesn’t match what’s happening on her body, so I added a small gradient to the lower left corner to give it a more natural looking light falloff:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Masked-Gradient.jpg" alt="GIMP Mairi background masked gradient" width="598" height="800">
<figcaption>
Adding a gradient to the lower left background helps it look more natural.
</figcaption>
</figure>

<p>Fixing the slight window/shadow on the right is easily done with a clone/heal tool combination.
The final result of quickly cleaning up the background is this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/GIMP-Background-Final.jpg" alt="GIMP Mairi background final fix" width="598" height="800">
<figcaption>
Finished cleaning up the background.
</figcaption>
</figure>

<p>I could have spent a little more time on this, but I’m happy with the results for the purpose of this post.
If your cloning efforts leave obvious transitions between tones, the Heal tool can be helpful for alleviating this (especially when used with large brush radii, just be prepared to wait a bit).</p>
<p>With the background squared away, we can move on to one of my favorite things to play with, skin retouching!</p>
<h3 id="skin-retouching-with-wavelet-decompose">Skin Retouching with Wavelet Decompose<a href="#skin-retouching-with-wavelet-decompose" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I had <a href="http://blog.patdavid.net/2011/12/getting-around-in-gimp-skin-retouching.html">previously written about using Wavelet Decompose</a> as a means for touching up skin.
As I said in that post, and will repeat here:</p>
<blockquote>
<p>The best way to utilize this tool is <strong>with a light touch</strong>.</p>
</blockquote>
<p>Re-read that sentence and keep it in mind as we move forward.</p>
<p>Don’t make mannequins.</p>
<p>Ok, with a layer that contains all of the changes we’ve made so far rolled up, we can now decompose the image to wavelet scales.
In my case I almost always use the default of 5 scales unless there’s a good reason to increase/decrease that number.</p>
<p>For anyone new to this method, the basic idea of Wavelet Decompose is that it will break down your images to multiple layers, each containing a specific set of details based on their relative size, and a residual layer with color/tonal information.
For instance, Wavelet scale 1 will contain only the finest details in your image, while each successive scale will contain larger and larger details.</p>
<p>The benefit to us is that these details are isolated on each layer, meaning we can modify details on one layer without affecting other details from other layers (or adjust the colors/tones on the residual layer without modifying the details).</p>
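<p>If you're curious how such a separation can work, it can be sketched as differences of progressively blurred copies: each scale keeps the detail between two blur sizes, and the residual keeps what's left. This is only a conceptual sketch (with a crude box blur standing in for the plugin's real wavelet filter), but it shows why adding all the layers back reconstructs the image exactly:</p>

```python
# Conceptual frequency separation: detail scale N = (image blurred at the
# previous radius) minus (image blurred at the next radius); the residual
# is the final, coarsest blur. A 1-D box blur stands in for the low-pass.

def box_blur(row, radius):
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def decompose(row, levels=3):
    scales, current = [], row
    for level in range(levels):
        blurred = box_blur(current, 2 ** level)  # coarser each level
        scales.append([c - b for c, b in zip(current, blurred)])
        current = blurred
    return scales, current                       # detail scales + residual

row = [10.0, 12.0, 50.0, 11.0, 10.0, 13.0]
scales, residual = decompose(row)

# Adding every detail scale back onto the residual recovers the original,
# which is why editing one scale leaves the others untouched.
rebuilt = residual
for s in scales:
    rebuilt = [r + d for r, d in zip(rebuilt, s)]
```

<p>That reconstruction property is exactly what makes the technique safe: blur one scale, and only details of that size change in the recombined image.</p>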
<p>Here is an example of the resulting layers we get when running Wavelet Decompose:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Example.jpg" alt="GIMP Wavelet Separation Example" width="600" height="400">
<figcaption>
Wavelet scales from 1 (finest) to the Residual
</figcaption>
</figure>

<p>After running Wavelet Decompose, we’ll find ourselves with 6 new layers: Residual + 5 Wavelet scales.
I am going to start on Wavelet scale 5.</p>
<p>If you hold down <strong>Shift</strong> and click on a layer visibility icon, you’ll isolate just that single layer as visible.
Do this now to <em>Wavelet scale 5</em>, and let’s have a look at what we’re dealing with.</p>
<p>I usually work on skin retouching in sections.
Usually I’ll consider the forehead, nose, cheeks to smile lines, chin, and upper lip all as separate sections (trying to follow normal facial contours).
Something like this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Breakdown.jpg" alt="GIMP Wavelet Decompose Region Breakdown" width="587" height="800">
<figcaption>
Rough breakdown of each area I’ll work on separately
</figcaption>
</figure>

<p>I’m going to start with the forehead.
I’ll work with detail scales first, and follow up with touchups on the residual scale if needed to even out color tones.
Here is what Wavelet scale 5 looks like isolated:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-5.jpg" alt="GIMP Wavelet Scale 5 forehead" width="600" height="303">
<figcaption>
Forehead, Wavelet scale 5
</figcaption>
</figure>

<p>It may not seem obvious, especially if you don’t use wavelet scales much, but there are a lot of large-scale tonal imperfections here.
Look at the same image, but with the levels normalized:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-5-normalized.jpg" alt="GIMP Wavelet Scale 5 forehead" width="600" height="303">
<figcaption>
These are the tones we want to smooth out
</figcaption>
</figure>

<p>Normalizing the wavelet scale lets you see the tones that we want to smooth out.</p>
<p>My normal workflow is to have all of the wavelet scales and residual visible (each of the wavelet scales has a layer blending mode of <strong>Grain Merge</strong>).
This way I’m visually seeing the overall image results.
Then I will select each wavelet scale as I work on it.</p>
<p>I’ll normally use the <strong>Free Select Tool</strong> to select the forehead.
I’ll usually have the <strong>Feather edges</strong> option turned on, with a large radius (maybe 1% of the smallest image dimensions roughly - so ~35 pixels here).
Remember to have the layer you want to work on selected.</p>
<p>With my area selected, I’ll often run a <strong>Gaussian Blur</strong> (IIR) over the skin to smooth out those imperfections.
The radius you use is dependent on how strong you want to smooth the tones out.
Too much, and you’ll obliterate the details on that scale, so start small.</p>
<p>Here is my selection I’ll work with (remember - my active layer is Wavelet scale 5):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-orig-selection.jpg" alt="GIMP Wavelet Scale selection" width="600" height="303">
<figcaption>
Forehead with selection (feather turned on to 35px)
</figcaption>
</figure>

<p>Now I’ll experiment with different <strong>Gaussian Blur</strong> radii to get a feel for how it will affect my entire image.
I settled on a high-ish value of 35px radius, which gave me this as a result (click to compare to original):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-5-35px.jpg" data-swap-src="Wavelet-Forehead-orig.jpg" alt="GIMP Wavelet Scale selection" width="600" height="303">
<figcaption>
Forehead, Wavelet scale 5 after <strong>Gaussian Blur (IIR)</strong> 35px radius.<br>Click to compare.
</figcaption>
</figure>

<p>Just with this small change to a single wavelet scale, we can already see a remarkable improvement to the underlying skin tones, and we haven’t hurt any of the fine details in the skin!</p>
<p>In some cases, this may be all that is required for a particular area of skin.
I could push things just a tiny bit further if I wanted by working globally again on a finer wavelet scale, but I’ve learned the hard way to back off early if possible.</p>
<p>Instead, I’ll look at specific areas of the skin that I may want to touch up.
For instance, the two frown lines in the center of the forehead.
I may not want to remove them completely, but I may want to downplay how visible they are.
Wavelet scales are perfect for this.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-5-35px-frown.jpg" alt="GIMP Wavelet Scale selection" width="600" height="303">
<figcaption>
Small frown lines I want to reduce
</figcaption>
</figure>

<p>Because each of the Wavelet scales is set to a layer blend mode of <strong>Grain Merge</strong>, any area that is a completely neutral grey will not affect the final image.
This means that you can paint with medium grey RGB(128,128,128) to completely remove a detail from a layer.</p>
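<p>The arithmetic behind that trick is simple: Grain Merge computes roughly <em>result = base + layer - 128</em> per channel (clamped), so a layer value of 128 contributes nothing. A small sketch of that behavior (simplified; GIMP's actual blend has its own rounding details):</p>

```python
# Grain Merge (per channel, simplified): base + layer - 128, clamped.
# Medium grey (128) is the identity, which is why painting grey on a
# wavelet scale erases that scale's detail from the final image.

def grain_merge(base, layer):
    return max(0, min(255, base + layer - 128))

no_detail = grain_merge(200, 128)   # grey layer: base passes through
darker    = grain_merge(200, 100)   # layer below grey darkens the result
brighter  = grain_merge(200, 156)   # layer above grey brightens it
```

<p>This is also why the Heal tool works so well on a scale layer: it pulls the healed region back toward neutral grey, which simply removes that detail from the recombined image.</p>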
<p>You can also use the Blur/Sharpen brush to selectively blur an area of the image as well.
(I’ve found that the Blur tool works best at smaller wavelet scales - it doesn’t appear to make a big difference on larger scales).</p>
<p>So, if we look at Wavelet scale 5 where the frown lines are, we’ll see there’s not much there - it was already smoothed earlier.
If we look at Wavelet scale 4 though, we’ll see them prominently.</p>
<p>I’ll use the <strong>Heal Tool</strong> to sample from the same wavelet scale in a different location, and paint over just the frown lines.
I’ll work on Wavelet scale 4 first.
If needed, I can also move down to Wavelet scale 3 and repeat the same procedure there.</p>
<p>A couple of quick passes just over the frown lines, and the results look like this:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Forehead-5-35px-frown-fixed.jpg" data-swap-src="Wavelet-Forehead-5-35px.jpg" alt="GIMP Wavelet Scale selection" width="600" height="303">
<figcaption>
Cloning over frown line on scale 4 &amp; 3.<br>Click to compare. 
</figcaption>
</figure>

<p>I could continue over any other blemishes I may want to correct, but small individual blemishes can usually be fixed with a little spot healing quickly.</p>
<p>Moving on to the nose, the tones here have slightly different requirements.
Still, on Wavelet scale 5 they are similar to the forehead, and a similar amount of blurring will nicely smooth them out.
Here is the nose after a slight blurring (click to see original):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Nose-5-35px.jpg" data-swap-src="Wavelet-Nose-Orig.jpg" alt="GIMP mairi wavelet decompose nose" width="275" height="510">
<figcaption>
Nose with 35px Gaussian blur on Wavelet scale 5.<br>Click to compare.
</figcaption>
</figure>

<p>There is a bit of color in the nose that is slightly uneven that I’d like to fix.
This is relatively easy to do with wavelet scales, because I can modify the underlying color tones of the nose without destroying the details on the other scale layers.</p>
<p>In this case, I’ll work on the Wavelet residual layer.</p>
<p>I’ll use a <strong>Heal Tool</strong> with a large, soft brush.
I’ll sample from about the middle of the nose, and clean up the slightly redder skin by healing new tones into that area.
I’ll follow the contours of the nose and the way that the light is hitting it in order to match the underlying tones to what is already there.</p>
<p>After a little work these are the results (click to compare to original):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Nose-5-35px-heal.jpg" data-swap-src="Wavelet-Nose-Orig.jpg" alt="GIMP Wavelet Scale selection nose" width="275" height="510">
<figcaption>
Healing on the Wavelet residual scale to even tones.<br>Click to compare.
</figcaption>
</figure>

<p>Next I’ll take a look at the eyes and cheek on the brighter side of her face.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Cheek-Orig.jpg" alt="GIMP Mairi wavelet decompose cheek original" width="473" height="716">
<figcaption>
Overall tones are good here, just some slight retouching required
</figcaption>
</figure>

<p>The tones here are not bad, particularly on scale 5.
After making my selection, I’ve applied a blur at 25px just to smooth things a bit.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Cheek-5-25px.jpg" data-swap-src="Wavelet-Cheek-Orig.jpg" alt="GIMP Mairi wavelet decompose cheek " width="473" height="716">
<figcaption>
A slight 25px blur to smooth overall tones.<br>Click to compare. 
</figcaption>
</figure>

<p>The dark tones under/around the eyes are a bit different to deal with.
As before, I’ll turn to working on the Wavelet residual layer to brighten up the color tones under the eyes.</p>
<p>I use the <strong>Heal Tool</strong> to sample from a brighter area of skin near the eye.
Then I’ll carefully paint into the dark tones to brighten them up, and to even the colors out with the surrounding skin.</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Cheek-residual-eyes.jpg" data-swap-src="Wavelet-Cheek-Orig.jpg" alt="GIMP Mairi wavelet residual eyes" width="473" height="716">
<figcaption>
Carefully cloning/healing brighter skin tones under the eyes.<br>Click to compare to original.
</figcaption>
</figure>

<p>Wavelets are amazing for this type of adjustment, because I can brighten up/change the skin tones under the eyes without affecting the fine skin details here like small wrinkles and pores.
The textural character remains unchanged, but the underlying skin tones can be modified easily.</p>
<p>The same can be done for the slightly red tones on the cheek and at the edge of her jaw, which I did.</p>
<p>I’m purposefully not going to modify the fine wrinkles under the eyes, either.
These small imperfections will often bring great character to a face, and unless they are very distracting or bad, I find it best to leave them be.</p>
<p>A good tip: even though these small imperfections may seem large when you’re pixel peeping, get into the habit of zooming out to a sane zoom level and evaluating the image from there.
Sometimes you’ll find you’ve gone too far, and things begin to creep into mannequin territory.</p>
<p>Don’t make mannequins!</p>
<h4 id="in-summary-again">In Summary Again<a href="#in-summary-again" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>This entire post is getting a little long, so I’m going to stop here with the skin retouching breakdown.</p>
<p>Also, that’s honestly about it as far as the process goes.
Just apply the same processes described above to the areas that are left (right cheek, chin, and upper lip).</p>
<p>To summarize, here are the tools/steps I’ll use with Wavelet Decompose to retouch skin:</p>
<ul>
<li>Area selection with Gaussian blur to even out overall tones at a particular scale</li>
<li>Paint with grey, Clone, Heal on wavelet scales to modify specific details</li>
<li>Clone/Heal on wavelet residual scale to modify underlying skin tones/colors (but leave details intact)</li>
</ul>
<p>Here are the final results after using only Wavelet Decompose (click to compare to original):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Wavelet-Face-Final.jpg" data-swap-src="Wavelet-Face-Original.jpg" alt="Mairi GIMP Wavelet face final retouching" width="587" height="800">
<figcaption>
After retouching in Wavelet Scales only.<br>Click to compare to original.
</figcaption>
</figure>




<h3 id="spot-touchups">Spot Touchups<a href="#spot-touchups" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>There may be a few things that still need a little spot touchup that I didn’t bother to mess with in Wavelet scales.</p>
<p>In my case, I’ll clone/heal out some small hairs along the jaw line, and touch up some small spots of skin individually.
This is really just a light cleaning, and I usually do this at the pixel level (obnoxiously zoomed in, and small brush sizes).</p>
<p>I also use a method for checking the skin for areas that I may want to touchup, but might not be immediately visible or noticeable.
It uses the fact that the Blue channel of an image can show you just how scary skin can look (seriously, color decompose any image of skin, and look at the blue channel).</p>
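<p>As a rough sketch of that check, here is how you might pull the blue channel out of an image with Python and NumPy (the array here is a synthetic stand-in for a real photo, not part of the original workflow):</p>

```python
import numpy as np

# Synthetic stand-in for an RGB image of skin (H x W x 3, 8-bit).
# With a real photo you would load the pixels into an array first.
rgb = np.full((4, 4, 3), (222, 180, 160), dtype=np.uint8)

blue = rgb[..., 2]   # the blue channel, viewed as a grayscale image
# For typical skin tones the blue channel is the darkest of the three,
# which is why blemishes and texture stand out so harshly in it.
print(int(blue[0, 0]))   # prints 160
```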
<h3 id="contour-painting-highlights">Contour Painting Highlights<a href="#contour-painting-highlights" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>One of the downsides of using Wavelet scales for modifying skin is that if you’re blurring on some of the scales, you’ll sometimes decrease the local contrast in your image.
This isn’t so bad, but you may want to bring back some of the contrast in areas you’ve touched up.</p>
<p>What I’m going to do is basically add some transparent layers over my image, and set their layer blend modes to <strong>“Overlay”</strong>.</p>
<p>Then I’ll paint white over contours I want to enhance, and adjust the opacity of the layer to taste.
(This is highly subjective, so I’m going to just show a quick idea of how I might approach it - you can get as nuts with this as you like…).</p>
<p>Here I’ve added a new transparent layer on top of my image, and set the Layer Blend Mode to <em>Overlay</em>.
Then I painted white onto contours that I want to highlight:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Contour-Face.jpg" alt="Mairi GIMP Contour dodge burn highlight" width="587" height="800">
<figcaption>
Painting on the <em>Overlay</em> layer along contours to highlight
</figcaption>
</figure>

<p>It looks strange right now, but I’ll add a large radius Gaussian Blur to smooth these tones out.
I used a blur radius of <strong>111 pixels</strong>.
Here is what it looks like after the blur:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Contour-Face-Blur.jpg" alt="Mairi GIMP Contour dodge burn highlight gaussian blur" width="587" height="800">
<figcaption>
Blurring the <em>Overlay</em> layer with Gaussian Blur (111 pixel radius)
</figcaption>
</figure>

<p>Finally, I’ll adjust the opacity of the <em>Overlay</em> layer to taste.
I’ll usually dial this way, way down so that it’s not so obvious.
Here, I’ve dialed the opacity back to about 20%, which leaves us with this (click to compare):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Contour-Face-Blur-Opacity-20.jpg" data-swap-src="Wavelet-Face-Final.jpg" alt="Mairi GIMP Contour dodge burn highlight final One" width="587" height="800">
<figcaption>
After setting the <em>Overlay</em> layer to 20% opacity (still a little high for me, but it’s good for illustration).<br>Click to compare.
</figcaption>
</figure>
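<p>If you’re curious about the math behind this dodge, here is a small NumPy sketch of the paint-blur-overlay sequence. It uses the textbook overlay formula; GIMP’s legacy <em>Overlay</em> mode famously behaved more like Soft Light, so treat this as an illustration of the idea rather than an exact GIMP emulation (the stroke position, blur sigma, and opacity are made-up values):</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def overlay(base, layer):
    """Textbook overlay blend; all values normalized to [0, 1]."""
    return np.where(base < 0.5,
                    2 * base * layer,
                    1 - 2 * (1 - base) * (1 - layer))

base = np.full((64, 64), 0.55)          # stand-in for the portrait
strokes = np.zeros_like(base)
strokes[20:24, :] = 1.0                 # white painted along a "contour"

soft = gaussian_filter(strokes, sigma=10)   # the large-radius blur
opacity = 0.2                               # layer opacity dialed way down

# Composite: the blurred stroke acts as the overlay layer's alpha.
lit = overlay(base, np.ones_like(base))     # overlay with pure white
alpha = opacity * soft
out = base * (1 - alpha) + lit * alpha      # subtle lift along the contour
```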

<p>I will sometimes add a few more of these layers to enhance other parts of the image as well.
I’ll use it (very lightly!!!) to enhance the eyes a bit, and in this case, I used an even larger layer to add some volume and highlights to her hair as well.</p>
<p>Here are the results after adding some eye and hair highlight layers as well (click to compare with no highlights):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Contour-Final.jpg" data-swap-src="Contour-Original.jpg" alt="mairi gimp contour dodge burn final" width="598" height="800">
<figcaption>
Face, eyes, and hair contour painting result.<br>Click to compare. 
</figcaption>
</figure>




<h3 id="color-curves">Color Curves<a href="#color-curves" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Finally, I like to apply some color curves that I have around and use often.
I’ve been heavily favoring a Portra emulation curve from <a href="http://www.prime-junta.net/pont/How_to/100_Curves_and_Films/_Curves_and_films.html">Petteri Sulonen</a> that he calls <em>Portra-esque</em>, especially for skin.
It has a very gentle rolloff in the highlights that renders really pretty colors.</p>
<p>If I feel it’s too much, I can always apply it on a duplicate of my image so far, and adjust opacity to suit.
Here is the same image with only the <em>Portra-esque</em> curve applied:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Curves-Portra.jpg" data-swap-src="Contour-Final.jpg" alt="mairi gimp color tone curve portra" width="598" height="800">
<figcaption>
Image so far, with a <em>Portra-esque</em> color curve applied.<br>Click to compare.
</figcaption>
</figure>

<p>If you’re curious, I had written up a much more in-depth look at color curves for skin here: <a href="http://blog.patdavid.net/2012/07/getting-around-in-gimp-more-color.html">Getting Around in GIMP - More Color Curves (Skin)</a>.
You can actually download the curves for Portra, Velvia, Provia emulation on that page.</p>
<h3 id="final-sharpening">Final Sharpening<a href="#final-sharpening" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Finally.
The last step before saving out our image!</p>
<p>For sharpening, I actually like to use one of the Wavelet scales that I generated earlier.
I’ll just duplicate a low scale, like 2 or 3, and drag it on top of my layer stack to sharpen the details from that scale.</p>
<p>In this case, I liked the details from Wavelet scale 2, so I duplicated that layer, and dragged it on top of my layer stack.
The blend mode is already set to <em>Grain Merge</em>, so I don’t have to do anything else:</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Sharpen-Wavelet-2.jpg" data-swap-src="Curves-Portra.jpg" alt="mairi gimp sharpen wavelet scale" width="598" height="800">
<figcaption>
Wavelet scale 2 copied to the top of the layer stack for sharpening.<br>Click to compare.
</figcaption>
</figure>
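<p>In case the Grain Merge trick seems like magic: GIMP’s Grain Extract is roughly <em>base - layer + 0.5</em> and Grain Merge is <em>base + layer - 0.5</em> (in normalized values, i.e. &plusmn;128 at 8-bit). So stacking a copy of a detail scale on top in Grain Merge simply adds that scale’s detail a second time. A tiny numeric sketch (the 1-D “image” values are invented for illustration):</p>

```python
import numpy as np

def grain_extract(base, layer):
    # GIMP Grain Extract (normalized): base - layer + 0.5
    return np.clip(base - layer + 0.5, 0.0, 1.0)

def grain_merge(base, layer):
    # GIMP Grain Merge (normalized): base + layer - 0.5
    return np.clip(base + layer - 0.5, 0.0, 1.0)

# A tiny 1-D "image": a soft edge.
img = np.array([0.40, 0.42, 0.50, 0.58, 0.60])
lf  = np.array([0.42, 0.44, 0.50, 0.56, 0.58])  # stand-in for a blurred copy

scale = grain_extract(img, lf)       # detail layer, centered on 0.5

# Duplicating the detail layer on top in Grain Merge mode
# re-adds that scale's detail, steepening the edge: sharpening.
sharp = grain_merge(img, scale)
```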




<h2 id="finally-at-the-end">Finally at the End<a href="#finally-at-the-end" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>If you’re still with me - you really deserve a medal.
I’m sorry this has run as long as it has, but I wanted to try to be as complete as I could.</p>
<p>So, for a final comparison, here is the image we finished with (click to compare to what we started with before retouching in GIMP):</p>
<figure>
<img src="https://pixls.us/articles/an-open-source-portrait-mairi/Sharpen-Wavelet-2.jpg" data-swap-src="Mairi-RAW-Final.jpg" alt="mairi gimp final sharpen wavelet" width="598" height="800">
<figcaption>
Our final result.<br>Click to compare.
</figcaption>
</figure>

<p>Not too bad for a little bit of fiddling, I think!  I know that this tutorial reads really, really long, but I promise that once you’ve understood the processes being used, it’s actually very quick in practice.</p>
<p>I hope that this has been helpful to you in some way!  If you happen to use anything from this tutorial please share it.
I’d love to see what others do with these techniques.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Software and Noise ]]></title>
            <link>https://pixls.us/blog/2015/05/software-and-noise/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/05/software-and-noise/</guid>
            <pubDate>Mon, 18 May 2015 16:38:01 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/05/software-and-noise/Unnecessary_Noise.jpg" /><br/>
                 <h1>Software and Noise</h1>  
                 <h2>Wonderful response from everyone</h2>   
                <p>I want to take a moment to thank everyone for all of the kind words and support over the past week.
A positive response can be a great motivator to help keep the momentum rolling (and everyone really has been super positive)!</p>
<h2 id="software"><a href="#software" class="header-link-alt">Software</a></h2>
<p>The <strong><a href="https://pixls.us/software/">Software page</a></strong> is live with a decent start at a list.</p>
<p>I posted an announcement of the site launch over on <a href="http://www.reddit.com">reddit</a> and one of the comments (from <a href="http://www.reddit.com/r/photography/comments/35b7y4/new_community_for_freeopen_source_photography/cr30jeo">/u/cb900crdr</a>) was that it might be helpful to have a list of links to programs.
I had originally planned on having a page to list the various projects but removed it just before launch (until I could find some time to gather all the links).</p>
<p>This was as good a reason as any to take a shot at putting a page together.
I brought the topic up <a href="https://discuss.pixls.us/t/free-software-list-and-links/193/8">on the forums</a> to get input from everyone as well.
If you see that I’ve missed anything, please consider adding it to the list on the forum.
<!-- more --></p>
<p>I think it may be helpful to add at least a sentence or two of description to identify what each project does for those not familiar with them.
For instance, if you didn’t know what Hugin was before, the name by itself is not very helpful (or GIMP, or G’MIC, etc…).
The problem is how to do it without cluttering up the page too much.</p>
<h2 id="noise"><a href="#noise" class="header-link-alt">Noise</a></h2>
<p>I had also mentioned <a href="https://discuss.pixls.us/t/noise-free-shadows-dual-exposure/204">in this post</a> on the forums a neat method for basically replacing shadow tones in one image with those from a second, overexposed image.
The approach is similar in theory to tonemapping an HDR and is originally described by <a href="http://www.guillermoluijk.com/article/nonoise/index_en.htm">Guillermo Luijk</a> (back in 2007).</p>
<p>The process basically exploits the fact that digital sensors have a linear response (a basis for the advice ETTR - <em>“Expose to the Right”</em>).
His suggested workflow is to take a second exposure of the scene at +4EV,
then adjust that second image’s exposure down -4EV and replace the shadow tones in the base image with the adjusted (noise-reduced) ones.</p>
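<p>Until that article lands, here is a rough NumPy sketch of the idea, working on linear sensor values (the shadow threshold and crossfade width are illustrative numbers of mine, not from Guillermo’s article):</p>

```python
import numpy as np

EV = 4  # the second frame was overexposed by +4EV

def blend_shadows(base, bright, threshold=0.10, width=0.05):
    """Replace shadow tones in `base` (linear RGB, 0-1) with a +4EV
    frame pulled back down -4EV.  Because the sensor is linear,
    -4EV is just a division by 2**4."""
    pulled = bright / (2 ** EV)
    # Crossfade mask: 1 in deep shadows, fading to 0 above the threshold.
    mask = 1.0 - np.clip((base - threshold) / width, 0.0, 1.0)
    return base * (1.0 - mask) + pulled * mask

# Toy example: a deep shadow pixel and a midtone pixel.
base   = np.array([0.02, 0.50])
bright = np.clip(base * 2 ** EV, 0.0, 1.0)   # what a +4EV frame records
out = blend_shadows(base, bright)
# Shadow pixel comes from the clean +4EV frame; midtone is untouched.
```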
<p>I will write an article soon describing the workflow in a bit more detail.  Stay tuned!</p>
<p><small class="lede-attr">Lede image: 
<a href='https://www.flickr.com/photos/pamhule/4461831240'><em>Unnecessary Noise Prohibited</em> </a> by <a href='https://www.flickr.com/photos/pamhule/'>Jens Schott Knudsen</a> <a class='cc' href='https://creativecommons.org/licenses/by-sa/2.0/' target='_blank'>cbn</a>
</small></p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ It's Alive! ]]></title>
            <link>https://pixls.us/blog/2015/05/it-s-alive/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/05/it-s-alive/</guid>
            <pubDate>Thu, 07 May 2015 21:25:16 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/05/it-s-alive/nautilus.jpg" /><br/>
                 <h1>It's Alive!</h1>  
                 <h2>Time to finally launch...</h2>   
                <p>Well, here we are.
I just checked the first blog post and it was dated August 24<sup>th</sup>, 2014.
I had probably been working on the back end of the site getting things running for the basic blog setup a few weeks prior to that.
It’s <strong>almost</strong> been a full year since I started working on this idea.</p>
<p>So it is with great pleasure that I can finally say…</p>
<h2 id="welcome-to-pixls-us-https-pixls-us-"><a href="#welcome-to-pixls-us-https-pixls-us-" class="header-link-alt">Welcome to <a href="https://pixls.us">PIXLS.US</a>!</a></h2>
<p>If you’re just now joining us, let me reiterate the mission statement for this website.</p>
<blockquote>
<p><strong>PIXLS.US Mission Statement</strong></p>
<p>To provide tutorials, workflows and a showcase for high-quality photography using Free/Open Source Software.</p>
</blockquote>
<p>I started this site because the world of F/OSS photography is fractured across different places.
There’s no good single place for photographers to collaborate around free software workflows, as well as a lack of good tutorials aimed at high-quality processing with free software.</p>
<!-- more -->
<h3 id="tutorials"><a href="#tutorials" class="header-link-alt">Tutorials</a></h3>
<p>I have personally been writing tutorials on my blog for a few years now (holy crap).
I primarily started doing it because while there are many tutorials for photo editing, they almost always stopped short of working towards high-quality results.
The few tutorials that did try to address high quality results were all quite a few years old (and often in need of updating).</p>
<p>With your help, I’m hoping to change that here.</p>
<h3 id="workflows"><a href="#workflows" class="header-link-alt">Workflows</a></h3>
<p>Workflows are another thing that doesn’t often get described:
what a complete workflow looks like with free software.
For instance, some thoughts off the top of my head:</p>
<ul>
<li>Creating a panorama image from start to finish.</li>
<li>Shooting and editing fashion images.</li>
<li>Taking great portrait images, and how to retouch them.</li>
<li>What to watch out for when shooting macro.</li>
<li>Planning and shooting great astrophotography.</li>
<li>How to approach landscape editing.</li>
<li>Creating a composite dream image.</li>
</ul>
<p>These are just some of the ideas around workflows.
It also doesn’t have to be only software-focused.
There is a wealth of knowledge about practical techniques that we can all share as well.</p>
<h3 id="showcase"><a href="#showcase" class="header-link-alt">Showcase</a></h3>
<p>Quick - name 5 photographers whose work you love, that use free software.
Did you have trouble reaching five?
That’s another of the things that I would like to focus on here: showcasing amazing work from talented photographers that happen to use free software (and in some cases may be willing to share with us).</p>
<p>I even <a href="https://discuss.pixls.us/t/notable-fl-oss-photographers/139">started a thread on the forum</a> to try and note some amazing photographers.  I will try to work through that list and get them to open up and speak with us a bit about their work and process.</p>
<h2 id="by-us-for-us"><a href="#by-us-for-us" class="header-link-alt">By Us, For Us</a></h2>
<p>I am floored by how awesome the community has been.
As I mentioned on my blog, the main reason for me to write was to give something back to the community.
I learned so much for so long from others before me and the least I could do is try to help others as well.</p>
<p>This community will be what <strong>we</strong> make it.
Come help make it something awesome that we can all be proud of.</p>
<p>Go <a href="https://discuss.pixls.us">sign up</a> on the forum and let your voice be heard.</p>
<p>Have an idea for an article?  Let me know (in the <a href="https://discuss.pixls.us">forums</a> or by <a href="mailto:pat@patdavid.net">email</a>)!</p>
<h2 id="make-some-noise-"><a href="#make-some-noise-" class="header-link-alt">Make Some Noise!</a></h2>
<p>Finally, we are just starting out and are a small community at the moment.
If you’re feeling up to it, please consider letting your social circles know that we’re here and what we’re trying to do.
The only way for the community to grow is for people to know it’s here in the first place!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ What's In Your Bag? ]]></title>
            <link>https://pixls.us/blog/2015/05/what-s-in-your-bag/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/05/what-s-in-your-bag/</guid>
            <pubDate>Mon, 04 May 2015 14:47:58 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/05/what-s-in-your-bag/MyBag.jpg" /><br/>
                 <h1>What's In Your Bag?</h1>  
                 <h2>Thoughts on a next article as well</h2>   
                <p>That lede image above is a quick (and dirty) snapshot of my go-to bag for running out the door.
I thought it might be fun to take a diversion and talk about gear a little bit.
Here’s the full image again:</p>
<!-- more -->
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/05/what-s-in-your-bag/MyBag.jpg" alt="Pat David Camera Bag Gear"/>
<figcaption>
My gear + bag.  Not shown, spare battery and memory cards.
</figcaption>
</figure>

<p>I had decided years ago on going with Micro Four Thirds (MFT) as a camera system because I like to travel light, and wanted options to adapt old lenses.
(On a side note, I’m still angry that there is not focus-peaking on the E-M5…)</p>
<p>My camera is the Olympus OM-D E-M5 (usually paired with the 12-50mm weatherproof lens when I’m out and about). 
This is a perfect combination for me, particularly when I’m chasing around a 4-year-old in who-knows-where situations.
A water and dust resistant lens/body is nice to have.</p>
<p>On the far left is a Promaster 5-in-1 reflector (41 inch).
These are usually relatively inexpensive and absolutely indispensable pieces of gear that can be adapted to many different situations.</p>
<p>I was recently reminded of this yet again while on a walk through some gardens…</p>
<figure>
<img src="https://pixls.us/blog/2015/05/what-s-in-your-bag/with-without-reflector2.jpg" alt="Dot with/without reflector" />
<figcaption>
Both images straight out of the camera, with/without reflector, same settings.
</figcaption>
</figure>

<p>The base of the reflector (without its covering) is a great translucent scrim that is handy to use with flashes if you need to soften things up a bit (and not lug around a softbox).</p>
<figure>
<img src="https://pixls.us/blog/2015/05/what-s-in-your-bag/dot-eyes-open.jpg" alt="Dot Eyes Open by Pat David" />
<figcaption>
Speedlight shooting into the reflector scrim, ~2 feet away from model, camera left.
</figcaption>
</figure>

<p>Speaking of flashes, you’ll also find my pair of Yongnuo YN-560 manual speedlights.
I’ve been slowly teaching myself <a href="https://www.flickr.com/photos/patdavid/sets/72157626359784129/">lighting with speedlights</a>, so rarely will I <em>not</em> have them with me.
To use them off-camera I also have a pair of Cactus V5 transceivers (one to transmit, one to receive).</p>
<p>Everything (except the reflector) packs nice and neatly into my wife’s old camera bag (a  precursor to the Domke bags) that I ran off with.
(That is, the old camera bag of my wife, <strong>not</strong> the old bag, my wife).</p>
<p>The bag is canvas and I waxed it myself to give it some water resistance.
This basically consisted of me melting some wax and brushing it all over the bag, then using a hairdryer to further melt it into the fibers.
This was a great DIY project that was relatively inexpensive (about $8USD for more wax than you’ll need) and relatively quick to do (just a few hours total).</p>
<h3 id="share-your-gear"><a href="#share-your-gear" class="header-link-alt">Share Your Gear</a></h3>
<p>I’d love to see what others are using out there!  Take a minute, snap a photo of your gear/bag, and share it with us.
Bonus points if you arrange it by <a href="http://en.wikipedia.org/wiki/Knoll_%28verb%29">knolling</a>.</p>
<h2 id="sharpening"><a href="#sharpening" class="header-link-alt">Sharpening</a></h2>
<p>I was recently poked by someone on the <a href="https://mail.gnome.org/archives/gimp-web-list/">GIMP-Web mailing list</a> to update one of the tutorials on <a href="http://www.gimp.org/tutorials">www.gimp.org</a> about sharpening.
I thought about it, then decided it may be better just to write some new material from scratch.</p>
<p>I figured why stop there?  I might as well make it a fun post here taking a look at what methods we have for sharpening, why you may (or may not) want to use them, and where in the processing pipeline it makes sense.
(While still pushing the GIMP-specific sharpening thoughts to a separate tutorial there).</p>
<p>If anyone has thoughts around this or just wants to share what they’re doing, please let us know in the comments below.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Back to Writing ]]></title>
            <link>https://pixls.us/blog/2015/04/back-to-writing/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/04/back-to-writing/</guid>
            <pubDate>Wed, 22 Apr 2015 17:00:15 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/04/back-to-writing/Tacky.jpg" /><br/>
                 <h1>Back to Writing</h1>  
                 <h2>Hiccups and Other Things</h2>   
<p>I took a bit of a break from writing articles to <a href="https://pixls.us/blog/2015/04/a-forum">work on</a> getting <a href="https://discuss.pixls.us">the forums</a> up and running.
We are almost back to a stable enough point that I want to turn my attention back to writing.</p>
<p>I say almost because there are still a few wonky things that I’d like to work out.
There is still a little bit of an issue with the comment embeds from the forum for full-blown <a href="https://pixls.us/articles/">articles</a>.</p>
<h2 id="ssl-and-https"><a href="#ssl-and-https" class="header-link-alt">SSL and https</a></h2>
<p>One of the reasons for the possibly strange behavior for articles in the forums is that darix convinced me to go ahead and get SSL setup for the domains.  So working on it yesterday we got it running for both the <a href="https://pixls.us">main site here</a>, as well as at <a href="https://discuss.pixls.us">the forums</a>.</p>
<p>You should notice an indicator in your browser that your connection is over https somewhere (a little green lock?) for this page right now.
I’ve set all connections to <a href="https://pixls.us">PIXLS.US</a> to use SSL now (same thing with the forums).</p>
<!-- more -->
<p>The only drawback was that we uncovered some strange behavior when importing posts into the forum for embedding.
If you care, the way things work is that:</p>
<ol>
<li>I publish an RSS feed of all of the content on the site (<a href="https://pixls.us/feed.xml">https://pixls.us/feed.xml</a> if you’re curious).</li>
<li>Every hour the forum polls this feed.</li>
<li>If there are new posts, the forum imports them and creates a new topic.
This is what you see under the “PIXLS.US” category on the forum.</li>
<li>Some small code on each post (on the website) references the forum topic entry to embed as comments.</li>
</ol>
<p>There have been a couple of strange things going on with importing those posts, but darix resolved most of them.
The only thing that is still strange is the article objects themselves, which at the moment show up twice in the forum.</p>
<p>I should note that all of this could very well be caused by my writing of the RSS feeds.
I know just enough to be dangerous and annoying to those who know better (this should probably be my epitaph).</p>
<blockquote>
<p><strong>Here Lies Pat David</strong></p>
<p>He knew just enough to be dangerous and annoy those who knew better…</p>
</blockquote>
<p>Fitting!</p>
<p>On the good side, thanks to the efforts of those smarter than I, even though we had some import hiccups, things have continued to run smoothly for the most part.
The correct comments were maintained in the correct topic threads, and those were in turn correctly associated with the posts they belonged to (well, <em>blog</em> posts at any rate).</p>
<p>Coming soon(<em>ish</em>) - creating showcase posts!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Skin Retouching with Wavelet Decompose ]]></title>
            <link>https://pixls.us/articles/skin-retouching-with-wavelet-decompose/</link>
            <guid isPermaLink="true">https://pixls.us/articles/skin-retouching-with-wavelet-decompose/</guid>
            <pubDate>Mon, 20 Apr 2015 16:47:07 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-after-opt.jpg" /><br/>
                 <h1>Skin Retouching with Wavelet Decompose</h1>  
                 <h2>A better alternative to smearing textures</h2>   
                <p>Skin retouching is a delicate art.</p>
<p><em>Effective</em> skin retouching can feel like a black art.</p>
<p>There have been various methods detailed in the past for ways to “smooth” skin in <a href="http://www.gimp.org">GIMP</a>.
Those methods ranged from disappointing at best to downright ridiculous at worst.
The disappointing methods were simply a product of the best methods available at the time.
The ridiculous ones seemed to be due to a lack of subtlety.</p>
<h2 id="subtlety">Subtlety<a href="#subtlety" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Subtlety is a key requirement when approaching skin retouching.
There are certainly exceptions when required (high-fashion for instance) but it should always be approached from a minimalist perspective first.</p>
<p>Too often retouching skin is approached with a very heavy hand. 
In an attempt to <em>“clean”</em> the skin many will chase every last drop of detail out of an image, resulting in a fake and overly smoothed result (making mannequins).
<strong>This is bad</strong>.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi-Oversmooth.jpg" width="640" height="640" alt="Oversmooth Mairi" />
<figcaption>
To reiterate: <strong>This is bad</strong>.
</figcaption>
</figure>

<p>Real skin has pores, bumps, spots, color, and other interesting things going on. 
The goal shouldn’t be to remove those characteristics, but rather to make some of them less pronounced <em>as needed</em>.
A good rule of thumb is: </p>
<blockquote>
<p>“Never do more than good makeup can achieve.”</p>
</blockquote>
<p>Of course, some makeup artists are magicians. 
In fact, it can be very helpful to go out and research how they work and what their process and reasons are.
This can help you understand better how to approach all manner of retouching, particularly when using techniques like dodging/burning and color theory (as it relates to makeup and skin).</p>
<p>Keep in mind the context as well.
Candid images may only require a very minimum of retouching (<em>if at all</em>), while a fashion shoot may desire a stronger application.
For the best results, it helps to have a clear vision of what you want to achieve.</p>
<h2 id="tools">Tools<a href="#tools" class="header-link"><i class="fa fa-link"></i></a></h2>
<h3 id="blurring">Blurring<a href="#blurring" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>One method of smoothing skin that can be found in many old tutorials on the subject involves using some variation of blurring the base image and masking the blurred regions into the image.
In theory the idea may seem sound, but it fails quickly on closer inspection.</p>
<p>A combination of the broad effects of blurring coupled with the indiscriminate application across all the textures in the skin make this a less than ideal approach.
All of those pores, spots, bumps, and colors get lost when using an indiscriminate function such as blurring the image.
While there may be a desire to remove some of those details, visually our eyes expect there to be some sort of texture and detail there.
Loss of those details is what pushes the results into “mannequin” territory.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mannequin.jpg" width="960" height="640" alt='Mannequin by Horia Varlan'/>
<figcaption>
Mannequin territory<br/>
<em>“White male mannequin head in storefront window”</em> by <a href='https://www.flickr.com/photos/horiavarlan/4269156697'>Horia Varlan</a> (<a href='https://creativecommons.org/licenses/by/2.0/' class='cc'>cb</a>)
</figcaption>
</figure>

<p>Overall, this method should not even be considered as an option for skin retouching.
The results are never good, and the method is indiscriminately destructive to the image.</p>
<h3 id="high-pass-low-pass-frequency-separation">High Pass/Low Pass Frequency Separation<a href="#high-pass-low-pass-frequency-separation" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>A slightly more advanced way to approach skin retouching is to use a “high pass/low pass” (or “high frequency/low frequency”, or just “frequency separation”) technique to separate the image into two layers.
One layer would contain all of the high-frequency (fine) details while the other layer would contain the low-frequency (coarse) information.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi-Base.jpg" height="640" width="640" alt="Mairi Base by Pat David"/>
<figcaption>
Mairi 
</figcaption>
</figure>

<p>The resulting layers can look strange to those not accustomed to seeing them.
The important thing to notice is the ability to isolate all high frequency details on a separate layer.
This allows us to independently modify the colors/tones from the details.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi-HFLF.jpg" width="960" height="480" alt='Mairi Frequency Separation'/>
<figcaption>
Low Frequency (left) and High Frequency (right)<br/>
Created with a blur radius of 15px
</figcaption>
</figure>



<h4 id="create-frequency-separated-layers">Create Frequency Separated Layers<a href="#create-frequency-separated-layers" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Creating the frequency separated layers is relatively easy in GIMP.
Starting with the base image layer:</p>
<ol>
<li>Duplicate base layer<br/>
[<em>Layer &rarr; Duplicate Layer</em>]<ul>
<li>Name it “LF”</li>
</ul>
</li>
<li>Apply a Gaussian Blur to the “LF” layer<br/>
[<em>Filters &rarr; Blur &rarr; Gaussian Blur</em>]<ul>
<li>Choose an appropriate radius to isolate your desired high-frequency details (15px in the example)</li>
<li>The blur radius is ~1.5% of the width of the face</li>
</ul>
</li>
<li>Change “LF” layer blend mode to <em>Grain Extract</em></li>
<li>Create a new layer from visible<br/>
[<em>Layer &rarr; New from Visible</em>]<ul>
<li>Name it “HF”</li>
<li>Change “HF” layer blend mode to <em>Grain Merge</em></li>
</ul>
</li>
<li>Change “LF” layer blend mode back to <em>Normal</em></li>
</ol>
<p>Visually, the result should look identical to the original base layer.
Technically, though, the separated frequency layers now allow for much finer targeted editing.
The layer stack for the image will now have an HF layer (in <em>Grain Merge</em> blend mode) over an LF layer:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/HFLF Layers.png" alt="GIMP Layers Dialog Frequency Separation" />
<figcaption>
Layers after going through a frequency separation.
</figcaption>
</figure>

<p>The choice of radius for the <em>Gaussian Blur</em> operation determines the level of detail that gets separated from the low-frequency layer.  Smaller blur radii will isolate finer details (conversely, larger radii will include coarser details).</p>
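<p>The arithmetic behind the <em>Grain Extract</em>/<em>Grain Merge</em> pair above is easy to verify. In GIMP’s legacy 8-bit blend modes, Grain Extract computes <code>base - blur + 128</code> and Grain Merge computes <code>base + layer - 128</code>, so the recombination is exact. Below is a toy numpy sketch of the idea (a crude 3&times;3 mean filter stands in for the Gaussian Blur, and the clipping GIMP applies at 0/255 is ignored):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8)).astype(np.int32)  # toy 8-bit image

# Crude low-pass stand-in for the Gaussian Blur step: a 3x3 mean filter.
padded = np.pad(image, 1, mode="edge")
lf = sum(padded[y:y + 8, x:x + 8] for y in range(3) for x in range(3)) // 9

# Grain Extract (legacy, 8-bit): HF = image - LF + 128.
# (GIMP clips the result to [0, 255]; we skip the clipping here so the
# round trip stays exact.)
hf = image - lf + 128

# Grain Merge recombines: LF + HF - 128 reproduces the base layer exactly.
recombined = lf + hf - 128
assert np.array_equal(recombined, image)
```

<p>The same relationship is why the “HF” layer set to <em>Grain Merge</em> over the “LF” layer looks identical to the original image.</p>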
<h4 id="skin-retouching-with-frequency-separation">Skin Retouching with Frequency Separation<a href="#skin-retouching-with-frequency-separation" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Consider now the results from the separation.  In particular notice which types of skin features occur in each layer.</p>
<p>Pores, light wrinkles, crow’s feet, and small details are separated onto the HF layer, while the larger skin tones remain on the LF layer.
Overall skin tones can be evened out by smoothing the tones on the low-frequency layer.</p>
<p class='aside'>
<span>A note on smoothing</span>
There are various ways of softening details on the different layers.
<br/>
The standard <em>Gaussian Blur</em> is one method that works well and quickly.
<span class='Cmd'>Filters &rarr; Blur &rarr; Gaussian Blur…</span>
<br/>
A better method might be using a <em>Selective Gaussian Blur</em> to only blur certain areas (based on the value difference between the pixel in consideration and its neighbors).
<span class='Cmd'>Filters &rarr; Blur &rarr; Selective Gaussian Blur…</span>
<br/>
If <a href="http://gmic.sourceforge.net/">G’MIC</a> is installed, there is also access to a <em>bilateral blur</em> filter, an edge-preserving blur similar to <em>Surface Blur</em> in Adobe Photoshop.
<span class='Cmd'>Filters &rarr; G’MIC…<br/>
Repair &rarr; Smooth [bilateral]</span>
</p>
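<p>For the curious, the general idea behind an edge-preserving bilateral blur can be sketched in a few lines of numpy. This is only an illustration of the concept, not G’MIC’s actual implementation; the parameter names merely mirror the dialog labels:</p>

```python
import numpy as np

def bilateral_blur(img, spatial_var=10.0, value_var=7.0, iterations=2, radius=3):
    """Toy edge-preserving blur: each pixel becomes a weighted mean of its
    neighborhood, with weights falling off for both spatial distance and
    value difference.  Large value differences (edges) get near-zero weight,
    so edges survive while gentle tonal variations are smoothed."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial_w = np.exp(-(ys**2 + xs**2) / (2.0 * spatial_var))
    out = img.astype(np.float64)
    for _ in range(iterations):
        padded = np.pad(out, radius, mode="edge")
        acc = np.zeros_like(out)
        norm = np.zeros_like(out)
        for dy in range(2 * radius + 1):
            for dx in range(2 * radius + 1):
                shifted = padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
                w = spatial_w[dy, dx] * np.exp(-(shifted - out)**2 / (2.0 * value_var))
                acc += w * shifted
                norm += w
        out = acc / norm
    return out

# A hard 0/200 edge survives the blur almost untouched.
step = np.zeros((10, 10))
step[:, 5:] = 200.0
smoothed = bilateral_blur(step)
assert smoothed[:, :5].max() < 1.0 and smoothed[:, 5:].min() > 199.0
```

<p>A hard edge survives because neighbors on the far side of the edge receive a near-zero value weight, while gentle tonal variations (like uneven skin tones) are averaged away.</p>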

<p>When considering a face for skin retouching, it’s often best to treat each general contour area of the face separately.
This is mostly because different areas of the skin have different characteristics (<em>e.g.</em>, forehead wrinkles are often at a different scale than crow’s feet or smile lines).</p>
<p>Below is one example of a good starting point for contour consideration when smoothing.
The key is to vary the smoothing intensity for each region to obtain a good result.
A change may not always be required, but it’s a good habit to get into for when one is needed.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi Smooth Contour.jpg" width="640" height="640" alt="Mairi Contour Smoothing Areas"/>
<figcaption>
Areas of smoothing consideration
</figcaption>
</figure>

<p>A good place to start is often to address any “blotchiness” or uneven tones in the skin.
(Ideally this would be addressed through the use of foundation makeup.)
As seen above, those types of tones can be found on the Low Frequency layer.</p>
<p>Following the contour areas from above, a <em>Bilateral Blur</em> (from G’MIC) is used to smooth the regions.
When using the <em>Free Select Tool</em> to select a region, remember to enable <em>Feather edges</em> in the tool options to make a smooth transition from the working area to the surrounding image.</p>
<p><span class='Cmd'>Filters &rarr; G’MIC…<br/>
Repair &rarr; Smooth [bilateral]</span></p>
<p>The defaults of <em>spatial variance</em>: 10, <em>value variance</em>: 7, and <em>iterations</em>: 2 are used.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi LF Smooth.jpg" alt="Mairi Low Frequency Smoothed" data-swap-src="Mairi-Base.jpg" width="640" height="640" />
<figcaption>
After smoothing the LF layer with a bilateral blur<br/>
Click to compare to original
</figcaption>
</figure>

<p>Visually, smoothing the Low Frequency skin tones provides a marked improvement to the perceived quality.
Importantly, notice that none of the finer details have been modified (wrinkles, pores, etc…).</p>
<p>At this point, regular workflows could still be used such as spot healing or dodging &amp; burning (on either LF or HF layers as needed).</p>
<h4 id="hf-lf-summary">HF/LF Summary<a href="#hf-lf-summary" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>High/low frequency separation is a great approach to skin retouching due to its ability to let the retoucher work in discrete layers.</p>
<p>If one wants to isolate a series of frequencies, though, things get a little trickier.
It would require generating an HF/LF pair separately for each detail size to be isolated.
The workflow would be along the lines of: separate, retouch, separate again at a different size, retouch.  Rinse and repeat.</p>
<p>It turns out that there is already a very handy way to isolate multiple frequencies at once and still have a visual means of combining them easily to see the edits as they are being made:
<strong>Wavelet Decompose</strong>.</p>
<h3 id="wavelet-decompose">Wavelet Decompose<a href="#wavelet-decompose" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Wavelet Decompose allows you to generate multiple High Frequency layers (and a Low Frequency “Residual” layer) all at once.
Each of the HF layers uses the <strong>Grain Merge</strong> layer blending mode so that the composite image is reconstituted correctly.
This allows the retoucher to make modifications to any of the scale (frequency) layers while seeing the results immediately on the canvas.</p>
<p class='aside'>
<span>Getting Wavelet Decompose [Plugin]</span>
The original plugin for Wavelet Decompose by the user <em>marcor</em> on the <a href="http://registry.gimp.org">GIMP registry</a> can be found here:
<span class='Cmd'><a href="http://registry.gimp.org/node/11742">Wavelet Decompose</a> [registry.gimp.org]</span>
<br/>
Once installed the command is:
<span class='Cmd'>Filters &rarr; Generic &rarr; Wavelet Decompose …</span>
<br/>

<span>Getting Wavelet Decompose [Script-Fu]</span>
There is also a Script-Fu version by Christoph A. Traxler that can be downloaded from us here:
<span class='Cmd'><a href="wavelet-decompose.scm">Wavelet Decompose Script-Fu</a> [pixls.us]</span>
<br/>
Once installed the command is:
<span class='Cmd'>Image &rarr; Wavelet Decompose …</span>

</p>


<p>The advantage of a wavelet decomposition over a simple HF/LF separation shows in cases where there are details of a different size than your HF layer that you still want to isolate.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi Wavelet Scales Horiz.jpg" alt='Mairi Wavelet Decomposed Scales' data-swap-src='Mairi%20Wavelet%20Scales%20Horiz%20Normal.jpg' width='960' height='640' />
<figcaption>
Wavelet Decomposed to 5 levels<br/>
Click to view equalized levels and enhance details
</figcaption>
</figure>

<p>Examining the equalized version of the previous image immediately shows the various scale features isolated through the decomposition.
In particular, the top row shows the finest details while the bottom row shows broad details with the color residual layer last.</p>
<p>With the various detail scales separated, the retoucher can easily make modifications on any given scale while seeing the results directly on the canvas.
This is due to the detail scale layers being set to “Grain Merge” blending mode in GIMP.</p>
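<p>Structurally, the decomposition is just a stack of differences between successively stronger blurs. The sketch below uses a simple box blur with the radius doubling per level (the actual plugin uses wavelet kernels, but any low-pass filter shows the telescoping structure), and demonstrates that the Grain Merge recombination is exact:</p>

```python
import numpy as np

def box_blur(img, radius):
    """Crude stand-in for the plugin's per-level blur."""
    padded = np.pad(img, radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def wavelet_decompose(img, levels=5):
    """Each detail scale is the difference between two successive blurs,
    with the blur radius doubling per level; the final blur is the
    low-frequency residual."""
    scales = []
    current = img.astype(np.float64)
    for level in range(levels):
        blurred = box_blur(current, 2 ** level)
        scales.append(current - blurred)  # details isolated at this scale
        current = blurred
    return scales, current  # current is now the residual

image = np.random.default_rng(1).random((32, 32)) * 255
scales, residual = wavelet_decompose(image)
# Grain Merge recombination: residual + all detail scales = original, exactly.
assert np.allclose(sum(scales) + residual, image)
```

<p>Because every term except the residual telescopes away, edits made on one scale layer show up in the composite without disturbing the others.</p>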
<h2 id="application">Application<a href="#application" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Using wavelet scales for retouching works much like using a frequency separation.
The major difference is choosing which detail scale to apply the smoothing operations to, and at what intensity.</p>
<p>I have found that a good workflow is to generally start at the largest detail scale.
Experiment with smoothing methods and parameters until a good result is achieved without going too far.
If needed, repeat the operation on the next smaller detail scale with reduced parameters.</p>
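<p>That per-scale workflow can also be sketched numerically: editing only the coarsest detail scale changes the recombined image without touching the finer scales or the residual. In the toy sketch below, a simple attenuation stands in for the bilateral blur, and a box blur stands in for the plugin’s wavelet kernel:</p>

```python
import numpy as np

def box_blur(img, radius):
    # simple mean filter standing in for the per-level wavelet blur
    padded = np.pad(img, radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

image = np.random.default_rng(2).random((16, 16)) * 255

# Decompose into 5 detail scales plus a residual (radius doubles per level).
scales, current = [], image.astype(np.float64)
for level in range(5):
    blurred = box_blur(current, 2 ** level)
    scales.append(current - blurred)
    current = blurred

# "Smooth" only the largest detail scale: attenuation stands in for the
# bilateral blur used in the article.
scales[4] = scales[4] * 0.5

# Recombine.  Only the scale-5 edit shows up; the finer scales and the
# residual contribute exactly what they did before.
edited = sum(scales) + current
assert np.allclose(edited - image, -scales[4])
```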
<p>For this example, running the <em>Bilateral Blur</em> from G’MIC with the same values as in the <strong>Frequency Separation</strong> example above yields:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi Wavelet Decompose 5 Smooth.jpg" alt="Mairi Wavelet Decompose Smooth 5 by Pat David" width="640" height="640" />
<figcaption>
Click to compare:
<span class="toggle-swap" data-fig-swap="Mairi-Base.jpg">Original</span>
<span class="toggle-swap" data-fig-swap="Mairi LF Smooth.jpg">Low Frequency Smooth</span>
<span class="toggle-swap" data-fig-swap="Mairi Wavelet Decompose 5 Smooth.jpg">Wavelet Smooth</span>
</figcaption>
</figure>

<p>The smoothing of the largest detail scale produces pleasing skin tones without removing too many details. </p>
<p>Having the detail scales separated out also allows for spot modifications without disrupting the textures on the other scale layers.
For example, there is some slight skin discoloration on the model’s lit cheek:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi Wavelet Residual Cheek Before Highlight.jpg" alt="Mairi Wavelet Residual Cheek Before Highlight.jpg" width="640" height="640" />
<figcaption>
A small color tone difference to repair.
</figcaption>
</figure>

<p>By working on the color (low-frequency) <strong>Residual</strong> layer, the color tones can be evened out using a <em>Heal Brush</em> and sampling from nearby skin.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Mairi Wavelet Residual Cheek After.jpg" alt="Mairi Wavelet Residual Cheek After" data-swap-src="Mairi-Wavelet-Residual-Cheek-Before.jpg" width="640" height="640" />
<figcaption>
After healing the area on the <strong>Residual</strong> color layer<br/>
Click to compare to original
</figcaption>
</figure>

<p>Notice in particular that the fine details that make up the skin composition here are not modified.
Wrinkles, pores, and skin texture are kept intact while the underlying color tones for that region are blended smoothly into the surrounding area.</p>
<p>This same technique can come in very handy for lightening dark circles under the eyes, for instance.</p>
<h3 id="spot-healing">Spot Healing<a href="#spot-healing" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Once the skin tones have been smoothed as desired, work can continue with spot healing discrete problems as needed.
Discrete skin blemishes are best approached with a spot healing tool after the global skin tones have been modified (to avoid having to apply the healing on each of the detail layers one at a time).</p>
<h2 id="example-nikki">Example: Nikki<a href="#example-nikki" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>A good image to see what this approach can accomplish is the lede image to this article, <a href="https://www.flickr.com/photos/patdavid/14490236250/">Nikki</a>.
This is a crop from the raw image untouched:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-Base-crop.jpg" alt="Nikki Base" width="640" height="640" />
<figcaption>
Crop from <em>Nikki</em>, no retouching.
</figcaption>
</figure>

<p>To follow along you can <a href="Nikki-Base-crop-noresize.jpg">download the full-size base image</a> (360KB).</p>
<p>Running Wavelet decompose (plugin) against the image with the default of 5 scales,</p>
<p><span class="Cmd">Filters &rarr; Generic &rarr; Wavelet decompose …</span></p>
<p>will leave the image with layers that look like this:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/WD Layers.png" alt="GIMP Layers Wavelet Decompose" />
<figcaption>
Detail scales and residual layers from Wavelet decompose
</figcaption>
</figure>



<h3 id="what-we-re-targeting">What We’re Targeting<a href="#what-we-re-targeting" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>After running a wavelet decompose on a layer there is a very simple method of exaggerating the details that will be targeted for smoothing the skin tones.
Simply toggle off the visibility of the <em>Wavelet residual</em> layer:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-Base-crop-no-residual.jpg" alt="Nikki Base" width="640" height="640" />
<figcaption>
<em>Nikki</em> with only the detail scales visible over the base image (no residual layer).
</figcaption>
</figure>

<p>I <strong><em>highly</em></strong> recommend that you do <em>not</em> do this with the subject in the room!
Nobody looks good when the residual scale is removed from the image stack…</p>
<p>But it does nicely exaggerate the types of tonal variations that are prime candidates for smoothing and suppression.</p>
<h3 id="regions">Regions<a href="#regions" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Similar to the regions previously shown, we will walk through the retouching process by facial contour: forehead, nose, cheeks, chin, and lip.</p>
<p>I’ll normally use the <em>Free Select Tool</em> with a feathered radius around one-half an iris length (~30px in this case).
The radius value is mostly arbitrary and serves only to smooth the transition from areas being worked on (so adjust to taste).
I will also usually select regions as I go and remember to save the selections to a channel to make it easier to come back to them later if desired: </p>
<p><span class="Cmd">Select &rarr; Save to Channel</span></p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-Base-crop-regions.jpg" alt="Nikki Base Regions" width="640" height="640" />
<figcaption>
Regions for smoothing consideration.
</figcaption>
</figure>

<p><em>Wavelet Decompose</em> is run on the layer using the default number of wavelet detail scales: <strong>5</strong>.</p>
<h3 id="forehead">Forehead<a href="#forehead" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>With the forehead region selected a first pass can be made to smooth out the tones.
As mentioned previously, we’ll start on the largest detail scale <em>Wavelet scale 5</em>.</p>
<h4 id="wavelet-scale-5">Wavelet Scale 5<a href="#wavelet-scale-5" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Due to the size of the blemishes in this area, a slightly more aggressive smoothing amount can be used and adjusted to taste.  A <em>Bilateral Blur</em> can be used again, with slightly higher values than default:</p>
<ul>
<li>Spatial variance: 15</li>
<li>Value variance: 12</li>
<li>Iterations: 2</li>
</ul>
<p>Those parameters do a good job of initially dampening the skin tones here:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-forehead-w5.jpg" alt="Nikki Forehead Wavelet 5" data-swap-src="Nikki-Base-crop.jpg" width="640" height="640" />
<figcaption>
<em>Bilateral Blur</em> on Wavelet scale 5 results<br/>
Click to compare to original
</figcaption>
</figure>


<h4 id="wavelet-scale-4">Wavelet Scale 4<a href="#wavelet-scale-4" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>There are still some uneven tones that were not affected by the smoothing on scale 5.
These are mostly smaller tones around blemishes.
Continuing with the same region, but now working on <em>Wavelet scale 4</em>, should help dampen those even further.</p>
<p>Using the <em>bilateral blur</em> again with smaller parameter values than previously:</p>
<ul>
<li>Spatial variance: 7</li>
<li>Value variance: 4</li>
<li>Iterations: 1</li>
</ul>
<p>These values are determined through experimentation on the image. They are tuned in iterations until the result is visually pleasing, then dialed back a little bit more.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-forehead-w5-w4.jpg" alt="Nikki Forehead Wavelet 5 & 4" data-swap-src="Nikki-Base-crop.jpg" width="640" height="640" />
<figcaption>
<em>Bilateral Blur</em> on Wavelet scale 4 results<br/>
Click to compare to original
</figcaption>
</figure>

<p>At this point, most of the skin tones have been evened out and what is left is mostly discrete skin blemishes that can be cleaned up with a heal tool later.
Working on just two wavelet scales significantly decreased the prominence of the blemishes and improved the overall smoothness of the tones.</p>
<h3 id="nose">Nose<a href="#nose" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>There is not as much smoothing required on the nose (vs. the forehead).
An initial pass on <em>Wavelet scale 5</em> with the default <em>bilateral blur</em> values:</p>
<ul>
<li>Spatial variance: 10</li>
<li>Value variance: 7</li>
<li>Iterations: 2</li>
</ul>
<p>helps to even the underlying tones nicely.
A second pass on <em>Wavelet scale 4</em> with much lower values on the blur help to smooth the slightly finer details as well:</p>
<ul>
<li>Spatial variance: 5</li>
<li>Value variance: 2</li>
<li>Iterations: 1</li>
</ul>
<p>These two passes result in this for the nose:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-nose-w5-w4.jpg" alt="Nikki Nose Wavelet 5 & 4" data-swap-src="Nikki-crop-forehead-w5-w4.jpg" width="640" height="640" />
<figcaption>
Smoothing on scales 5 &amp; 4 results<br/>
Click to compare to original
</figcaption>
</figure>



<h3 id="cheeks">Cheeks<a href="#cheeks" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Similar to the first pass on the nose, the cheeks can use an initial smoothing on <em>Wavelet scale 5</em> with the default values for the <em>bilateral blur</em>.</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-cheeks-w5.jpg" alt="Nikki Cheeks Wavelet 5" data-swap-src="Nikki-crop-nose-w5-w4.jpg" width="640" height="640" />
<figcaption>
Smoothing the cheeks on wavelet scale 5<br/>
Click to compare to original
</figcaption>
</figure>

<p>To finish the cheeks, apply a slight smoothing on <em>scale 4</em> with low values:</p>
<ul>
<li>Spatial variance: 5</li>
<li>Value variance: 2</li>
<li>Iterations: 1</li>
</ul>
<p>This smooths just a bit more than the previous step, usually without being too much (if it is too much, dial it back of course).</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-cheeks-w5-w4.jpg" alt="Nikki Cheeks Wavelet 5 & 4" data-swap-src="Nikki-crop-cheeks-w5.jpg" width="640" height="640" />
<figcaption>
Smoothing the cheeks on wavelet scale 4<br/>
Click to compare to previous step 
</figcaption>
</figure>

<p>When clicking to compare in the above image, notice that the results of smoothing with low values on <em>scale 4</em> are subtle, but they are there.
Combined with the previous step, the overall result is a visually much smoother looking complexion without smearing details.</p>
<h3 id="chin-lip">Chin &amp; Lip<a href="#chin-lip" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Considering both the upper lip and chin, as before a good starting point is to try the default <em>bilateral blur</em> values on the largest scale (<em>scale 5</em>).</p>
<ul>
<li>Spatial variance: 10</li>
<li>Value variance: 7</li>
<li>Iterations: 2</li>
</ul>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-chin-lip-w5.jpg" alt="Nikki Chin & Lip Wavelet 5" data-swap-src="Nikki-crop-cheeks-w5-w4.jpg" width="640" height="640" />
<figcaption>
Smoothing the chin with default <em>bilateral blur</em> values
<br/>
Click to compare to original 
</figcaption>
</figure>

<p>Similar to the previous step, a further refinement of the skin tones can be achieved by smoothing on the next detail scale down, <em>wavelet scale 4</em>.
As before, using slight values:</p>
<ul>
<li>Spatial variance: 5</li>
<li>Value variance: 2</li>
<li>Iterations: 2</li>
</ul>
<p>will produce a nice finish to the detail tones in this area:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-chin-lip-w5-w4.jpg" alt="Nikki Chin Wavelet 5 & 4" data-swap-src="Nikki-crop-chin-lip-w5.jpg" width="640" height="640" />
<figcaption>
Further refining the chin and lip with smaller blur values on wavelet scale 4
<br/>
Click to compare to previous step 
</figcaption>
</figure>



<h3 id="results-wavelet-smoothing-only-">Results (Wavelet Smoothing Only)<a href="#results-wavelet-smoothing-only-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This process relied only on smoothing the tones on the largest detail scales, 4 &amp; 5.
Without doing any targeted modifications (beyond working in regions), here are the final results:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-crop-chin-lip-w5-w4.jpg" alt="Nikki Wavelet Final" data-swap-src="Nikki-Base-crop.jpg" width="640" height="640" />
<figcaption>
End result working only on wavelet scales 4 &amp; 5
<br/>
Click to compare to original 
</figcaption>
</figure>

<p>This is a fantastic base to continue working from (particularly when compared to the starting original image).
A few areas of spot healing as needed would be enough to make a great final image from here.</p>
<blockquote>
<p>The concept to keep in mind when working with Wavelet scales is to build up a series of small changes that together will produce a pleasing visual result.</p>
</blockquote>
<p>At this point only a few minor spot corrections and some color toning are required to reach a pleasing final result:</p>
<figure>
<img src="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/Nikki-Final.jpg" alt="Nikki Final" data-swap-src="Nikki-crop-chin-lip-w5-w4.jpg" width="640" height="640" />
<figcaption>
Final result after spot corrections and color toning
<br/>
Click to compare to Wavelet smoothing only 
</figcaption>
</figure>

<hr>
<h2 id="moderation">Moderation<a href="#moderation" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>As with many things in life, moderation is the key here.
Visually it can be helpful to occasionally check your image results zoomed far out.
If an image looks too smooth when zoomed out then dial it back.</p>
<p>Remember that this is an inherently <em>destructive</em> process and should be used as little as needed to get a desired result.</p>
<h2 id="resources">Resources<a href="#resources" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>You can download the sample <em>Mairi</em> and <em>Nikki</em> GIMP .XCF files used to create the examples above here:</p>
<ul>
<li><a href="https://s3.amazonaws.com/pixls-files/Mairi-Example.xcf.bz2">Mairi</a> <sup>[<strong>34.4MB</strong>]</sup></li>
<li><a href="https://s3.amazonaws.com/pixls-files/Nikki-Example.xcf.bz2">Nikki</a> <sup>[<strong>7.7MB</strong>]</sup></li>
</ul>
<p>These are compressed GIMP .xcf files (hence the .xcf.bz2 file extensions).
They should open directly in GIMP (created in 2.8.14) without problem.</p>
<h2 id="further-reading">Further Reading<a href="#further-reading" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>This tutorial is a combination of material originally posted here: </p>
<ul>
<li><a href="http://blog.patdavid.net/2011/12/getting-around-in-gimp-skin-retouching.html">Getting Around in GIMP - Skin Retouching (Wavelet Decompose)</a></li>
<li><a href="http://blog.patdavid.net/2014/07/wavelet-decompose-again.html">Getting Around in GIMP - Wavelet Decompose (Again)</a></li>
<li><a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-postprocessing.html#GIMP-Skin">The Open Source Portrait (Postprocessing)</a></li>
</ul>
<p>The original wavelet decompose plugin from user <em>marcor</em> on <a href="http://registry.gimp.org/">registry.gimp.org</a> (the one I use usually):</p>
<ul>
<li><a href="http://registry.gimp.org/node/11742">Wavelet Decompose</a></li>
</ul>
<p>A Script-Fu version of Wavelet Decompose by Christoph A. Traxler.
Place the .scm file into your scripts folder and the menu option “Wavelet Decompose …” will be under the <strong>Image</strong> menu:</p>
<ul>
<li><a href="wavelet-decompose.scm">Wavelet Decompose Script-Fu</a></li>
</ul>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ An Opportunity ]]></title>
            <link>https://pixls.us/blog/2015/04/an-opportunity/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/04/an-opportunity/</guid>
            <pubDate>Tue, 14 Apr 2015 02:59:55 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/04/an-opportunity/Mary Front.jpg" /><br/>
                 <h1>An Opportunity</h1>  
                 <h2>To help (and attract) new users!</h2>   
                <p>I think we are at an interesting time for digital imaging.
I came across this graph on <a href="http://petapixel.com/2015/04/09/this-is-what-the-history-of-camera-sales-looks-like-with-smartphones-included/">Petapixel</a> the other day that showed camera sales from 1947 - 2014:</p>
<p><img src="https://pixls.us/blog/2015/04/an-opportunity/graph.jpg" alt="CIPA Camera Production 1947-2014"></p>
<p>There was explosive growth in the <span style="color: #4e92db;"><em>Compact Digital</em></span> market right around 2000, likely driven by the advent of inexpensive compact digital cameras and the ubiquity of home computers.
It was relatively cheap to get a decent digital camera, and the cost per photo suddenly dropped to a previously unheard-of level (compared to shooting film).</p>
<p>This meant that substantially more people were now able to take and share photographs.</p>
<p>That precarious plummet after 2011 seems frightful for the photography industry as a whole, though.
The numbers from the graph would seem to indicate that production in 2014 dropped to <em>below</em> the values from 2001.</p>
<!-- more -->
<p>Petapixel had a follow-up article where photographer Sven Skafisk added in smartphone sales using data from Gartner Inc.: </p>
<p><img src="https://pixls.us/blog/2015/04/an-opportunity/chartwithsmartphones.png" alt="Camera Sales with Smartphones"></p>
<p>If that graph doesn’t describe an industry in the throes of change, then I don’t know what does.
It looks like the camera industry is less in decline and more in the midst of a big transition.</p>
<h3 id="so-what-"><a href="#so-what-" class="header-link-alt">So What?</a></h3>
<p>So why does this matter?
Because now, more than ever, there are a large number of people who may be interested in learning to process their photographs in some way.
As the cost and barrier to entry of photography as a hobby get lower, we see more and more people discovering the fun and joy of photography.</p>
<p>Couple that with the fact that the modern language of media consumption is primarily <em>visual</em> and I see a great opportunity brewing.</p>
<p>I feel this is important to <em>us</em> as free software users as it gives us an opportunity to help make people aware of free software (and its ideas).
New hobbyists will invariably look for an inexpensive way to get started processing photos and will almost always run into various free software projects at some point in the search.</p>
<p>It’s entirely on us as a community to make sure that there will be good resources to learn from.
If we do a good enough job, some of those folks will realize that free software more than meets their needs.
If we do a <em>really</em> good job, some of those people will become valuable parts of our communities.</p>
<h2 id="articles-have-comments-now-also"><a href="#articles-have-comments-now-also" class="header-link-alt">Articles Have Comments Now Also</a></h2>
<p>So I have now also enabled the comments for more than just blog posts.
They should now be working just fine on full articles as well.
So feel free to head over to <a href="http://lightsweep.co.uk">Ian Hex’s</a> neat <a href="http://pixls.us/articles/luminosity-masking-in-darktable">Luminosity Masking in darktable</a> tutorial and leave a comment to let him know what you thought of it!
(Or any of the other articles, too.)</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ A Forum ]]></title>
            <link>https://pixls.us/blog/2015/04/a-forum/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/04/a-forum/</guid>
            <pubDate>Fri, 10 Apr 2015 14:40:44 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/04/a-forum/Glades.jpg" /><br/>
                 <h1>A Forum</h1>  
                 <h2>For Discourse, if you will...</h2>   
                <p>After much hard work, that basically consisted of me annoying darix as often as possible, I am glad to say that we finally have a <a href="http://discourse.org">Discourse</a> instance set up!
<strong>Super Big</strong> thank you to darix for all the help!</p>
<h2 id="so-what-"><a href="#so-what-" class="header-link-alt">So What?</a></h2>
<p>What does this mean?
For starters, we now have a forum/community in place that we can start building around photography and free software.</p>
<p>A neat side-effect of this forum is that we now also have a way to embed forum threads as comments on posts (only blogposts at the moment - I’ll add them to articles shortly).</p>
<p>At the bottom of any blog post you should now see either a series of conversations with a <code>Continue Discussion</code> button or a link to <code>Start Discussion</code>.
Either of those buttons will take you to the actual forum to continue the conversation.
Replies to topics that are tied to posts will show up as a conversation at the bottom of the post (check the bottom of this post).</p>
<p>The site is <em>open</em> and <em>live</em> at the moment (if a bit bare-bones).
Feel free to drop by and create an account, comment on things, start new topics, etc.
I’m testing things out at the moment to see if I need to possibly bump the server specs in order to handle the loads (most likely).
(In the course of writing this, I went ahead and bumped the server RAM to 2GB - so it should run smoothly).</p>
<!-- more -->
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ All the Articles ]]></title>
            <link>https://pixls.us/blog/2015/03/all-the-articles/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/03/all-the-articles/</guid>
            <pubDate>Mon, 30 Mar 2015 22:31:36 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/03/all-the-articles/M31 - Adam Evans.jpg" /><br/>
                 <h1>All the Articles</h1>  
                 <h2>My God, It's Full of Articles</h2>   
                <p>I spent a little time struggling conceptually with how I wanted to categorize the different types of content I am planning for this site.
As I had <a href="https://pixls.us/blog/2015/02/some-updates/">previously noted</a>, I was already done with creating a <em>blog post</em> type of content, and had noted that I was working on how to show tutorials and ‘showcase’ types of posts.</p>
<p>Apparently, I had the answer in mind when I created that graphic last month.
If you notice, the two other types of content I am working on, <em>Tutorials</em> and <em>Showcase</em>, are both listed as type <strong>Articles</strong> on the graphic.</p>
<!-- more -->
<figure class='big-vid'>
<img src='http://pixls.us/blog/2015/02/some-updates/Some Updates 4.png' alt='site content types - Blog, Tutorials, Showcase' />
</figure>


<p>Of course.
There will only be two distinct types of content from the viewpoint of the site, <em>blogposts</em> and <em>articles</em>.
I will then use the features of the static-site generator I use for this site, <a href="http://metalsmith.io">metalsmith</a>, to manage the content presentation (tutorials, showcase, etc).
This will be handled through collections in metalsmith.</p>
<p>So at the end of the day, even though there will be a section of <em>Tutorials</em> and <em>Showcase</em> or whatever else I come up with (or someone else), the bottom line is that the base content object will be an <strong>Article</strong>.</p>
<p>I like this approach, as it leaves a large amount of flexibility while maintaining a nice sense of simplicity.
(Anything that lowers the barrier to writing and publishing material is good in my book).</p>
<h2 id="an-aside-on-collections-in-metalsmith"><a href="#an-aside-on-collections-in-metalsmith" class="header-link-alt">An Aside on Collections in Metalsmith</a></h2>
<p>This is just a note to myself in case I forget what I was on about with collections.</p>
<p>There are basically two ways of associating an <em>article</em> with a collection, through metadata on the file and through a matching pattern during compile time.
Unfortunately, as near as I can tell, you can’t do them both at the same time for the same collection type.</p>
<h3 id="metadata"><a href="#metadata" class="header-link-alt">Metadata</a></h3>
<p>Doing it through metadata association only requires that the collection type is called out in the front-matter of the file, like <code>collection: tutorial</code>.
For example, here’s a sample of the front-matter for this blog post:</p>
<pre><code class="lang-javascript">---
date: 2015-03-30T17:31:36-05:00
title: &quot;All the Articles&quot;
sub-title: &quot;My God, It&#39;s Full of Articles&quot;
lede-img: &quot;M31 - Adam Evans.jpg&quot;
author: &quot;Pat David&quot;
collection: blogposts
layout: blog-posts.hbt
---
</code></pre>
<p>In this case, the post will be added to the collection, <em>blogposts</em>.</p>
<h3 id="pattern-matching"><a href="#pattern-matching" class="header-link-alt">Pattern Matching</a></h3>
<p>In the <code>index.js</code> for the site, there’s a section for using collections where a pattern can be specified to add files:</p>
<pre><code class="lang-javascript">.use( collections({
    articles: {
        pattern: &#39;articles/*/index.html&#39;,
        sortBy: &#39;date&#39;,
        reverse: true
        }
}))
</code></pre>
<p>This glob pattern will simply add all the posts in a folder in the <code>articles/</code> directory to the collection, <em>articles</em>.</p>
<p>In fact, this is how I want to collect all <em>articles</em> on the site for archive purposes.
I’ll want a page on the site that lists every article published, regardless of further classification.
I feel that it is helpful for people searching for information to have a single page listing of all the material on the site (I did something similar with my blog by adding <a href="http://blog.patdavid.net/p/archive.html">an archive page</a>).</p>
<h2 id="happy-"><a href="#happy-" class="header-link-alt">Happy!</a></h2>
<p>So these pieces sort of falling into place make me happy because it means that I am much closer to having a setup how I would like it to be.
I can get started writing these other article types now without worrying as much about the back end.</p>
<p>Rather, I only need to focus on creating the landing pages for the content type (tutorials/, showcase/, etc…).
Yay!
More time to spend on writing new stuff!</p>
<h2 id="discourse"><a href="#discourse" class="header-link-alt">Discourse</a></h2>
<p><img src="https://pixls.us/blog/2015/03/all-the-articles/discourse.png" alt="Discourse Logo"></p>
<p>I had mentioned it previously, but darix on <code>#darktable</code> has been an immense help in testing out <a href="http://discourse.org">Discourse</a> for me.
He has gotten it to a point where it mostly works so the only thing holding me back from getting it rolled out is deciding how/where to host the instance.</p>
<p>If anyone has any thoughts or suggestions, I’m all ears!
To use darix’s Discourse setup, I’ll need at least openSUSE 13.
Otherwise, I could probably buy a droplet on DigitalOcean and host it there for now.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Deep Links ]]></title>
            <link>https://pixls.us/blog/2015/03/deep-links/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/03/deep-links/</guid>
            <pubDate>Tue, 24 Mar 2015 22:17:53 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/03/deep-links/More Mairi Experiments.jpg" /><br/>
                 <h1>Deep Links</h1>  
                 <h2>As well as a sort-of look for article/tutorial indexes</h2>   
                <figure>
<img src="https://pixls.us/blog/2015/03/deep-links/Deep-Thoughts.jpg" alt="Deep Thoughts by Jack Handey" title="I'm showing my age with this reference, aren't I?" />
</figure>

<p>I tried to find a good funny reference to <a href="http://en.wikipedia.org/wiki/Jack_Handey">Jack Handey</a> here but failed.
Which might be a good thing given how the reference likely shows my age…</p>
<p>I have been working on various bits of the site as well as finishing up a long-overdue article.
I’ve also been giving some thoughts in general about interesting ways to move forward with some ideas which I will bore you all with shortly.</p>
<!-- more -->
<h2 id="deep-linking"><a href="#deep-linking" class="header-link-alt">Deep Linking</a></h2>
<p>A while back I had <a href="https://pixls.us/blog/2014/09/an-about-page-and-help/#breaking-up-long-pages">some thoughts</a> around how best to format long form articles.
I finally decided to keep articles entirely on a single page as opposed to breaking them up across multiple pages.
Mostly this was because I know I personally hate having to click through too many times just to read an article, and the technique is often used as a cheap means to show more ads to readers.</p>
<p>The problem with single page articles is linking/referencing content at an arbitrary location in the page.
The markdown processor I’m using in <a href="http://metalsmith.io">metalsmith</a> <em>does</em> add a unique heading id to each html heading element, but doesn’t expose the link easily.</p>
<p>So I spent some time recently writing a small metalsmith plugin to do that for me.
In the <a href="https://pixls.us/articles/">articles</a> you can now get a direct link to a heading section by hovering the mouse pointer over a heading.
The link will become visible at the end of the heading (as a link icon):</p>
<figure style="border: solid 2px #999; padding: 1rem;">
<img src="https://pixls.us/blog/2015/03/deep-links/deep-link.png" alt="PIXLS.US deep link example" />
<figcaption>
The link becomes visible when hovering over a heading.
</figcaption>
</figure>

<p>This lets you now link directly to that section.
So I can now link directly to content deep into the page itself, <a href="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/#example-nikki">like this link</a> to the Nikki example for skin retouching.</p>
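<p>A minimal metalsmith plugin along these lines could be sketched as follows. This is a hypothetical reconstruction, not the actual plugin; the function name and markup it emits are assumptions:</p>

```javascript
// Hypothetical sketch of a metalsmith plugin that appends a visible anchor
// to every heading that already has an id (the real plugin may differ).
function headingLinks() {
  return function (files, metalsmith, done) {
    Object.keys(files).forEach(function (path) {
      if (!/\.html$/.test(path)) return;
      var html = files[path].contents.toString();
      // Find each <hN id="..."> ... </hN> pair and append a link icon to it.
      html = html.replace(/<(h[1-6]) id="([^"]+)">([\s\S]*?)<\/\1>/g,
        function (m, tag, id, text) {
          return '<' + tag + ' id="' + id + '">' + text +
            ' <a class="header-link" href="#' + id + '">' +
            '<i class="fa fa-link"></i></a></' + tag + '>';
        });
      files[path].contents = Buffer.from(html);
    });
    done();
  };
}
module.exports = headingLinks;
```

<p>It would then be wired in with <code>.use( headingLinks() )</code> alongside the other metalsmith plugins in the site&rsquo;s <code>index.js</code>.</p>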
<p>These are the same heading links that are used for the <em>Contents</em> navigation pane on the menu:</p>
<figure>
<img src="https://pixls.us/blog/2015/03/deep-links/pixlsus-menu.png" alt="PIXLS.US Navigation Menu" width="640" height="640" />
</figure>

<p>This method of exposing a heading link is similar to what you may find on <a href="http://github.com">GitHub</a> for instance.
So, at least there’s now the ability to deep-link into articles as needed! :)</p>
<h2 id="skin-retouching-with-wavelets"><a href="#skin-retouching-with-wavelets" class="header-link-alt">Skin Retouching with Wavelets</a></h2>
<p>Also, I took a break from this other thing I’m working on to finish writing the <a href="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/">Skin Retouching with Wavelet Decompose</a> article.</p>
<figure class='big-vid'>
<img src="https://lh3.googleusercontent.com/-NEKW7KPTLh0/U_lW3AoF3yI/AAAAAAAARN8/b2DSir8MK0s/s0/Nikki-after-opt.jpg" alt="Nikki by Pat David" />
<figcaption>
<em>Nikki</em> is a sample image from the <a href="https://pixls.us/articles/skin-retouching-with-wavelet-decompose/">Skin Retouching with Wavelets</a> article.
</figcaption>
</figure>

<p>This poor article has been in the queue for what feels like forever, so it’s nice to finally be able to publish it.
This particular article is a combination of many of the previous things I had written around using wavelet scales for retouching work.
If you get a chance to read it, I’d love to hear what anyone has to say about it!</p>
<h2 id="articles-index-page"><a href="#articles-index-page" class="header-link-alt">Articles Index Page</a></h2>
<p>I’m still experimenting with the look and feel of the <a href="https://pixls.us/articles/">articles index page</a>.
If you follow that link you’ll see one of the ideas I currently have for laying it out.
I’m not 100% sold on this layout yet, as it may get cumbersome with many articles at once.</p>
<p>I may also provide links at the top of the page for particular content (tutorials, showcases, by tag/software, etc…).</p>
<p>Speaking of which, from a content management standpoint I’m wondering whether it makes more sense to publish every item on the site as an “article”, and then handle the categorization and display as a function of tags/categories on the posts.
Not quite sure just yet.
I’ll still need to fiddle with some other layout/organizational ideas.</p>
<h2 id="on-another-note"><a href="#on-another-note" class="header-link-alt">On Another Note</a></h2>
<p>I finally also fixed the path problem when generating the blog post listing page.
I had a problem where locally referenced images for a post (relative to the post directory) didn’t have their paths updated when showing them on the blog index page.
So I took some time and repaired it with a small <a href="http://handlebarsjs.com">handlebars</a> helper function.</p>
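<p>The helper itself can be sketched roughly like this (a hypothetical reconstruction; the helper name and the exact path handling are my assumptions):</p>

```javascript
// Hypothetical sketch of a handlebars helper that prefixes relative image
// paths with the post's own directory so they resolve from the index page.
// Absolute paths and full URLs are left untouched.
function fixImagePaths(html, postPath) {
  return String(html).replace(/src=(["'])(?!https?:|\/)([^"']+)\1/g,
    function (m, quote, src) {
      return 'src=' + quote + postPath + src + quote;
    });
}
// Registered with something like:
//   Handlebars.registerHelper('fixImagePaths', fixImagePaths);
module.exports = fixImagePaths;
```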
<p>For instance, the <em>Deep Thoughts</em> image at the beginning of this post wasn’t showing correctly from the blog index page before I fixed it.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Some Updates ]]></title>
            <link>https://pixls.us/blog/2015/02/some-updates/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/02/some-updates/</guid>
            <pubDate>Thu, 26 Feb 2015 21:38:21 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/02/some-updates/Dorothy.jpg" /><br/>
                 <h1>Some Updates</h1>  
                 <h2>Yes still writing and working!</h2>   
<p>I hate it when things take me away for a little while, but I won’t make any apologies just yet for the low activity here!
It’s mostly a one-man show here at the moment, so I do beg some patience as I build things out and get articles together.</p>
<p>Speaking of building things out…</p>
<h2 id="site-structure"><a href="#site-structure" class="header-link-alt">Site Structure</a></h2>
<p>I have been giving some thought to the general site structure lately.
I thought it might be fun to talk about it briefly.</p>
<p>My original (and still current) intention for the main piece of content on PIXLS.US is the tutorial.
It’s the main type of content I was writing on <a href="http://blog.patdavid.net">my blog</a> as well as what I’ve been trying to update on <a href="http://www.gimp.org/tutorials">http://www.gimp.org/tutorials</a>.
It’s a nice, known quantity…</p>
<!-- more -->
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/02/some-updates/Some Updates.png"/>
</figure>

<p>So I spent my early time building the site focusing on the layout and design of tutorial pages.
Fonts, sizes, weights, layout, and more.
It’s just the way I think.
Plus, if I did a decent job on this layout, I wouldn’t have to worry about fiddling with it later and could instead focus on writing.</p>
<p>I finally ended up with a layout that I liked (basically what you’re reading on right now).
The problem was, I wanted a bunch of tutorials, not just one!</p>
<p>So with a little work and the help of some contributors (yay <a href="http://lightsweep.co.uk/">Ian Hex</a>!), I was now looking at a few different tutorials for the site. Yay!</p>
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/02/some-updates/Some Updates 2.png"/>
</figure>

<p>The problem now was that I needed to create a nice page to help guide users to the various tutorials.
This is <em>still</em> not done…</p>
<p>So here I am at the moment still working on how best to showcase the neat tutorials on an index page of some sort:</p>
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/02/some-updates/Some Updates 3.png"/>
</figure>

<p>I need to find an attractive and usable means of listing the various tutorial articles.
So this is one of the things that has been taking up some of my time.</p>
<p>The main page has also been occupying some of my attention,
as I’m not 100% sure how to present all the site information (tutorials, blog posts, showcases, etc…).
There’s kind of a running theme here I guess.</p>
<figure class="big-vid">
<img src="https://pixls.us/blog/2015/02/some-updates/Some Updates 4.png"/>
</figure>

<p>I’m also going to be trying to produce some “Showcase” type of article posts that will highlight a F/OSS photographer or images.</p>
<p>The blog pages I’ve already finished (it’s what you’re reading now).
I’ve also mostly gotten the index pages for the blog in a workable state.
I took some time recently to paginate the blog index pages as well so as to not try to load the entire post history on a single page.</p>
<p>To summarize, there are a few things yet to design and code.
I’m working on getting them so we can have an actual launch.</p>
<ul>
<li><p><strong>Main Page</strong></p>
<p>  I still need to design and layout how best to show off the site content.</p>
</li>
<li><p><strong>Tutorial/Articles Page</strong></p>
<p>  This is another page to design and layout.
I have some ideas and neat content already written, so this is just designing the page.</p>
</li>
<li><p><strong>Showcase Pages &amp; Index</strong></p>
<p>  These pages will be functionally the same as the article pages, but the content will focus more on showcasing F/OSS artists and their works.
I’ll categorize these pages differently so I can collect them on their own index page separate from the tutorials.</p>
</li>
</ul>
<h2 id="in-closing-"><a href="#in-closing-" class="header-link-alt">In Closing…</a></h2>
<p>So, things are moving along (albeit slower than I would like).
I’m building the scaffolding for the future, so I don’t feel so rushed.
Better to do it well than quickly, in my opinion.</p>
<h3 id="contributing"><a href="#contributing" class="header-link-alt">Contributing</a></h3>
<p>Also, if anyone would like to immortalize themselves on the early pages of an experimental website to bring high quality tutorials and discussions to the Free/Open Source Imaging world – well then you know where to turn: <a href="mailto:pat@patdavid.net?Subject=PIXLS.US">pat@patdavid.net</a>.</p>
<p>I promise I don’t bite (hard).</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Another Article Done ]]></title>
            <link>https://pixls.us/blog/2015/01/another-article-done/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2015/01/another-article-done/</guid>
            <pubDate>Wed, 07 Jan 2015 14:30:35 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2015/01/another-article-done/Ian_Hex.jpg" /><br/>
                 <h1>Another Article Done</h1>  
                 <h2>Ian Hex and Luminosity Masks in darktable</h2>   
                <p>2015 seems to be getting started nicely! </p>
<p>Just before the holidays <a href="http://lightsweep.co.uk">Ian Hex</a> sent me his finished tutorial to post, and I just finished editing it.
It’s a wonderful look at using Luminosity Masks (parametric masks, in darktable-speak) for making targeted adjustments in darktable.
You can find the new tutorial here:</p>
<p><a href="https://pixls.us/articles/luminosity-masking-in-darktable/"><strong>PIXLS.US: Luminosity Masks in darktable</strong></a></p>
<!-- more -->
<p class="aside">
On a side note, I had previously written about doing <a href="http://blog.patdavid.net/2013/11/getting-around-in-gimp-luminosity-masks.html">Luminosity Masks in GIMP</a> on my personal blog, and yes I will be porting that tutorial here a little later!
</p>



<h2 id="still-writing"><a href="#still-writing" class="header-link-alt">Still Writing</a></h2>
<p>I am still working on the Wavelet article (I took a break to copyedit Ian’s article).
I am continuing my work on that article as well as taking a rudimentary first stab at an article index page (or possibly a variation for a main landing page for the site).</p>
<p>Just need to decide on an attractive and functional layout for presenting the list of articles we have available.
I’m also open to suggestions if any of you readers out there have seen something that you think would be appropriate or neat to consider…</p>
<p>I am also open to taking submissions from folks who may have the mental fortitude to write something for the site.
Just shoot me any ideas/sketches/outlines you think may be appropriate!
(pat@patdavid.net in case you didn’t already have it…)</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Luminosity Masking in darktable ]]></title>
            <link>https://pixls.us/articles/luminosity-masking-in-darktable/</link>
            <guid isPermaLink="true">https://pixls.us/articles/luminosity-masking-in-darktable/</guid>
            <pubDate>Tue, 06 Jan 2015 18:41:08 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/luminosity-masking-in-darktable/luminosity masks in darktable tutorial lede.jpeg" /><br/>
                 <h1>Luminosity Masking in darktable</h1>  
                 <h2>Making targeted adjustments to your RAWs</h2>   
                <p><strong>Luminosity Masking</strong>, the ability to create selections of your image based on its specific tones for ultra-targeted editing, is a relatively recent concept favoured by landscape photographers the world over.
In this article, we will explore how to create and use Luminosity Masks in the F/OSS RAW editor <a href="http://www.darktable.org">darktable</a>, so that you can make adjustments on your RAW files to isolated tones.</p>
<h2 id="what-is-luminosity-masking-">What is Luminosity Masking?<a href="#what-is-luminosity-masking-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Luminosity Masking is a technique developed in the last 10 years or so primarily by American Southwest landscape photographer Tony Kuyper over at <a href="http://goodlight.us/">goodlight.us</a>. 
Tony provides <em>extensive</em> writing and information on Luminosity Masking and how to create Luminosity Masks; in this article I’ll focus primarily on creating and using the masks in darktable, but if you want to really understand the basics I highly recommend giving <a href="http://goodlight.us/writing/luminositymasks/luminositymasks-1.html">Tony’s guide a good read-over</a> first.</p>
<p>In essence, Luminosity Masking is about creating highly specific selections of your photo based on the tones of the image itself. 
This enables you to have extremely fine control over what parts of the photo are selected to make adjustments on (such as contrast, saturation etc.) whilst keeping other tones of the photo <em>completely unaffected</em>. 
Let’s quickly illustrate this with some screenshots. </p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-2.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Here’s a shot I got of the Coral Beach on the Isle of Skye, Scotland, when my partner and I toured there recently in October 2014. 
It’s a pretty solid exposure. Let’s have a look at the histogram.</p>
<figure>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-3.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>This article assumes you already have a basic understanding of histograms and how they work but I’ll give a quick summary here: the histogram represents the tonal information of your photo. 
It’s a graph of the light. 
On the left-hand side of the histogram is where all the Shadow information is, all the darker tones of the image. 
On the right-hand side you’ll find all the Highlight information, the brighter tones. 
And therefore, towards the middle of the histogram, is where all your midtones are located. 
The taller the graph is in a certain section of your histogram, the more information there is. 
So for this photo, you can see that we have a lot of shadow and highlight information, but hardly any midtones. 
We’re also not clipping (losing information in) the shadows or highlights, <em>i.e.</em> the graph isn’t flattened against either side of the histogram.
So we’ve got plenty of room to work with here.</p>
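<p>If it helps to think of it in code, a histogram is nothing more than a count of pixels per tonal bin. A conceptual sketch (my own illustration, not how darktable actually computes it):</p>

```javascript
// Conceptual sketch: bucket each pixel's lightness (0-255) into bins and
// count how many pixels land in each. Tall bins mean lots of tonal
// information in that range; empty end bins mean nothing is clipped.
function histogram(pixels, bins) {
  var counts = new Array(bins).fill(0);
  pixels.forEach(function (v) {
    var bin = Math.min(bins - 1, Math.floor((v / 256) * bins));
    counts[bin] += 1;
  });
  return counts;
}
```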
<p>So let’s say that I feel the sky is a little too bright and I want to darken it. 
The day was quite overcast at this point and the sky in this image feels too washed out. 
Let’s darken it by dropping the exposure.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-4.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Now we’re starting to see some more definition in the sky but the image overall feels too dingy and dark. 
Let’s look at the histogram.</p>
<figure>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-5.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>As you can see, underneath the histogram I have the Exposure module open and I’ve pulled the exposure of the photo down by -1.02EV, darkening the image. 
This is reflected in the histogram. 
What was previously highlight information has been brought down so that it now resides in the midtones of the photo. 
This has brought back some definition and colour to the sky but now the rest of the photo is too dark; you can see on the histogram that the shadow information is bundling up on the left-hand side and we’re in danger of clipping the shadows, that is, losing information, which would result in blotches of pure black in the photo. 
Not good.</p>
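<p>As a quick aside on the arithmetic: exposure in EV works on linear light, so each +1EV doubles a value and each -1EV halves it. A conceptual sketch (not darktable’s actual pipeline):</p>

```javascript
// Conceptual sketch: exposure compensation multiplies linear light values
// by 2^EV, so -1.02EV scales everything to just under half its brightness.
function applyExposure(linearValue, ev) {
  return linearValue * Math.pow(2, ev);
}
```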
<p>How do we get around this? Well, we create and use a Luminosity Mask that selects just the highlights in the photo, mostly the sky, but leaves the rest of the photo alone, keeping the shadows where they are. 
Here’s the result of using a Luminosity Mask to darken just the highlights in the photo. </p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-6.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p><em>Much</em> better.
We’ve darkened the highlights, the sky, bringing back some colour and definition but have left the shadows, the beach and grass, well alone. 
Let’s see how our histogram is doing.</p>
<figure>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-7.jpeg" width='640' height='349' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Once again, I’ve opened up the Exposure module and dropped the exposure of the photo down to -1.02EV but you can see that the module looks a little different this time.
 That’s because I’ve applied a Luminosity Mask to the Exposure change.
 We’ll come back to that in a bit.
 Look at the histogram in the top-right.
 We’ve brought the highlights down into the midtones but kept the shadows where they are.
 We can make another targeted adjustment if we want.
 Let’s say that I want to brighten the shadows a little bit as well. </p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-8.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Ah-ha! 
Now we’re bringing back some clarity and interest to the foreground, that lovely sweeping curve of the grass, beach and loch, with the hill in the distance. 
Check out the histogram.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-9.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>You can see at the bottom-right that I’ve made a new adjustment, known as “Exposure 1”, where I’ve increased the exposure of the image by 0.72EV. 
But again, I’ve applied a Luminosity Mask to this adjustment so that the brightening effect only happens to the shadows in the photo, leaving the highlights alone. 
In the histogram, you’ll note that we now have a lot of midtone information, by darkening the highlights and brightening the shadows. 
Tony Kuyper talks a lot about the <a href="http://goodlight.us/writing/magicmidtones/magicmidtones-1.html">“Magic Midtones”</a> and for good reason: the midtones are the real meat of the photo, and applying targeted adjustments to them can really take your work to the next level.</p>
<p>So, let’s review the changes we’ve made to this photo.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp.jpg" width='960' height='636' alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig. 1</em>: Original RAW<br/>
<em>Fig. 2</em>: Whole photo darkened<br/>
<em>Fig. 3</em>: Highlights darkened only<br/>
<em>Fig. 4</em>: Shadows brightened as well<br/>
</figcaption>
</figure>

<p>And let’s also look at how the histogram has changed.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp2.jpg" width='960' height='530' alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig. 5</em>: Original RAW<br/>
<em>Fig. 6</em>: Whole photo darkened<br/>
<em>Fig. 7</em>: Highlights darkened only<br/>
<em>Fig. 8</em>: Shadows brightened as well<br/>
</figcaption>
</figure>



<h2 id="creating-luminosity-masks-in-darktable">Creating Luminosity Masks in darktable<a href="#creating-luminosity-masks-in-darktable" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Luminosity Masking is easy to do in darktable; it’s been built right into the software since v1.4 (and we’re now on v1.6). 
Every single module in darktable, whether that’s Contrast, Vibrance, Exposure etc., can have a Luminosity Mask applied to it for targeted adjustments. 
Let’s demonstrate on a new image.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-1.jpeg" width='960' height='540' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Here’s a shot I grabbed on that same tour of Scotland in October 2014, this time of the Glenfinnan Monument. 
Pretty neat, right? If you look at the histogram at the top-right, you’ll see that I have a lot of shadow information (in fact it’s almost clipping) and a good range of highlight information that moves into the midtones as well. 
Thankfully there’s no clipping going on but the photo is too dark, with the monument and mountains appearing almost as shadowy silhouettes against the sky. 
What we want to do is to brighten up those shadows to bring back the details and colour in the monument and the mountains. 
We may also do a smidgen of highlight darkening as well. </p>
<p>So, let’s open the Exposure module and I’ll walk us through it.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-10.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>You can find the Exposure module in the Basic Group, represented by the hollow white circle icon. </p>
<p>The magic we’re looking for is under the “blend” dropdown:</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-11.jpeg" width='640' height='349' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Simply select “parametric mask”. 
This is where the magic is. 
In my view, it should be renamed to Luminosity Mask, but that’s just me. </p>
<p>This is where we can create a mask of the photo by selecting just certain tonal ranges. 
Now, we’re not going to go into detail on every aspect of this masking system; I’ll leave it to you to experiment with. 
Just note that this “parametric mask” function is available in <em>every darktable module</em>, so you can apply Luminosity Masks on Exposure, Saturation, Contrast, Vibrance, Local Contrast… whatever you wish. 
This is neat and very powerful. </p>
<p>So, next step: select the “L” tab for “Luminosity” (located to the far right of the other tabs “g”, “R”, “G”, “B”, “H” and “S”), and then select the little icon with the black circle in the white square; this will show the mask.</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-12.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>This is what your photo will now look like.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-13.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p><strong><em>Don’t panic!</em></strong></p>
<p>All this yellow is telling you is that, currently, any Exposure adjustment you make will take effect on the <em>whole photo</em>. Clearly, this isn’t what we want. 
What we’re going to do is adjust the Input slider to start narrowing down our selection to just the tones we want; in this case, we’re after the shadows so we can brighten them up whilst leaving the highlights alone. 
We can do this by bringing the sliders on the right-hand side of the Input slider down towards the left.
This will start deselecting the highlights of the photo as we narrow our mask further towards the shadows.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-14.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>As you can see, I’ve brought the Input sliders from the right-hand side down to 25, very close to the left-hand sliders. 
This is reflected in the mask, as we’re now starting to deselect some of the brighter highlights in the sky. 
But we want to narrow it down further so that we’re targeting just the darkest parts of the photo: the mountains, foreground and monument.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-15.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p><em>Boom!</em>
We’ve had to bring the sliders down all the way to 5 to cut off the highlights in the sky. 
We’ve also managed to deselect some of the brighter highlights in the foreground as well. 
Let’s just make one final adjustment to the sliders before we start brightening the Exposure.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-16.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Here, all we’ve done is move the bottom right-hand slider back up towards the highlights a little bit. 
What this does is feather and soften the mask so that when we do our Exposure brightening it will look more natural and blend better. </p>
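Conceptually, the four input markers define a trapezoid over the lightness values: full opacity between the two inner positions, ramping down to zero at the two outer ones. Here’s a rough sketch of that behavior (this is purely illustrative, not darktable’s actual code):

```javascript
// Opacity of a parametric mask at lightness L, given four marker
// positions: fully selected between lo and hi, feathered down to
// zero at loFade and hiFade. (Illustrative only, not darktable code.)
function maskOpacity(L, loFade, lo, hi, hiFade) {
  if (L >= lo && L <= hi) return 1.0;                             // core: fully selected
  if (L > loFade && L < lo) return (L - loFade) / (lo - loFade);  // shadow-side ramp
  if (L > hi && L < hiFade) return (hiFade - L) / (hiFade - hi);  // highlight-side ramp
  return 0.0;                                                     // outside: deselected
}
```

Widening the gap between the two highlight-side markers, as we just did with the bottom right-hand slider, makes that ramp more gradual — which is exactly the softer, more natural blend we’re after.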
<p>OK, we’ve got our initial mask targeted nicely towards the shadows; hide the mask by selecting that black circle in the white square icon again. 
Now it’s time to start brightening up the Exposure.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-17.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p><em>Boom!</em> 
Much better. Let’s do a side-by-side comparison.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp3.jpg" width='960' height='724' alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig. 9</em>: Original RAW unedited<br/>
<em>Fig. 10</em>: Foreground, monument and mountains (shadow areas) have been brightened through a Luminosity Mask, leaving the highlights in the sky alone.
</figcaption>
</figure>

<p>Already, we’ve made a striking change to how the photo looks. 
There’s now a lot more interest as our subject, the monument, is much brighter with plenty of details available. 
However, we’re not quite done. 
The sky to the right of the monument looks a bit… <em>funky</em>. 
That’s because when we feathered our Luminosity Mask a bit we selected too much highlight information. 
This has resulted in part of the sky getting brighter but the rest of the sky staying the same, which looks strange. 
We can correct this by moving the bottom right-hand slider back to the left a bit, cutting off those highlights in the sky more.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-18.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Better. 
By moving the bottom right-hand slider back down from 20 to 8, the sky now looks more natural.</p>
<p>Already, this photo is looking a lot better. 
Let’s take some of those bright highlights in the sky and darken them a bit, so that the eye isn’t distracted and focuses more on the monument. 
To do this, we’re going to make another Exposure adjustment.</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-19.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>To the left of the Exposure module name you’ll see four little icons. 
Click the rightmost one and then select “New Instance” in the dropdown that appears.</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-20.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>We now have a new module called “Exposure 1” that sits on top of our previous Exposure module. 
With this Exposure 1, we’re going to create a Luminosity Mask targeting the highlights so that we can darken the exposure in them.</p>
<p>Same process as before: in “Exposure 1” select the “blend” dropdown then select “parametric mask”.</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-21.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Select the “L” tab for Luminosity then make the mask visible by clicking on the black circle in the white square icon.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-22.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>This time, we’re going to take the left-hand sliders and bring them to the right, slowly deselecting the shadows until we’ve targeted the highlight tones we want to darken.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-23.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>In our example, we’ve taken the left-hand sliders of Input up to 17 and then brought the bottom left-hand Input slider back down a little to 6 so that we feather the mask out for a more seamless blend.</p>
<p>Let’s start decreasing the exposure to see what it looks like. 
Just click on the black circle in the white square icon again to hide the mask and start decreasing the exposure slider.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-24.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Nice! 
Here, we’ve brought the exposure down by -0.50EV through our Luminosity Mask, targeting the highlights and darkening them. 
We’ve also tweaked the bottom left-hand slider by bringing it down to 3 for a bit more feathering.</p>
<p>Here’s a before and after.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp4.jpg" width='960' height='724' alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig 11</em>: Shadows brightened only via a Luminosity Mask<br/>
<em>Fig 12</em>: Brightest highlights darkened through a Luminosity Mask in a new exposure adjustment.
</figcaption>
</figure>

<p><em>Giggedy.</em> So this photo is starting to look pretty sweet. 
Let’s just make one more adjustment, globally this time with no Luminosity Mask. 
I want to generally increase the overall exposure of the photo. </p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-25.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>Good! 
As you can see, on the right-hand side I’ve created another Exposure module, now called “exposure 2” and have increased the overall exposure of the photo by 0.50EV. </p>
<p>To round up this tutorial, let’s look into making one more adjustment through Luminosity Masks. 
Now that we’ve brightened up the shadows and darkened down the highlights, we’ve moved a lot of the tones in the photo towards the midtones. 
This is where the real meat of the image is. 
We can now really give this photo some punch and pop by applying some contrast to just the midtones of the image. 
Here’s how.</p>
<p>Open the Contrast, Brightness &amp; Saturation module, select “blend” then select the “parametric mask” option in the dropdown.</p>
<figure class=''>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-26.jpeg" width='640' height='360' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>You’ll note this time round that the tabs in the module—the “L”, “a”, “b”, “C” and “h”—are different to the Exposure module. 
Don’t worry. 
Just leave the “L” for Luminosity selected. 
We’re now going to adjust the Input sliders so that we’re targeting just the <em>midtones</em> of the photo. 
We do this by deselecting <em>both the highlights and shadows</em>. 
This is done by moving the left-hand sliders up and the right-hand sliders down towards the middle.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-27.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p>So here’s how my midtones Luminosity Mask looks. 
On the right, you can see that I’ve brought the sliders towards the middle and then dropped the bottom slider of the pair away so that there’s some feathering. 
This is quite a tight midtones mask but that’s OK. 
Now let’s hide the mask and start increasing the contrast. </p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/lum-tut-28.jpeg" width='960' height='523' alt='Ian Hex darktable luminosity mask tutorial' />
</figure>

<p><em>Much better</em>. 
Because we’re only targeting a tight selection of the midtones we can make quite an aggressive contrast adjustment (I’ve brought the contrast slider way up to 50). 
I’ve also increased the brightness of the midtones a little, pulled down the saturation to compensate for the contrast adjustment, and also increased the blurring of the mask to 100, feathering out the mask further for a more natural adjustment. </p>
<p>Let’s look at the before and after.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp5.jpg" width='960' height='724' alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig. 13</em>: Our RAW with the shadows brightened and the highlights darkened <br/>
<em>Fig. 14</em>: Contrast increased in the midtones through a Luminosity Mask 
</figcaption>
</figure>

<p>You can see the biggest difference this contrast adjustment made was to the texture in the foreground grass and the stone detail in the monument. 
You can make out the individual clumps of growth in the foreground as well as the individual tones in the stone of the monument. 
Neat. </p>
<p>Finally, here’s an overview of the adjustments we’ve made to this photo.</p>
<figure class='big-vid'>
<img src="https://pixls.us/articles/luminosity-masking-in-darktable/photo-comp6.jpg" alt='Ian Hex darktable luminosity mask tutorial' />
<figcaption>
<em>Fig. 15</em>: Original unedited RAW<br/>
<em>Fig. 16</em>: Shadows brightened through a Luminosity Mask<br/>
<em>Fig. 17</em>: Highlights darkened through a Luminosity Mask<br/>
<em>Fig. 18</em>: Overall exposure increased a little, no mask<br/>
<em>Fig. 19</em>: Contrast in the midtones increased through a Luminosity Mask.
</figcaption>
</figure>


<h2 id="conclusion">Conclusion<a href="#conclusion" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>In this tutorial, I’ve only gone through the very basics of what is possible with darktable’s Luminosity Masks, so that you can make subtle adjustments to the shadows, highlights and midtones of your photo in order to balance the image better. 
But Luminosity Masks can be used for so much more, so I invite you to experiment! Try out the different modules available in darktable and see how you can apply various filters through different masks to achieve highly specific adjustments to your RAWs like never before.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Still Writing ]]></title>
            <link>https://pixls.us/blog/2014/12/still-writing/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/12/still-writing/</guid>
            <pubDate>Fri, 12 Dec 2014 02:15:16 GMT</pubDate>
            <description><![CDATA[  <img src="https://lh3.googleusercontent.com/-QwTdTG8FL1Y/T9yrrP7f_eI/AAAAAAAAK14/UhCj5utvBbM/w1650-no/The%2BReverence%2Bof%2BSt%2BPauls.jpg" /><br/>
                 <h1>Still Writing</h1>  
                 <h2>Yes, things are still moving (slowly) along</h2>   
                <p>It’s been a busy month (+ &frac12;) for me personally.
Things have finally settled down so I can get back to writing articles and working on the site.</p>
<h2 id="wavelets-coming"><a href="#wavelets-coming" class="header-link-alt">Wavelets Coming</a></h2>
<p>As I mentioned in the <a href="https://pixls.us/blog/2014/10/iterating/">previous post</a>, I’m currently working through a re-write of the various tutorials I had done about using Wavelet Decompose for skin retouching.
I’m about <sup>2</sup>&frasl;<sub>3</sub> of the way through it now and expect to have it finished shortly.
<!-- more --></p>
<h2 id="guest-writer-ian-hex"><a href="#guest-writer-ian-hex" class="header-link-alt">Guest Writer Ian Hex</a></h2>
<p>I also previously mentioned that I’ve been reaching out to a few folks to see if they might be interested in writing some articles for the site.
I’m <em>extremely</em> pleased to say that <a href="https://plus.google.com/+IanHex/about">Ian Hex</a> is stepping up to the plate with a neat tutorial about <a href="http://www.darktable.org/">darktable</a> that is being written right at this very moment!</p>
<p>If you haven’t had a chance to see Ian’s work I highly recommend stopping by his site at <a href="http://lightsweep.co.uk/">http://lightsweep.co.uk/</a> to get a gander at some epic images from the UK.
I desperately want to hop on a plane and visit after seeing them!</p>
<p>His self-professed mission is:</p>
<blockquote>
<p>…to show off the beauty of British landscapes and architecture to the world</p>
</blockquote>
<p>and I’d say he’s doing a bang-up job of it so far!</p>
<!-- FULL-WIDTH -->
<figure class='full-width'>
<img src='https://lh4.googleusercontent.com/-v1YXb39LcGU/UgKMka3X-QI/AAAAAAAAcME/eLd41FOcZWg/w1650-no/fire%2Bof%2Bwhitbey%2Babbey.jpg' alt=''/>
<figcaption>
<em>Fire of Whitby Abbey</em> by <a href="http://lightsweep.co.uk">Ian Hex</a> (<a class='cc' href='https://creativecommons.org/licenses/by-nc-sa/3.0/' target='_blank'>cbna</a>)
</figcaption>
</figure>

<figure class='full-width'>
<img src='https://lh5.googleusercontent.com/-U-joYnXk96M/UydLySqCmJI/AAAAAAAAkoo/7GGzWvxCMsU/w1650-no/wonder%2Bof%2Bvariety%2Bgoogle.jpg' alt='Wonder of Variety by Ian Hex' />
<figcaption>
<em>Wonder of Variety</em> by <a href="http://lightsweep.co.uk">Ian Hex</a> (<a class='cc' href='https://creativecommons.org/licenses/by-nc-sa/3.0/' target='_blank'>cbna</a>)
</figcaption>
</figure>
<!-- /FULL-WIDTH -->
<p>Ian will be writing about Luminosity Masks in darktable.
Given his results and body of work I am personally looking forward to this one!</p>
<p>Maybe if we get a good enough response with his post we can convince him to come back and write some more…</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Iterating ]]></title>
            <link>https://pixls.us/blog/2014/10/iterating/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/10/iterating/</guid>
            <pubDate>Wed, 29 Oct 2014 02:59:05 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2014/10/iterating/LGM Bug.jpg" /><br/>
                 <h1>Iterating</h1>  
                 <h2>Minor changes and another tutorial</h2>   
                <p>I’m working my way through some of the suggestions I’ve received from many folks.
In particular, the “px” icon in the upper left to slide open the navigation and Table of Contents has been changed to a (hopefully) more familiar ‘hamburger’ icon.
I’ll also be testing some other things in the coming weeks as time permits such as having a TOC show up by default in the right &#8531; of the page at the top.</p>
<p>Don’t expect it too soon as I want to focus on writing more content first.
I’m aiming for a December-ish timeframe for a more official launch and want to make sure there is a decent amount of material for folks to consume.</p>
<!-- more -->
<h2 id="the-next-tutorial"><a href="#the-next-tutorial" class="header-link-alt">The Next Tutorial</a></h2>
<p>Speaking of material, I’m starting work on a tutorial for skin retouching with wavelet decompose.
I’ve <a href="http://blog.patdavid.net/2014/07/wavelet-decompose-again.html">written</a> about this <a href="http://blog.patdavid.net/2011/12/getting-around-in-gimp-skin-retouching.html">many times before</a>, but want to port the ideas over here.</p>
<figure>
<img src='http://1.bp.blogspot.com/-9kAx4JgN3Eg/U8avZLbi0PI/AAAAAAAAQ4o/tQlbL-G3u2E/w600/dot-closed-eyes-wd.jpg' alt='Dot Eyes Closed Wavelets'/>
<figcaption>
“Dot Eyes Closed” wavelet decomposition
</figcaption>
</figure>

<p>I have a few extra thoughts surrounding the use of wavelets as well as some minor changes in my workflow with them that should make a new writeup more interesting (hopefully).
I’ll also focus specifically on skin retouching as opposed to some of the other things that can be done with wavelets.</p>
<h2 id="more-support"><a href="#more-support" class="header-link-alt">More Support</a></h2>
<p>I have reached out to some of my favorite amazing photographers using F/OSS in their workflows and the response has been overwhelmingly positive.  I’ll speak more about the folks in a later post, but I am personally very thankful that they have taken the time to respond and that it’s been so positive!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ More Content ]]></title>
            <link>https://pixls.us/blog/2014/09/more-content/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/more-content/</guid>
            <pubDate>Tue, 30 Sep 2014 14:54:37 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/blog/2014/09/more-content/will-write-for-food.jpg" /><br/>
                 <h1>More Content</h1>  
                 <h2>First article is done, more to come</h2>   
                <p>I’ve pretty much finished up the first article mentioned in the <a href="https://pixls.us/blog/2014/09/getting-closer">previous post</a>.
There is still a long way to go.</p>
<p>As much as I’d like to believe that <em>“If you build it, they will come”</em>, the reality is that nobody is coming until there is something worth coming for.
So I’m working hard on getting good content in place.</p>
<p>I’m also acutely aware that nobody will <em>stay</em> unless good content continues to be published, but that’s for another post.
<!--more--></p>
<h2 id="next-up"><a href="#next-up" class="header-link-alt">Next Up</a></h2>
<p>I am thinking the next article that I’ll update/port will be either <em>Luminosity Masks</em> or <em>Skin Retouching</em>.
I am also thinking that a port of my <a href="http://blog.patdavid.net/2012/06/getting-around-in-gimp-color-curves.html">older color curves</a> tutorials might be nice as well (particularly <a href="http://blog.patdavid.net/2012/07/getting-around-in-gimp-more-color.html">using sample points</a>).</p>
<p>That should get me to four good tutorials to start the site with.
At that point I can start queueing up the next few asap.</p>
<p>I also wanted to do more than straight single tutorials, though, which brings me to a question.</p>
<h2 id="types-of-content"><a href="#types-of-content" class="header-link-alt">Types of Content</a></h2>
<p><em>What types of content would those of you reading this be interested in?</em></p>
<p>At the moment I’m thinking of 3 main types of articles, with a possible (probable?) fourth:</p>
<ul>
<li>Tutorials</li>
<li>Workflows</li>
<li>Showcase</li>
<li>Getting the Shot</li>
</ul>
<p>A small explanation on what I’m thinking may help here.</p>
<h3 id="tutorials"><a href="#tutorials" class="header-link-alt">Tutorials</a></h3>
<p>These would be similar to the <a href="https://pixls.us/articles/digital-black-and-white-conversion-GIMP/">Digital B&amp;W</a> article I’ve already ported.
If you’ve read most of my tutorials on my blog, then you’re already familiar with what I’m thinking for these.</p>
<p>These are straight tutorials looking at a single (usually) effect and how to achieve it.
The primary focus is on the steps and tools to produce the desired result.</p>
<h3 id="workflows"><a href="#workflows" class="header-link-alt">Workflows</a></h3>
<p>I am envisioning a <em>workflow</em> article to be more of a look at the creative process to achieve a final resulting image.
This is more along the lines of another previous set of posts I had written about: <a href="http://blog.patdavid.net/2013/03/the-open-source-portrait-equipment.html">The Open Source Portrait</a> and the <a href="http://blog.patdavid.net/2013/08/an-open-source-headshot-ronni.html">Open Source Headshot</a>.</p>
<p>These articles would focus on all of the steps and tools to arrive at a resulting image.
The difference from a <em>tutorial</em> article is that if a <em>tutorial</em> article might explore how to use Wavelet Decompose for skin retouching, a workflow article might include using that technique (among others) to realize a final vision.</p>
<h3 id="showcase"><a href="#showcase" class="header-link-alt">Showcase</a></h3>
<p>Showcasing some of the amazing work I see occasionally is important as well, I think.
One, the artists doing this great work really do deserve to be talked about and exposed to a wider audience.</p>
<p>Second, great work by artists using F/OSS acts as an ambassador for what is possible with these tools.
Too often, low opinions of F/OSS tools are shaped by sub-standard work being shown.
There are some amazing photographers working with these tools, and my hope is that they can stand as examples to not only showcase F/OSS but also set a bar for others to aim for (and hopefully smash through).</p>
<h3 id="getting-the-shot-"><a href="#getting-the-shot-" class="header-link-alt">Getting the Shot?</a></h3>
<p>I’m not 100% sure on this yet, but I think I was originally viewing this as a complete workflow from start to finish, including actually shooting.
This is more focused on the photographic process in general and things to keep in mind while capturing the shots for processing later.</p>
<p>HDR, lighting, models, clothes, make-up, landscape scouting, locations, etc…</p>
<h3 id="quick-tips-"><a href="#quick-tips-" class="header-link-alt">Quick Tips?</a></h3>
<p>I’m not at all sure about this, but the idea is there.
Possibly posts that are very short and targeted at a very specific task or function.
Something that might not really warrant a long-form article but could still be quickly useful for others.</p>
<p>I am reminded of this due to an <a href="https://www.youtube.com/watch?v=n4OBn5DJdjk&amp;lc">old video of mine</a> that I had done quickly for someone on G+ about how to add a watermark over an image.</p>
<div class='big-vid'>
<div class='fluid-vid'>
<iframe width="560" height="315" src="http://www.youtube-nocookie.com/embed/n4OBn5DJdjk?rel=0" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>You can tell why making videos is best left to folks like Rolf…</p>
<h2 id="forum-and-comments"><a href="#forum-and-comments" class="header-link-alt">Forum and Comments</a></h2>
<p>Thanks to darix (once again) over in irc on <code>#darktable</code> for setting up a <a href="http://www.discourse.org/">Discourse</a> instance for me to play with.
I have used it previously on <a href="http://boingboing.net">boingboing.net</a>, and I rather like what I’ve seen.
It also appears that there may be a way to embed thread posts as well, which would be a nice solution for commenting.</p>
<h2 id="thoughts-"><a href="#thoughts-" class="header-link-alt">Thoughts?</a></h2>
<p>Anyone with any thoughts on this, as usual, feel free to drop me a line and tell me what you think!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Getting Closer ]]></title>
            <link>https://pixls.us/blog/2014/09/getting-closer/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/getting-closer/</guid>
            <pubDate>Thu, 25 Sep 2014 22:18:12 GMT</pubDate>
            <description><![CDATA[  <img src="https://lh3.googleusercontent.com/-w_qFbIdXNzk/VCR_AeDB8zI/AAAAAAAAAJM/sOdDuQOra78/w1650-no/Dot%2BLeipzig%2BMarket.jpg" /><br/>
                 <h1>Getting Closer</h1>  
                 <h2>First article is mostly written</h2>   
                <p>Just a quick update on a couple of interesting things.</p>
<p>The first article is almost done being re-written and updated.</p>
<p>I added some functionality to the slide-out menu and am still thinking about the best icon to use.</p>
<p>I also had a nice epiphany when I realized that the styling I had already written to make big videos works great for images as well.
<!--more--></p>
<h2 id="first-test-article"><a href="#first-test-article" class="header-link-alt">First Test Article</a></h2>
<p>The first article is almost done being ported and formatted.
For anyone who’s curious, it’s a long post from the five part series I did on B&amp;W conversion using GIMP (originally <a href="http://blog.patdavid.net/2012/11/getting-around-in-gimp-black-and-white.html">published on my blog</a>).</p>
<p>The writing is going a bit slow because I am also feeling out the formatting and a couple of other minor visual things as they relate to a full-blown article.
Of course, it doesn’t help that it’s also a really, really long article…</p>
<p>For those of you bothering to read this blog, and who want to take a look at the state of that article, it can be found here:
<a href="https://pixls.us/articles/digital-black-and-white-conversion-GIMP">Pixls.us: Digital B&amp;W Conversion (GIMP)</a>.
Just don’t forget to let me know if anything looks funky, or with any suggestions/comments/criticisms.</p>
<h3 id="speaking-of-long"><a href="#speaking-of-long" class="header-link-alt">Speaking of Long</a></h3>
<p>Speaking of which, one of my first conundrums while working on it was a question of load times vs. convenience. 
The original article was written as <em>five</em> separate blog posts which kept everything in reasonably bite-sized chunks to digest.
The problem is that as a reader I am sometimes annoyed at having to click through multiple pages to read an article and I thought that most readers here might feel the same way.</p>
<p>One of my concerns was load times and rendering speed of large pages.
I <em>think</em> I have all the assets set to load as quickly as possible above the fold.
I’ve tried to optimize all images as much as possible and am making sure to define discrete <code>width</code> and <code>height</code> attributes in the html to help the browser render and not have to reflow (hopefully).</p>
<p>There are still a few optimizations I haven’t implemented yet (minifying javascript and concatenating all my stylesheets for actual delivery), but I have them in the queue.
Oh, and spritesheets for some assets that I will get around to making soon as well.</p>
<p>So my current thought is to keep the articles to a single page, even if they are long.
I am also 100% open to other ideas as well so if you have one feel free to hit me up!</p>
<h3 id="getting-around"><a href="#getting-around" class="header-link-alt">Getting Around</a></h3>
<p>Long pages can be a bit cumbersome to navigate, though.
To help make it easier to target relevant information in the page, all of the headings in a page should have a unique id attribute.
This means that users will be able to link directly to sections of a long page (this seems to have fallen out of favor with many websites - why?!).</p>
<p>For instance, I can link directly to the previous section of this post by including the id of the element in the url:</p>
<pre><code>http://pixls.us/blog/2014/09/getting-closer/#speaking-of-long
</code></pre><p>I’m still thinking about the easiest/best way to present this capability to users, but the groundwork is there for the future.</p>
<h4 id="navigation"><a href="#navigation" class="header-link-alt">Navigation</a></h4>
<p>I’m not 100% sure this is obvious, but the “px” logo in the upper-left corner of the page <em>should</em> slide out a navigation from the left side of the page (assuming you have javascript enabled in your browser).
If you don’t have javascript enabled, then clicking the logo will take you to the footer of the page where the basic navigation links are located.</p>
<p class='aside'>
I’m also considering a re-working of the icon to possibly make it more obvious that it opens a menu.
Perhaps something like the “hamburger menu icon” is in order?
</p>

<p>The first set of links are the main ones for navigating the site <em>Home</em>, <em>Blog</em>, <em>Articles</em> and <em>Software</em>.
Just below that will be the navigation links for the contents of the current page.</p>
<figure>
<img src="https://pixls.us/blog/2014/09/getting-closer/nav-example.png" alt="pixls.us navigation pane screenshot" />
</figure>

<p>For no other reason than I thought it was neat, I also made it so that the background of each of the Table of Contents entries will be a slightly darker color relative to how far along you are in the page/section.
In the example above, I have already read <em>Getting Closer</em> and <em>First Test Article</em>, and I am ~75% of the way through the <em>Speaking of Long</em> section of the post.</p>
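Under assumed names (a `.toc` list of fragment links — the real class names on the site may well differ), the effect can be sketched like this; the `progress` helper is the whole idea, and the DOM wiring is just one way to apply it:

```javascript
// Fraction of a section the reader has scrolled past, clamped to [0, 1].
function progress(scrollBottom, sectionTop, sectionHeight) {
  return Math.max(0, Math.min(1, (scrollBottom - sectionTop) / sectionHeight));
}

// Darken each TOC entry's background in proportion to reading progress.
// '.toc', and using the next heading to bound a section, are assumptions.
function shadeToc(doc, win) {
  var links = doc.querySelectorAll('.toc a[href^="#"]');
  var bottom = win.pageYOffset + win.innerHeight;
  for (var i = 0; i < links.length; i++) {
    var section = doc.querySelector(links[i].getAttribute('href'));
    if (!section) continue;
    var next = links[i + 1] &&
               doc.querySelector(links[i + 1].getAttribute('href'));
    var height = (next ? next.offsetTop : doc.body.scrollHeight) -
                 section.offsetTop;
    var p = progress(bottom, section.offsetTop, Math.max(height, 1));
    links[i].style.background =
      'rgba(0, 0, 0, ' + (0.15 * p).toFixed(3) + ')';
  }
}
```

Calling `shadeToc(document, window)` from a scroll handler would keep the shading up to date as you read.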
<p>Unfortunately, this won’t work without javascript enabled.
I am still thinking of a way to possibly include the TOC in the page without screwing up the layout too much.
Something to play with later I suppose…</p>
<h3 id="pretty-pictures"><a href="#pretty-pictures" class="header-link-alt">Pretty Pictures</a></h3>
<p>At the moment I am using a combination of serving up the images directly from my host, and using Google+ photos.
Mostly because I have limited space on my webhost, and I’m not quite sure what the impact will be just yet.
I also gain the distributed Google infrastructure for image hosting, which helps I think as images are by far the biggest files to serve for these pages.</p>
<p>I also get on-the-fly image resizing when hosting the images on Google, which is handy while I build things out.</p>
<p>One of the downsides is that the on-the-fly resizing doesn’t produce progressive jpegs, which I thought might help with rendering speeds of large pages (images loading progressively at least show that something is there…).</p>
<h4 id="wider-images"><a href="#wider-images" class="header-link-alt">Wider Images</a></h4>
<p>I think I mentioned it in the previous post <a href="https://pixls.us/blog/2014/09/the-big-picture/"><em>The Big Picture</em></a> that I had done the styling to get images to span the entire width of the page.
In that same post I also demonstrated a means for making embedded videos bigger as well.
It turned out that the same styling worked great for images as well.</p>
<p>Here is the lede image wrapped in a <code>&lt;figure&gt;</code> tag:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-w_qFbIdXNzk/VCR_AeDB8zI/AAAAAAAAAJM/sOdDuQOra78/w1650-no/Dot%2BLeipzig%2BMarket.jpg' alt='Dot in the Leipzig Market by Pat David' width='640' height='401' />
<figcaption>
A caption to the image in a <code>&lt;figcaption&gt;</code> tag.
</figcaption>
</figure>

<p>I can re-use the styling for the larger video to automatically make the image much larger and centered on the page:</p>
<figure class='big-vid'>
<img src='https://lh3.googleusercontent.com/-w_qFbIdXNzk/VCR_AeDB8zI/AAAAAAAAAJM/sOdDuQOra78/w1650-no/Dot%2BLeipzig%2BMarket.jpg' alt='Dot in the Leipzig Market by Pat David' width='960' height='602' />
<figcaption>
Using class <code>big-vid</code> on the figure.
</figcaption>
</figure>

<p>And, of course, wrapping the <code>&lt;figure&gt;</code> in a <code>&lt;!-- FULL-WIDTH --&gt;</code> tag yields:</p>
<!-- FULL-WIDTH -->
<figure class='full-width'>
<img src='https://lh3.googleusercontent.com/-w_qFbIdXNzk/VCR_AeDB8zI/AAAAAAAAAJM/sOdDuQOra78/w1650-no/Dot%2BLeipzig%2BMarket.jpg' alt='Dot in the Leipzig Market by Pat David' width='960' height='602' />
<figcaption>
Wrapping <code>&lt;figure&gt;</code> with a <code>&lt;!-- FULL-WIDTH --&gt;</code> tag <strong>and</strong> setting the class to <code>full-width</code>.
</figcaption>
</figure>
<!-- /FULL-WIDTH -->

<p>This is a <em>photography</em> site, right?!</p>
<h4 id="comparing-images"><a href="#comparing-images" class="header-link-alt">Comparing Images</a></h4>
<p>I still don’t have a great solution for image comparison.
Ideally I could have an image that shows some results, with an easy way to toggle back to a comparison image (before/after, for instance).
The current way I am doing it is to toggle the image when it’s clicked on.
If you hover over an image and the cursor changes to a crosshair, click on it to compare.</p>
<p>I’m borrowing this from the B&amp;W article I was just working on:</p>

<figure>
<img src="https://pixls.us/articles/digital-black-and-white-conversion-GIMP/rgb-mix-luminosity.png" alt="RGB Luminosity Mix" data-swap-src="https://pixls.us/articles/digital-black-and-white-conversion-GIMP/rgb-mix-base.png" width="500" height="500" />
<figcaption>
Click on the image to compare to original.
</figcaption>
</figure>

<p>This works across mobile as well, but I can’t help feeling it is a bit inelegant.
It is also dependent on JavaScript, and I don’t know if there is a simple way around that.
At least, with JavaScript turned off, everything else still works except toggling to the comparison version.</p>
<h3 id="before-launch"><a href="#before-launch" class="header-link-alt">Before Launch</a></h3>
<p>I’d like to have at least a few good articles ready to go at launch time.
As I said, I’m almost finished with the B&amp;W conversion article, but the question is what to migrate next?</p>
<p>I’m thinking that one of the <em>Open-Source Portrait</em> posts would make a nice article to launch with as well,
or perhaps an update/re-write of using Wavelet Decompose for skin retouching?
If anyone has a preference or suggestion, I’m all ears!</p>
<p>I’m also going to publish an interview with a F/OSS photographer whose work I admire.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Digital B&W Conversion (GIMP) ]]></title>
            <link>https://pixls.us/articles/digital-b-w-conversion-gimp/</link>
            <guid isPermaLink="true">https://pixls.us/articles/digital-b-w-conversion-gimp/</guid>
            <pubDate>Tue, 16 Sep 2014 18:36:26 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/digital-b-w-conversion-gimp/Into-the-Fog.jpg" /><br/>
                 <h1>Digital B&amp;W Conversion (GIMP)</h1>
                 <h2>Methods for converting to B&amp;W</h2>
                <p>Black and White photography is a big topic that deserves entire books devoted to the subject.
In this article we are going to explore some of the most common methods for converting a color digital image into monochrome in <a href="http://www.gimp.org" title="GIMP Homepage">GIMP</a>.</p>
<h2 id="what-we-are-trying-to-achieve">What We are Trying to Achieve<a href="#what-we-are-trying-to-achieve" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are a few things you should focus on in regards to preparing your images for a B&amp;W conversion.
You want to keep in mind that by removing color information you are effectively left with only tonal data (and composition) to convey your intentions.</p>
<figure class="big-vid">
<img src="https://2.bp.blogspot.com/-tTnj2ELdHSM/UKLIXA41skI/AAAAAAAADaw/aAqUIgVKLj8/w960-no/AnselAdamstrees%255B1%255D.jpg" width="960" height="653" alt="Aspens by Ansel Adams" />
<figcaption>
Aspens (no title), <a href="http://www.anseladams.com/">Ansel Adams</a><br/>
&copy;The Ansel Adams Publishing Rights Trust
</figcaption>
</figure>

<p>This can be both liberating and confining.</p>
<p>By liberating yourself of color data the focus is entirely on the subjects and composition
(this is often one of the primary reasons street photography is associated with B&amp;W).
Conversely, the subjects and composition need to be much stronger to carry the result.</p>
<figure>
<img src="https://lh4.googleusercontent.com/-zsW7nufLVLs/UJ1HPOg0vmI/AAAAAAAARS8/a3aOaDg0d38/w640-h811-no/9845_98f0%5B1%5D.jpeg" width="640" height="811" alt="Edward Weston, Pepper #30"/>
<figcaption>
Without color, the form and tones are all that’s left.<br/>
&copy;<a href="http://www.edward-weston.com/edward_weston_natural_1.htm">Edward Weston, Pepper #30</a>
</figcaption>
</figure>

<p class="aside">
As an interesting side note, Edward Weston’s Pepper #30 is the image that began my personal interest in B&amp;W photography.
</p>

<h3 id="tonality">Tonality<a href="#tonality" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>What I tend to refer to when using this term is the presence and relationship between different values of gray in the image.<br/>This can be subtle, with smooth, even differences between values, or much more pronounced.</p>
<p>When referred to as the singular <em>“tone”</em>, it is usually referring to a single value of gray in the image.</p>
<h3 id="contrast">Contrast<a href="#contrast" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Contrast is the relative difference in tones between parts of an image.
High contrast will have a sharper differentiation between tones, while low contrast will have less differences.
Often, a straight conversion to grayscale can result in values that are all similar, yielding a tonally “flat” image.</p>
<p>Contrast is often considered in terms of the entire image <em>globally</em>, or in smaller sections <em>locally</em>.</p>
<h3 id="dynamic-range">Dynamic Range<a href="#dynamic-range" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Dynamic range is the overall range of values in your image from the darkest to the brightest.</p>
<h3 id="the-approach">The Approach<a href="#the-approach" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The approach we will take here is similar to what I had done in my film days.
We’ll attempt to use different methods of grayscale conversion (and possibly blending them) to get to a working image that is as full of tonal detail as possible.
Petteri Sulonen refers to this as his <em>“digital negative”</em> – if you want a great look at a digital B&amp;W workflow head over and read <a href="http://www.prime-junta.net/pont/How_to/n_Digital_BW/a_Digital_Black_and_White.html">his article</a>.</p>
<p>Then, with an image containing as much tonal detail as possible, we will modify it with adjustments of various types to produce a final result that is visually pleasing.</p>
<p>Before heading down that path, it may help to have a closer look at the tools being used.
Let’s have a look at how an image gets displayed on your monitor first.</p>
<h2 id="your-pixels-and-you">Your Pixels and You<a href="#your-pixels-and-you" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>You are working in an RGB world when you stare at your monitors.
Every single pixel is composed of 3 sub-pixels of Red, Green, and Blue.</p>
<figure>
<img src="https://4.bp.blogspot.com/-PQgiDUW-cro/UJrrXrq9HWI/AAAAAAAADPE/j_3YszlVeHU/s300/300px-TN_display_closeup_300X%255B1%255D.jpg" width="300" height="240" alt="TN LCD Display 300X close up"/>
<figcaption>
300X magnification of an LCD panel.<br/>
(Image from <a href="http://en.wikipedia.org/wiki/File:TN_display_closeup_300X.jpg">wikipedia</a>)
</figcaption>
</figure>

<p>The variations in brightness of each of the sub-pixels will “mix” to produce the colors you finally see.
The scales available in an 8-bit display are discrete levels from 0–255 for each color (2<sup>8</sup> = 256).
So if all of the sub-pixel values are 0, the resulting color is black.
If they are all 255, you’ll see white.
Any other combination will produce some variation of a color.</p>
<p class="color-ex" style="background-color: rgb(80,205,255);">
80, 205, 255 for instance
</p>
<p class="color-ex" style="background-color: rgb(255,172,80);">
or 255, 172, 80
</p>

<p class="aside">
<span>But what about 16-bit images?</span>
Well - the data is still in the image file to correctly describe the colors at 16-bit/channel, but most likely what you’ll see on your monitor is an interpolation of those values down to an 8-bit/channel colorspace.
You should <em>always</em> work in the highest bit depth you can, and leave any conversion to 8-bit for when you save your work to be viewed on a monitor.
</p>

<p>The important point to take away from this is that when all three color channels have the same value, you get a gray color.
So a middle gray value of 127, 127, 127 would look like this:</p>
<p class="color-ex" style="background-color: rgb(127,127,127); color: #222;">
127, 127, 127
</p>
<p class="color-ex" style="background-color: rgb(220,220,220);">
While this is a little brighter: 220, 220, 220
</p>

<p>Very quickly you should realize that a true monochromatic grayscale image can display up to 256 discrete shades of gray going from 0 (pure black) to 255 (pure white),
while for 16-bit images, 2<sup>16</sup> will yield 65,536 different shades.
It is this limitation for purely gray 8-bit images that introduces artifacts over smooth gradations (<a href="http://en.wikipedia.org/wiki/Posterization">posterization</a> or banding) – and is a good reason to keep your bit depths as high as possible.</p>
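The shade counts above fall straight out of the bit depth, if you want to sanity-check them (variable names here are just for illustration):

```python
# Discrete gray levels available at each bit depth
levels_8bit = 2 ** 8     # 256 shades: 0 (pure black) through 255 (pure white)
levels_16bit = 2 ** 16   # 65,536 shades: far less prone to visible banding

print(levels_8bit, levels_16bit)
```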
<h2 id="getting-to-grey">Getting to Grey<a href="#getting-to-grey" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are many different paths to get to a grayscale image and almost none of them are equal.
They will all produce different images based on their method of conversion, 
and it will be up to you to decide which ones (or portions of) to keep and build upon to create your final result.</p>
<figure class="big-vid"> 
<img src="https://lh3.googleusercontent.com/-0BRTT_4u_A0/VBj3kqE8rJI/AAAAAAAARcw/WBSevvGSCqw/w960-h587-no/Conversation%2Bin%2BHayleys.jpg" width="960" height="587" alt="Conversation in Hayleys by Pat David" />
<figcaption>
A combination of luminosity desaturation and GEGL C2G<br/>
<em>Conversation in Hayleys</em> by Pat David (<a href="http://creativecommons.org/licenses/by-sa/4.0/" class="cc">cba</a>)
</figcaption>
</figure>

<p>For this tutorial we are going to try and cover as many different methods as possible.
This means we’ll be having a look at:</p>
<ul>
<li>Desaturate Command (Lightness, Luminosity, Average)</li>
<li>Channel Mixer</li>
<li>Decompose (RGB, LAB)</li>
<li>Pseudogrey</li>
<li>Layer Blending Modes</li>
<li>Film Emulation Presets</li>
<li>Combining these methods</li>
</ul>
<p>One of these methods may work fine for you.
Or, if you’re like me, it will most likely be a combination of one or more of these methods blended through a combination of layer masking and opacity adjustments.</p>
<h2 id="desaturate-gimp-">Desaturate (GIMP)<a href="#desaturate-gimp-" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Perhaps the easiest and most straightforward path to a grayscale image is using the <code>Desaturate</code> command.
It can be invoked from the <a href="http://www.gimp.org" title="GIMP Homepage">GIMP</a> menu:</p>
<p><span class="Cmd">Colors &rarr; Desaturate…</span></p>
<p>There are three options available from this menu:</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/GIMP desaturate dialog.png" alt="GIMP Desaturate Dialog" width="372" height="230" />
</figure>

<p>Each of these options (Lightness, Luminosity, Average) will generate a grayscale image for you,
but the difference lies in the <em>way</em> they interpret the image colors into values of gray.</p>
<p>To illustrate the differences, consider the following two figures.
One is a gradient of red, green and blue from black to full saturation.
The other is a set of overlapping circles of color in an additive mix.</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-base.png" alt="RGB Base Gradient Image" width="500" height="256" />
<figcaption>
Base RGB gradient of pure colors
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-mix-base.png" alt="RGB Base Mix Image" width="500" height="500" />
<figcaption>
Base RGB (additive color) mix
</figcaption>
</figure>

<p>Let’s investigate each of the desaturation options on these test images.</p>
<h3 id="lightness">Lightness<a href="#lightness" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The Lightness method will add the largest value of red, green <em>or</em> blue and the smallest value, then divide the result by 2.</p>
<p class="Cmd aside">
&frac12; &times; ( MAX(R,G,B) + MIN(R,G,B) )
</p>

<p>So, for instance, with an RGB value of 100, 20, 210, the equation would be:</p>
<p class="Cmd aside">
&frac12; &times; ( <strong>210</strong> + <strong>20</strong> ) = 115
</p>
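If you want to sanity-check that arithmetic, here’s a quick sketch in Python (an illustration of the formula, not GIMP’s actual code; the function name is mine):

```python
def lightness(r, g, b):
    """GIMP Desaturate 'Lightness': mean of the largest and smallest channel."""
    return (max(r, g, b) + min(r, g, b)) / 2

# The worked example from above:
print(lightness(100, 20, 210))  # 115.0
```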

<p>Using the Lightness function on our test images yields the following results:</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-lightness.png" alt="RGB Desaturate Lightness" width="500" height="256" />
<figcaption>
Lightness conversion yields similar values regardless of color
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-mix-lightness.png" alt="RGB Lightness Mix" data-swap-src="rgb-mix-base.png" width="500" height="500" />
<figcaption>
Click to compare to original
</figcaption>
</figure>

<p>This means that one channel is actually ignored in creating the final value.</p>
<h3 id="average">Average<a href="#average" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Average will use the numerical average of the RGB values in each pixel.</p>
<p class="Cmd aside">
&frac13; &times; ( R + G + B )
</p>
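Sketched the same way (again just illustrating the formula, with a made-up function name):

```python
def average(r, g, b):
    """GIMP Desaturate 'Average': plain arithmetic mean of the three channels."""
    return (r + g + b) / 3

# The 100, 20, 210 example pixel comes out slightly darker than with Lightness:
print(average(100, 20, 210))  # 110.0
```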

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-average.png" alt="RGB Desaturate Average" width="500" height="256" />
<figcaption>
Averaging, the values will trend darker overall
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-mix-average.png" alt="RGB Average Mix" data-swap-src="rgb-mix-base.png" width="500" height="500" />
<figcaption>
Click to compare to original
</figcaption>
</figure>



<h3 id="luminosity">Luminosity<a href="#luminosity" class="header-link"><i class="fa fa-link"></i></a></h3>
<p><em>Lightness</em> and <em>Average</em> both evaluate the final value of gray as a purely numerical function without regard to the actual color components.
<em>Luminosity</em> on the other hand, utilizes the fact that our eyes will perceive green as lighter than red, and both lighter than blue (<a href="http://en.wikipedia.org/wiki/Luminance_(relative)">relative luminance</a>).
This is also why your camera sensor <em>usually</em> has <a href="http://en.wikipedia.org/wiki/Bayer_filter">twice as many green detectors as red and blue</a>.</p>
<p>The weighted function describing relative luminance is:</p>
<p class="Cmd aside">
(0.2126 &times; R) + (0.7152 &times; G) + (0.0722 &times; B)
</p>
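As a sketch (the coefficients are the Rec. 709 relative-luminance weights quoted above; the function name is just for illustration):

```python
def luminosity(r, g, b):
    """Relative-luminance weighting (Rec. 709 coefficients)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# The 100, 20, 210 example pixel: mostly blue with little green,
# so it comes out much darker than under Lightness (115) or Average (110).
print(round(luminosity(100, 20, 210), 1))  # 50.7
```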

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-luminosity.png" alt="RGB Desaturate Luminosity" width="500" height="256" />
<figcaption>
This is closer to how our eyes will actually perceive the brightness of each color
</figcaption>
</figure>

<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-mix-luminosity.png" alt="RGB Luminosity Mix" data-swap-src="rgb-mix-base.png" width="500" height="500" />
<figcaption>
Notice the overwhelming contribution from green<br/>
Click to compare to original
</figcaption>
</figure>

<p>None of these methods is objectively better than the others for your own conversions.
It really depends on the desired results.
However, if you are in doubt about which one to use, <em>Luminosity</em> may be the best of the three to <a href="http://en.wikipedia.org/wiki/Luminosity_function">more closely emulate</a> the brightness levels you perceive.</p>
<h3 id="examples">Examples<a href="#examples" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The image below, <a href="http://www.flickr.com/photos/patdavid/3808678255">Joseph N. Langan Park</a>, is an interesting example of just how much green influences the conversion result using luminosity.  Click through each of the different conversion types below, and pay careful attention to what <strong>Luminosity</strong> does with the green bushes along the water’s edge.</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/langan.jpg" alt="Langan Park by Pat David" width="640" height="414" />
<figcaption>
Click to compare:<br/><span class="toggle-swap" data-fig-swap="langan.jpg">Original</span>
<span class="toggle-swap" data-fig-swap="langan-lightness.jpg">Lightness</span>
<span class="toggle-swap" data-fig-swap="langan-average.jpg">Average</span>
<span class="toggle-swap" data-fig-swap="langan-luminosity.jpg">Luminosity</span>
</figcaption>
</figure>

<p>This shot of <a href="http://www.flickr.com/photos/patdavid/6231554301/">Whitney</a> shows the effect on skin tones, as well as the change in her shirt color due to the heavy reds present.
In just a <strong>Lightness</strong> conversion, the red shirt becomes relatively flat compared to her skin tones,
but becomes darker and more pronounced using <strong>Luminosity</strong>.
Her lips get a bit of a boost in tone in the <strong>Luminosity</strong> conversion as well.</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/whitney.jpg" alt="Whitney by Pat David" width="640" height="640" />
<figcaption>
Click to compare:
<span class="toggle-swap" data-fig-swap="whitney.jpg">Original</span>
<span class="toggle-swap" data-fig-swap="whitney-lightness.jpg">Lightness</span>
<span class="toggle-swap" data-fig-swap="whitney-average.jpg">Average</span>
<span class="toggle-swap" data-fig-swap="whitney-luminosity.jpg">Luminosity</span>
</figcaption>
</figure>




<h2 id="channel-mixer">Channel Mixer<a href="#channel-mixer" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Using <strong>Desaturate</strong> lets you convert to grayscale based on pre-defined functions for calculating the final value,
but what if you wanted even further control?
What if you wanted to decide just how much the red channel should influence the final gray value,
or to have more control over the ratios and weightings from each of the different channels independently?
That’s precisely what the <strong>Channel Mixer</strong> will allow you to do.</p>
<p>For the examples below I’ll use a different color gradient test map: an HSV hue gradient (wrapping from blue back around to blue), with a vertical gradient to black.
This represents the entire 8-bit colorspace.</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/rgb-hsv.png" alt="RGB HSV Gradient" width="550" height="256" />
<figcaption>
Gradient representing all the colors/shades in 8-bit sRGB colorspace.<br/>
Click to compare:
<span class="toggle-swap" data-fig-swap="rgb-hsv.png">Original</span>
<span class="toggle-swap" data-fig-swap="rgb-hsv-lightness.png">Lightness</span>
<span class="toggle-swap" data-fig-swap="rgb-hsv-average.png">Average</span>
<span class="toggle-swap" data-fig-swap="rgb-hsv-luminosity.png">Luminosity</span>
</figcaption>
</figure>

<p>Take a quick moment to click through the various desaturation methods already mentioned.</p>
<p>The <strong>Channel Mixer</strong> can be invoked through:</p>
<div class="Cmd">Colors &rarr; Components &rarr; Channel Mixer…</div>

<p>The dialog will look like this with the test gradient:</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/channel-mixer.png" alt="GIMP Channel Mixer Dialog" width="326" height="464" />
</figure>

<p>The <strong>Channel Mixer</strong> can be used to modify these channels on a full-color image, but we are focusing on grayscale conversion right now.
So check the box for <em>Monochrome</em>, which will disable the <em>Output channel</em> option in the dialog (it’s no longer applicable).
This will turn your preview into a grayscale image.</p>
<h3 id="warning-math-ahead">Warning: Math Ahead<a href="#warning-math-ahead" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>If you checked the <em>Monochrome</em> option, and left the Red slider at 100, then you’d be seeing a representation of your image with no Green or Blue contribution (ie: you would basically be seeing the Red channel of your image):</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/channel-mixer-red.png" alt="GIMP Channel Mixer monochrome full red" width="326" height="464" />
<figcaption>
Basically just the red channel
</figcaption>
</figure>

<p>What this means is that with Green and Blue set to 0, the values of the Red are directly mapped to the output value for the grayscale image.
If you were looking at a pixel with RGB components of 200, 150, 100, then the <em>Value</em> for the pixel in this instance would become 200, 200, 200.</p>
<p>It’s also important to note that the sliders represent a <em>percent contribution to the final value</em>.</p>
<p>That is, if you set the Red and Green channels to 50(%), you would see something like this:</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/channel-mixer-red50-green50.png" alt="GIMP Channel mixer monochrome 50% red and green" width="326" height="464" />
</figure>

<p>In this case, Red and Green would contribute 50% of their values (with nothing from Blue) to the final pixel gray value.
Considering the same pixel example from above, where the RGB components are 200, 150, 100, we would get:</p>
<p class="Cmd aside">
( 200 &times; 0.5 ) + ( 150 &times; 0.5 ) + ( 100 &times; 0 )<br/>
( 100 ) + ( 75 ) + ( 0 ) = <strong>175</strong>
</p>

<p>So the final grayscale pixel value would be: 175, 175, 175.</p>
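The contribution math above can be sketched in a few lines of Python (an illustration of the weighting, not GIMP’s internals; the function name is mine):

```python
def channel_mix(r, g, b, wr, wg, wb):
    """Monochrome Channel Mixer: each weight is that channel's
    fractional contribution to the final gray value."""
    return wr * r + wg * g + wb * b

# 50% Red, 50% Green, 0% Blue on the 200, 150, 100 example pixel:
print(channel_mix(200, 150, 100, 0.5, 0.5, 0.0))  # 175.0
```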
<h3 id="preserve-luminosity">Preserve Luminosity<a href="#preserve-luminosity" class="header-link"><i class="fa fa-link"></i></a></h3>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/eleven.jpg" alt="Spinal Tap up to eleven" width="623" height="336" />
<figcaption>
<em>“These go up to 11”</em> – <a href="http://en.wikipedia.org/wiki/Up_to_eleven">Nigel Tufnel</a>
</figcaption>
</figure>

<p>The astute will notice that the sliders actually have a range from -200 to 200.
So you may be asking – what happens if two channels contribute more than what is possible to show?</p>
<p>Using the pixel example again, what if both the Red and Green channels were set to contribute 100%?</p>
<p class="Cmd aside">
( 200 &times; 1.00 ) + ( 150 &times; 1.00 ) + ( 100 &times; 0 ) = <strong>350</strong>
</p>

<p>While the <strong>Channel Mixer</strong> will allow us to set these values, we can’t very well set the grayscale pixel value to be 350 (in an 8-bit image).
So anything above 255 will simply end up being clipped to 255 (effectively throwing away any tones above 255, bad!).</p>
<p>This means you have to be careful that the three channel contributions don’t exceed 100 in total.
50% Red, 50% Green is ok – but 50% Red, 50% Green, <em>and</em> 50% Blue (150%) will clip your data.</p>
<p>This is where the <em>Preserve Luminosity</em> option comes into play.
This option will scale your final values so the effective result will always add up to 100%.
The scale factor from the above example would be calculated as:</p>
<p class="Cmd aside">
<sup>1</sup>&frasl;<sub>( 1.00 + 1.00 + 0 )</sub> = <strong>0.5</strong>
</p>

<p>So the value of <strong>350</strong> would be scaled by 0.5, giving the actual final value as 175.
If <em>Preserve Luminosity</em> is active, all the values would be scaled by this amount.</p>
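A small sketch of that rescale (again just illustrating the math under the assumption that clipping happens at 255 in an 8-bit image; the function name is made up):

```python
def channel_mix_pl(r, g, b, wr, wg, wb, preserve_luminosity=False):
    """Monochrome mix with the Preserve Luminosity rescale."""
    value = wr * r + wg * g + wb * b
    if preserve_luminosity:
        value /= (wr + wg + wb)   # scale so the weights effectively sum to 1
    return min(value, 255)        # an 8-bit image clips anything above 255

print(channel_mix_pl(200, 150, 100, 1.0, 1.0, 0.0))        # 255 -- clipped from 350
print(channel_mix_pl(200, 150, 100, 1.0, 1.0, 0.0, True))  # 175.0
```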
<p>This is not to say that <em>Preserve Luminosity</em> is always needed, just stay aware of the possible effects if you don’t use it.</p>
<h4 id="speaking-of-luminosity">Speaking of Luminosity<a href="#speaking-of-luminosity" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Previously we talked about the function used for desaturating according to <em>relative luminance</em>.
If you’ll recall, the formula was:</p>
<p class="Cmd aside">
( 0.2126 &times; R ) + ( 0.7152 &times; G ) + ( 0.0722 &times; B )
</p>

<p>If you wanted to replicate the same results that <code>Desaturate → Luminosity</code> produces, you can just set the RGB sliders to the same values from that function (21.3, 71.5, 7.2):</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/channel-mixer-lum.png" alt="GIMP Channel mixer luminosity values" width="342" height="475" />
<figcaption>
Replicating the luminosity function
</figcaption>
</figure>

<p>If you’re just getting started with the <strong>Channel Mixer</strong>, this makes a pretty nice starting point to begin experimenting.</p>
<h3 id="experimenting">Experimenting<a href="#experimenting" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>A pretty landscape image by <a href="http://www.flickr.com">Flickr</a> user <a href="http://www.flickr.com/people/cyndicalhounfineart/">Cyndi Calhoun</a> serves as a nice test image for experimentation:</p>
<figure class="big-vid">
<img src="https://4.bp.blogspot.com/-iztPHXO-ZWA/UKvzRNgGFwI/AAAAAAAADmY/W0PY_3a_yVk/w960/cyndicalhounfineart-color.jpg" alt="Garden of the Gods by Cyndi Calhoun" width="960" height="638" />
<figcaption>
<a href="http://www.flickr.com/photos/cyndicalhounfineart/7990432224">Garden of the Gods - Looking North</a><br/>
by Cyndi Calhoun (<a href="https://creativecommons.org/licenses/by/2.0/" class="cc">cb</a>)
</figcaption>
</figure>

<p>You’ll want to keep in mind the primary RGB influences in different portions of your image as you approach your adjustments.
For instance, this image (not coincidentally) happens to have strong Red features (the rocks), Blue features (the sky), and Green features (the trees).</p>
<p>Keep the individual channels from getting so bright that you lose detail (blowouts),
and from crushing the shadows too much.
Remember, you want to keep as much tonal detail as possible!</p>
<p>So, using the luminosity function as a starting point…</p>
<figure class="big-vid">
<img src="https://3.bp.blogspot.com/-Kj-evm3wR2M/UKv1m2KKyiI/AAAAAAAADmo/GBPMHkYmSCg/w960/cyndicalhounfineart-CM-luminosity.jpg" alt="Garden of the Gods by Cyndi Calhoun Luminosity" width="960" height="638" />
<figcaption>
Straight conversion using the luminosity 
</figcaption>
</figure>

<p>It’s not a bad start at all, but the prominence of the red rocks in the sunlight has been dulled quite a bit.
It’s a central feature of the image and should really draw the eye towards it.
So the reds could be more pronounced to make the stone pop a little more.</p>
<p>With the <em>Preserve Luminosity</em> option checked, begin bumping the Red channel to taste.</p>
<figure class="big-vid">
<img src="https://4.bp.blogspot.com/-3AI-cCgBKhI/UKv2-uSUobI/AAAAAAAADm0/dcoCibmuKfo/w960/cyndicalhounfineart-CM-red-66.1.jpg" alt="Garden of the Gods by Cyndi Calhoun Red Channel" width="960" height="638" data-swap-src="https://3.bp.blogspot.com/-Kj-evm3wR2M/UKv1m2KKyiI/AAAAAAAADmo/GBPMHkYmSCg/w960/cyndicalhounfineart-CM-luminosity.jpg" />
<figcaption>
Red channel bumped up to 66.1<br/>
(Click image to compare to base luminosity conversion)
</figcaption>
</figure>

<p>This gives a little more prominence to the red stone.</p>
<p>The Green channel seems ok, but for comparison try lowering it to about half of the Red channel value.
Remember – <em>Preserve Luminosity</em> is checked, so the final values will scale to give Red twice the weight of Green.</p>
<figure class="big-vid">
<img src="https://3.bp.blogspot.com/-8axlWaZdtWU/UKv6IAJd24I/AAAAAAAADno/mQa0_SVqNbw/w960/cyndicalhounfineart-CM-green-33.jpg" alt="Garden of the Gods by Cyndi Calhoun Red Channel" width="960" height="638" data-swap-src="https://4.bp.blogspot.com/-3AI-cCgBKhI/UKv2-uSUobI/AAAAAAAADm0/dcoCibmuKfo/w960/cyndicalhounfineart-CM-red-66.1.jpg" />
<figcaption>
Green channel at ~half of Red.<br/>
(Click image to compare to previous step)
</figcaption>
</figure>

<p>This brings up the shadow side of the central rocks a bit as well as adds some definition to the trees and vegetation.
Also interesting is the apparent boost to the red rocks as well.</p>
<p>If you’re wondering why the red rocks got brighter as well, consider the math.
Previously, Red and Green were very near each other in weight (around 70 each), so both colors had approximately equal influence.
When Green’s influence was cut in half, Red was scaled up to take a much larger share, and because the rocks contain more red than green, their final values end up higher.</p>
<p>If we look at the RGB values of the red rocks, Red and Green are roughly 226 and 127 (ignoring Blue for the moment, since it’s staying constant in this example).</p>
<p>If both Red and Green have equal influence, the final pixel value will be:</p>
<p class="Cmd aside">
( 226 &times; 0.5 ) + ( 127 &times; 0.5 ) = <strong>176.5</strong>
</p>

<p>Now if Green is only half as strong as Red, the value will be:</p>
<p class="Cmd aside">
<sup>( 226 &times; 0.5 ) + ( 127 &times; 0.25 )</sup>&frasl;<sub>( 0.5 + 0.25 )</sub> = <strong>193</strong>
</p>

<p>The result was divided by the influence amount to scale the way <em>Preserve Luminosity</em> would.
The final pixel value will become brighter in this case, which is why the red rocks got brighter with a decrease in the Green channel.</p>
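The red-rocks comparison works out the same way in a quick sketch (Blue omitted since it’s held constant; the function name is just for illustration):

```python
def mix_rg(r, g, wr, wg):
    """Red/Green mix with the Preserve Luminosity rescale."""
    return (wr * r + wg * g) / (wr + wg)

print(mix_rg(226, 127, 0.5, 0.5))    # 176.5 -- equal influence
print(mix_rg(226, 127, 0.5, 0.25))   # 193.0 -- Green's influence halved
```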
<p>It should go without saying that the Blue channel will have a heavy influence on the sky (and many areas of the image in shadow).
To add a little drama to the sky, try removing the Blue channel influence by setting it to 0:</p>
<figure class="big-vid">
<img src="https://2.bp.blogspot.com/-uhP5KF3NkRM/UKwBGnx9iAI/AAAAAAAADoc/weZEupnGgdU/w960/cyndicalhounfineart-CM-blue-0.jpg" alt="Garden of the Gods by Cyndi Calhoun Red Channel" width="960" height="638" data-swap-src="https://3.bp.blogspot.com/-8axlWaZdtWU/UKv6IAJd24I/AAAAAAAADno/mQa0_SVqNbw/w960/cyndicalhounfineart-CM-green-33.jpg" />
<figcaption>
Blue channel set to 0<br/>
(Click image to compare to previous step)
</figcaption>
</figure>

<p>This will darken the sky up a bit (as well as some shadow areas).</p>
<p>Pay careful attention to what these changes do to the image in closer views.
In this case there is a higher amount of banding and noise in the smooth sky if values get pushed too far.
So try to approach it with a light hand.</p>
<p>The sliders also allow negative values.
This will seriously crush the channel results when applied (and will quickly lead to funky results if you’re not careful).
For example, to push the Blue channel even darker in the final result, try setting the Blue channel to -20:</p>
<figure class="big-vid">
<img src="https://1.bp.blogspot.com/-GmHZJXuUdkk/UKwDYHmOS1I/AAAAAAAADoo/pfsm-bDmW9c/w960/cyndicalhounfineart-CM-blue--20.jpg" alt="Garden of the Gods by Cyndi Calhoun Red Channel" width="960" height="638" data-swap-src="https://2.bp.blogspot.com/-uhP5KF3NkRM/UKwBGnx9iAI/AAAAAAAADoc/weZEupnGgdU/w960/cyndicalhounfineart-CM-blue-0.jpg" />
<figcaption>
Red: 66.1, Green: 33, Blue: -20<br/>
(Click image to compare to previous step)
</figcaption>
</figure>

<p>The sky has become much darker, as has the shadow side of the rocks.
There is an overall increase in contrast as well, but at the expense of nasty noise and banding artifacts in the sky.</p>
<p class="aside">
<span>General Rules of Thumb</span>
The Red channel is well suited for contrast (particularly in the brighter tones).
<br/>
The Green channel will hold most of the details.
<br/>
The Blue channel contains grain and (often) a lot of noise.
<br/><br/>
In skin, the Red channel is very flattering to the final result and you’ll often get good results by emphasizing the Red channel in portraits.
</p>



<h3 id="on-skin">On Skin<a href="#on-skin" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>The Red channel can be very flattering on skin and is a great tool to keep in mind when working on portraits.
For instance, below is the color image of Whitney from earlier:</p>
<figure>
<img src="https://lh4.googleusercontent.com/-svJdyAqz1H0/UKFbh4bX-4I/AAAAAAAADXs/Klo2tFX_Oac/w960/whitney-color.png" alt="Whitney in color by Pat David" width="640" height="640" />
<figcaption>
Whitney in color
</figcaption>
</figure>

<p>The straight <em>Luminosity</em> conversion is below.
Click on the image to compare it to a version where the Red channel is set equal to the Green channel (giving a greater emphasis on the Reds):</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/whitney-luminosity.jpg" alt="Whitney Luminosity by Pat David" width="640" height="640" data-swap-src="whitney-bw-equal-RG.jpg"/>
<figcaption>
Whitney in Luminosity<br/>
(Click to compare Red channel = Green channel)
</figcaption>
</figure>



<h3 id="b-w-film-simulation">B&amp;W Film Simulation<a href="#b-w-film-simulation" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Due to the popularity of the <strong>Channel Mixer</strong> as a straightforward means of conversion with fine control over each RGB channel’s contribution, many people have used it as a basis for building profiles that they felt closely emulated the tonal response of classic black and white films.</p>
<p>Borrowing the table from <a href="http://www.prime-junta.net/pont/How_to/100_Curves_and_Films/_Curves_and_films.html#N104E4">Petteri Sulonen’s site</a>, here are some common RGB Channel Mixer values for emulating various B&amp;W films.
These aren’t exact, of course, but some people may find them useful, particularly as a starting point for further modifications.</p>
<table>
<thead>
<tr>
<th>Film</th>
<th>R, G, B</th>
</tr>
</thead>
<tbody>
<tr>
<td>Agfa 200X</td>
<td>18, 41, 41</td>
</tr>
<tr>
<td>Agfapan 25</td>
<td>25, 39, 36</td>
</tr>
<tr>
<td>Agfapan 100</td>
<td>21, 40, 39</td>
</tr>
<tr>
<td>Agfapan 400</td>
<td>20, 41, 39</td>
</tr>
<tr>
<td>Ilford Delta 100</td>
<td>21, 42, 37</td>
</tr>
<tr>
<td>Ilford Delta 400</td>
<td>22, 42, 36</td>
</tr>
<tr>
<td>Ilford Delta 400 Pro &amp; 3200</td>
<td>31, 36, 33</td>
</tr>
<tr>
<td>Ilford FP4</td>
<td>28, 41, 31</td>
</tr>
<tr>
<td>Ilford HP5</td>
<td>23, 37, 40</td>
</tr>
<tr>
<td>Ilford Pan F</td>
<td>33, 36, 31</td>
</tr>
<tr>
<td>Ilford SFX</td>
<td>36, 31, 33</td>
</tr>
<tr>
<td>Ilford XP2 Super</td>
<td>21, 42, 37</td>
</tr>
<tr>
<td>Kodak Tmax 100</td>
<td>24, 37, 39</td>
</tr>
<tr>
<td>Kodak Tmax 400</td>
<td>27, 36, 37</td>
</tr>
<tr>
<td>Kodak Tri-X</td>
<td>25, 35, 40</td>
</tr>
</tbody>
</table>
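<p>Applying one of these mixes is just the same weighted sum as any Channel Mixer conversion. A small sketch (the helper name and the selection of films are illustrative; the weights come from the table above):</p>

```python
# Hypothetical helper: apply one of the film mixes from the table above
# to an (R, G, B) pixel. Weights are percents, as in the Channel Mixer.
FILM_MIXES = {
    "Agfa 200X":      (18, 41, 41),
    "Ilford HP5":     (23, 37, 40),
    "Kodak Tmax 100": (24, 37, 39),
    "Kodak Tri-X":    (25, 35, 40),
}

def film_gray(pixel, film):
    r, g, b = pixel
    wr, wg, wb = FILM_MIXES[film]
    return round((r * wr + g * wg + b * wb) / 100)

# The same pixel lands on a slightly different gray per film stock:
grays = {film: film_gray((180, 120, 60), film) for film in FILM_MIXES}
```

<p>Note that most of the rows sum to roughly 100, which keeps a neutral gray pixel at (nearly) the same brightness after conversion.</p>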
<p>There’s a good reason that <strong>Channel Mixer</strong> is such a popular means for converting an image to grayscale.
It’s flexible and allows for a great level of control over the contributions from each channel.</p>
<p>Unfortunately, the only way to preview what is happening is in the tiny dialog window.
Even when zooming in, it can sometimes be frustrating to make fine adjustments to the channel contributions.</p>
<h2 id="decomposing-colors">Decomposing Colors<a href="#decomposing-colors" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Another method of converting the image to grayscale is to decompose the image into its constituent channels.
When looking at the <strong>Channel Mixer</strong> previously, there was an option to set one of the RGB channels to 100 (leaving the others at 0) in order to isolate that specific channel.</p>
<p>If you wanted to isolate each of the RGB channel contributions into its own layer, it would be tedious to do manually.
Luckily, GIMP has a built-in command to automatically <strong>Decompose</strong> the image into different channels:</p>
<p><span class="Cmd">Colors &rarr; Components &rarr; Decompose…</span></p>
<p>This will bring up the <strong>Decompose</strong> dialog box:</p>
<figure>
<img src="https://pixls.us/articles/digital-b-w-conversion-gimp/decompose-base.png" alt="GIMP Decompose color dialog" width="297" height="203" />
<figcaption>
The <strong>Decompose</strong> dialog
</figcaption>
</figure>

<p>The options available are which <em>Color model</em> to decompose to, and whether to create a new image with the decomposed channels as layers.
If <em>Decompose to layers</em> is not checked, each channel will instead end up in its own separate image (chances are that you’ll want to leave this checked).</p>
<p>The most important option is which <em>Color model</em> to decompose to.
Up to now we have mostly been considering RGB, but there are other modes that might be handy as well.
Let’s have a look at some of the most useful decomposition modes.</p>
<p>We will be using this image graciously provided by <a href="https://plus.google.com/u/0/+DimitriosPsychogios/about">Dimitrios Psychogios</a>:</p>
<figure>
<img src="https://lh4.googleusercontent.com/-t-5u50_U9tQ/VCGZmH6RJoI/AAAAAAAAAEk/S39lYLOPONE/w640-no/dmitrios-dice.jpg" alt="Dice by Dmitrios Psychogios" width="640" height="640" /> 
<figcaption>
<em>Dice</em> by <a href="https://plus.google.com/u/0/+DimitriosPsychogios/about">Dimitrios Psychogios</a> (<a class="cc" href="http://creativecommons.org/licenses/by-sa/4.0/" title="CC-BY-SA">cba</a>)
</figcaption>
</figure>



<h3 id="rgb-a-">RGB(A)<a href="#rgb-a-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>This is the <em>Color model</em> that we’ve been focusing on up to now, and it is usually the most helpful in terms of providing multiple sources to draw from.
This separates out the Red, Green, and Blue Channels into individual layers for you (and Alpha if your image has it).</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-z8HEEDSbIyU/VCGUtr9NgdI/AAAAAAAAAEI/ZWIyezyJnic/w960-no/GIMP-Decompose-RGB.jpg" alt="Dimitrios Psychogios Dice decompose RGB" width="960" height="320" />
<figcaption>
RGB decomposed.
</figcaption>
</figure>


<h3 id="hsv-hsl">HSV/HSL<a href="#hsv-hsl" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Hue, Saturation, and Value/Lightness is another useful decomposition, though usually only the Value or Lightness is useful for B&amp;W conversion.</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-9zlwkT0oEu8/VCGdQAnH88I/AAAAAAAAAE8/aTdDY_WJCXE/w960-no/GIMP-Decompose-HSV.jpg" alt="Dimitrios Psychogios Dice decompose HSV" width="960" height="320" />
<figcaption>
Hue, Saturation, Value (HSV) Channels
</figcaption>
</figure>

<p>The <em>Value</em> in <strong>HSV</strong> is derived according to a simple formula:</p>
<p class="Cmd aside">
Value, V = MAX( R, G, B )
</p>

<p>That is, the Value is simply the largest of the Red, Green, or Blue components.</p>
<figure class="big-vid">
<img src="https://lh3.googleusercontent.com/-X12euPvDqW4/VCGe8zG50II/AAAAAAAAAFQ/lcL2v-lDlxA/w960-no/GIMP-Decompose-HSL.jpg" alt="Dimitrios Psychogios Dice decompose HSL" width="960" height="320" />
<figcaption>
Hue, Saturation, Lightness (HSL) Channels
</figcaption>
</figure>

<p>The <em>Lightness</em> in <strong>HSL</strong> is derived from this formula:</p>
<p class="Cmd aside">
Lightness, L = <sup>( MAX( R, G, B ) + MIN( R, G, B ) )</sup>&frasl;<sub>2</sub><br/>
</p>

<p>Where <em>Lightness</em> is simply determined as the average of the largest and smallest component of RGB.</p>
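<p>Both formulas are simple enough to write out directly. A quick sketch with channel values in 0&ndash;255:</p>

```python
# The two formulas above, written out per pixel (channel values 0-255).
def hsv_value(r, g, b):
    return max(r, g, b)                           # V = MAX(R, G, B)

def hsl_lightness(r, g, b):
    return (max(r, g, b) + min(r, g, b)) / 2      # L = (MAX + MIN) / 2

# A saturated orange pixel:
v = hsv_value(200, 120, 40)      # the dominant Red component wins
l = hsl_lightness(200, 120, 40)  # average of largest and smallest
```

<p>For any pixel, L is the average of the largest and smallest components, so it can never exceed V.</p>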
<p>While Hue and Saturation may seem interesting, it should be obvious that the most useful channels for a grayscale conversion here would likely be <em>Value</em> or <em>Lightness</em>.
Overall, <em>Value</em> will tend to be a bit brighter than <em>Lightness</em> (since Lightness averages the largest and smallest components, it can never exceed Value for a given pixel).</p>
<h3 id="lab">LAB<a href="#lab" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>There is far too much information concerning the <a href="http://en.wikipedia.org/wiki/Lab_color_space">LAB colorspace</a> to really go into much detail here.  Suffice it to say that the <em>L</em> in <em>LAB</em> is for <strong>Lightness</strong>, while <em>A</em> and <em>B</em> are for color opponents (<strong>A</strong> = Green&hArr;Red, <strong>B</strong> = Blue&hArr;Yellow).</p>
<p class="aside">
Later articles about color toning will show some neat tricks using the LAB colorspace for adjustments.
</p>

<p>The <em>LAB</em> colorspace is based on a perceptual model (similar to the relative luminance previously discussed).
In fact, the <em>Lightness</em> in <em>LAB</em> is calculated using the cube root of the luminance from that function.</p>
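<p>That cube-root relationship can be sketched with the standard CIE L* formula. This assumes linear RGB values in 0&ndash;1 and the Rec.&nbsp;709 luminance weights; the constants are from the CIE definition, including the linear toe near black:</p>

```python
# Sketch of the CIE L* calculation (assumes linear RGB in 0-1 and
# Rec. 709 relative-luminance weights).
def cie_lightness(r, g, b):
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b      # relative luminance, 0-1
    # Cube-root perceptual response, with the standard linear toe near black:
    if y > (6 / 29) ** 3:
        f = y ** (1 / 3)
    else:
        f = y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16                           # L* in 0-100

# Mid gray (18% luminance) lands near L* = 50 - roughly "half bright"
# perceptually, even though it reflects far less than half the light:
l_star = cie_lightness(0.18, 0.18, 0.18)
```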
<figure class="big-vid">
<img src="https://lh6.googleusercontent.com/-9GO7aKHOqw8/VCGikj93xwI/AAAAAAAAAFg/4bXt5w2NfwI/w1014-h338-no/GIMP-Decompose-LAB.jpg" alt="Dimitrios Psychogios Dice decompose LAB" width="960" height="320" />
<figcaption>
LAB Channels
</figcaption>
</figure>

<p>As you can see, the only channel of any use for a B&amp;W conversion is really the <strong>Lightness</strong>, <em>L</em> channel.</p>
<h3 id="cmy-k-">CMY(K)<a href="#cmy-k-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Cyan, Magenta, Yellow, and Black (K) are most often discussed in terms of printing.
When doing the decomposition in GIMP, you’ll have to invert the results to make them useful.
Once you do, you may notice that the inverted channels are, in fact, identical to RGB (for a CMY decomposition):</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-251PiePdosc/VCGm-RMqdgI/AAAAAAAAAF4/SARBbmx8qqM/w960-no/GIMP-Decompose-CMY.jpg" alt="Dimitrios Psychogios Dice decompose CMY" width="960" height="320" />
<figcaption>
CMY conversion (inverted from direct conversion)
</figcaption>
</figure>

<p>CMYK produces a similar result, but adds another channel to control the level of black in the result.
Inverting the <em>Black</em>, <strong>K</strong> channel yields something usable.</p>
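<p>The reason inverted CMY matches RGB is that the decomposition here is (assumed to be) the simple complement of each channel, so inverting recovers the original exactly:</p>

```python
# Why inverted CMY matches RGB: assuming the decomposition uses the
# simple complement C = 255 - R (and likewise M/G, Y/B), inverting
# each channel gives back the original RGB values.
def rgb_to_cmy(r, g, b):
    return 255 - r, 255 - g, 255 - b

def invert(c, m, y):
    return 255 - c, 255 - m, 255 - y

restored = invert(*rgb_to_cmy(180, 120, 60))  # round-trips to (180, 120, 60)
```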
<figure>
<img src="https://lh6.googleusercontent.com/-VtvoazGyhuo/VCGp7IqVWPI/AAAAAAAAAGM/1xPe4DPRM0o/w640-no/GIMP-Decompose-CMYK.jpg" alt="Dimitrios Psychogios Dice decompose CMYK" width="640" height="640" />
<figcaption>
CMYK conversion with the Black, <strong>K</strong> channel inverted
</figcaption>
</figure>



<h3 id="ycbcr">YCbCr<a href="#ycbcr" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Anyone who has done video processing might recognize this colorspace representation, as it often shows up in digital video.
<em>YCbCr</em> is a means of encoding the RGB colorspace with three channels: <em>Luma</em> (<strong>Y</strong>) and two chroma-difference channels, Blue (<em>Cb</em>) and Red (<em>Cr</em>).</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-xTLwdn-hAyc/VCGr1ZaGr8I/AAAAAAAAAGs/qNoCdHxuYBQ/w960-no/GIMP-Decompose-YCbCr.jpg" alt="Dimitrios Psychogios Dice decompose YCbCr" width="960" height="320" />
<figcaption>
YCbCr
</figcaption>
</figure>

<p>Try to use the <em>256</em> variants of the ITU recommendations so the decomposition spans all 256 available values (the non-256 versions restrict the output to the 16&ndash;240 studio range).</p>
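<p>The two ITU weightings behind the Luma options differ only in their coefficients. A sketch (GIMP labels these decompositions with 470/709 in the name; the exact labels vary by version):</p>

```python
# The two ITU luma weightings behind the Y channel options (a sketch;
# full-range "256" variants apply these to the whole 0-255 scale).
def luma_601(r, g, b):
    # ITU-R BT.470/601 weights (standard-definition video)
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_709(r, g, b):
    # ITU-R BT.709 weights (high-definition video)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```

<p>Both sets of weights sum to 1, so neutral grays are unchanged; they differ mainly in how heavily green is weighted against red and blue.</p>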
<h3 id="so-what-s-the-result-">So What’s the Result?<a href="#so-what-s-the-result-" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Let’s summarize some of the most useful results from <code>Colors → Components → Decompose</code> for a B&amp;W conversion:</p>
<ul>
<li>RGB - All channels</li>
<li>HSV/HSL - V (Value) and L (Lightness)</li>
<li>LAB - L</li>
<li>CMYK - K</li>
<li>YCbCr - Y (Luma)</li>
</ul>
<p>This gives a total of 9 different types of color mode conversions that may be useful for generating a B&amp;W image.
It helps to visually see all of the options at once to get a better feel for what is going on:</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-nYBQlJWqAI4/VCHaoly4o9I/AAAAAAAAAHI/dI-EDksL5sk/w960-no/GIMP-Decompose-All.jpg" alt="Dimitrios Psychogios Dice decompose All" width="960" height="960" />
<figcaption>
All 9 useful channels from <code>Colors → Components → Decompose</code>
</figcaption>
</figure>

<p>Chances are that one of these conversions might prove useful as a direct B&amp;W conversion.</p>
<p>It helps to notice that the first 4 conversions are all color channels, while the last 5 conversions are brightness values based on different functions for achieving the results (<strong>K</strong>, <strong>V</strong>alue, <strong>L</strong>ightness, <strong>L</strong>, <strong>Y</strong> (luma)).</p>
<h4 id="the-script">The Script<a href="#the-script" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>I had previously written some Script-Fu to automate the task of generating these useful channel decompositions (it was tedious choosing each color model manually).</p>
<p>The script will take the active layer in an image, and decompose it to each of the useful color channels listed above, each on its own layer.
Once downloaded and placed into your <strong>Scripts</strong> folder, the command can be found here:</p>
<p><span class="Cmd">Colors &rarr; Color Decompose…</span></p>
<p class="aside">
<span>Downloading the Script</span>
The Script-Fu for <em>Color Decompose</em> can be downloaded here:<br/>
<a href="http://registry.gimp.org/node/27745" style="font-size:1rem;">Color Decompose on GIMP Registry</a><br/>
or downloaded from here: <br/>
<a href="https://docs.google.com/uc?export=download&id=0B21lPI7Ov4CVa2ZFQW5hajhYSWs" style="font-size:1rem;">Color Decompose on Google Drive</a>
</p>

<h4 id="looking-forward">Looking Forward<a href="#looking-forward" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>It’s likely that <em>some parts</em> of <em>some conversions</em> will be useful in some way.
I am personally rarely satisfied with any of the straight conversion options on their own;
instead, I like to pick and choose which conversions hold the best detail and tones in different parts of the image.
The fun is then combining them in such a way as to produce a pleasing final result.</p>
<h2 id="pseudogrey">Pseudogrey<a href="#pseudogrey" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>Pseudogrey (gr<strong><em>e</em></strong>y, not gray, per the original author, <a href="http://r0k.us/rock/index.html">Rich Franzen</a>) is a means for increasing the available levels of <em>perceived</em> gray in an image using a bit-stealing technique.</p>
<figure class="big-vid">
<img src="https://lh4.googleusercontent.com/-0_HhC6-uT3c/VCQ9aimZaZI/AAAAAAAAAHg/jhI4l2ImxwM/w960/Randi%2Bpseudogrey.jpg" alt="Randi pseudogrey by Pat David" width="960" height="906" />
<figcaption>
<em>Randi</em> in pseudogrey<br/>
by Pat David (<a class="cc" href="https://creativecommons.org/licenses/by-sa/4.0/">cba</a>)
</figcaption>
</figure>

<p>The basic approach in <strong>Pseudogrey</strong> is that you can achieve a much higher number of <em>perceived</em> gray values in an image, if you allow some of the pixels to stray just a tiny bit away from pure gray.  For instance, if a pixel value in a true gray image was: 180, 180, 180, <strong>Pseudogrey</strong> may actually make the pixel value something like 180, 18<strong>1</strong>, 180.</p>
<p>That is, the Green value may be just a bit higher.  The <a href="http://blog.patdavid.net/2012/06/true-pseudogrey-in-gimp.html">full post on Pseudogrey</a> goes into much more detail about the algorithm.</p>
<p>The results from using Pseudogrey will follow the same model as for Luminosity desaturation, but will provide a much larger range of tones (1786 possible shades vs 256 in a truly gray image).</p>
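<p>To make the bit-stealing idea concrete, here is a toy sketch. This is <em>not</em> Rich Franzen’s exact algorithm; the channel ordering and rounding are illustrative assumptions. The idea is simply that small single-channel nudges create intermediate perceived grays between adjacent pure-gray levels:</p>

```python
# Toy sketch of the pseudogrey idea (NOT the original algorithm):
# approximate a higher-precision gray by nudging individual RGB
# channels by at most one 8-bit step.
def pseudogrey(gray16):
    """Map a 0-65535 gray to an (R, G, B) triple whose perceived
    brightness falls between adjacent pure-gray 8-bit levels."""
    base = gray16 // 257          # nearest-lower pure-gray 8-bit level
    frac = (gray16 % 257) / 257   # position between adjacent levels
    r = g = b = base
    # "Steal bits" one channel at a time, ordered by luminance weight:
    # blue contributes least to perceived brightness, green the most.
    if frac > 0.25:
        b += 1
    if frac > 0.5:
        r += 1
    if frac > 0.75:
        g += 1
    return r, g, b
```

<p>Each in-between step shifts the pixel only one count away from neutral in a single channel, which is why the color error stays imperceptible.</p>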
<p>There are a couple of ways to convert images to pseudogrey.</p>
<p>There is a Script-Fu available for download:</p>
<p class="aside">
<span>Downloading the Pseudogrey script</span>
The Script-Fu for <em>Pseudogrey</em> can be downloaded here:<br/>
<a href="http://registry.gimp.org/node/26515" style="font-size:1rem;">Pseudogrey on GIMP Registry</a><br/>
or downloaded from here: <br/>
<a href="https://docs.google.com/uc?export=download&id=0B21lPI7Ov4CVOW9yTnBtbjVlaEk" style="font-size:1rem;">Pseudogrey on Google Drive</a>
</p>

<p>Once the file has been downloaded and placed into your <em>Scripts</em> folder, the command can be found under:</p>
<p class="Cmd">
Colors &rarr; Pseudogrey…
</p>

<p>Alternatively, if <a href="http://gmic.sourceforge.net/" title="G&#39;MIC Homepage">G’MIC</a> is installed, then the command can be found under the Black &amp; white filters:</p>
<p class="Cmd">
G’MIC &rarr; Black &amp; white &rarr; Black &amp; white
</p>

<p>At the end of all of the various options in the filter, there is a <em>Pseudo-gray dithering</em> option to apply the algorithm at various levels (higher levels increase the distance from true gray for each pixel).</p>
<p>Pseudogrey can be helpful in areas with slight tonal value changes over a large area, as this is often where banding will become visible in an 8-bit image.
While the differences may be slight in many cases, if allowing the tiniest amount of color shifting to creep into the image for an expanded tonal range is ok, then pseudogrey is a great option to have.</p>
<h2 id="gegl-c2g">GEGL C2G<a href="#gegl-c2g" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>The Generic Graphics Library (GEGL) is the underlying graphics engine for GIMP.
There is one neat function in GEGL specifically for B&amp;W conversions called <em>Color 2 Grayscale</em> (c2g).
It can be found on the <em>Tools</em> menu in GIMP:</p>
<p class="Cmd">
Tools &rarr; GEGL Operation…
</p>

<p>Rolf Steinort covers c2g briefly in <a href="http://blog.meetthegimp.org/episode-084-the-3-letter-acronym-show/">episode 84 of Meet the GIMP</a>.
<a href="http://blog.wbou.de/index.php/2009/08/04/black-and-white-conversion-with-gegls-c2g-color2gray-in-gimp/">Paul Bou looks</a> at using c2g for B&amp;W conversions in a little more detail, and <a href="http://jcornuz.wordpress.com/2009/05/30/could-this-be-the-ultimate-black-and-white-converter/">Joel Cornuz asks</a> whether c2g could be the “ultimate” B&amp;W converter.
It may not be worth all the hyperbole, but c2g does do some very interesting things.</p>
<p>The operation considers each pixel relative to its neighbors within a given radius.
The value determined is evaluated as a function of perceived luminance weighted against neighboring pixels.
The <a href="http://www.gegl.org/operations.html#op_gegl:c2g">description from GEGL.org</a> is:</p>
<blockquote>
<p>Color to grayscale conversion, uses envelopes formed from spatial color differences to perform color-feature preserving grayscale spatial contrast enhancement</p>
</blockquote>
<p>In practice, c2g will attempt to scale the values of pixels within its neighborhood (radius) to maximize contrast.
What some people like about c2g is that the operation will also introduce a nice range of synthetic grain during the conversion.
There are ways to minimize the resulting grain by adjusting settings, though.</p>
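<p>To build some intuition for what a neighborhood-weighted conversion does, here is a toy one-dimensional sketch of local contrast normalization. This is only an illustration of the general idea; the actual c2g operation works on 2D color images and samples spatial color differences stochastically:</p>

```python
# Toy 1-D sketch of neighborhood-based contrast stretching (an
# illustration of the general idea only, NOT GEGL's c2g algorithm):
# each pixel's luminance is stretched against the min/max found
# within a given radius around it.
def local_contrast(lums, i, radius):
    window = lums[max(0, i - radius): i + radius + 1]
    lo, hi = min(window), max(window)
    if hi == lo:
        return 0.5                      # flat neighborhood: no contrast
    return (lums[i] - lo) / (hi - lo)   # rescale within the local range

# A dim row next to a bright edge gets spread across the full range:
row = [0.40, 0.42, 0.45, 0.80, 0.85]
stretched = [local_contrast(row, i, 2) for i in range(len(row))]
```

<p>Note how small local differences get amplified toward the full range; this is also why too small a radius produces halos at strong edges.</p>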
<p>Let’s consider this test image:</p>
<figure class='big-vid'>
<img src='https://4.bp.blogspot.com/-dP86WT3T1Ds/UO3t-D_wewI/AAAAAAAAEwg/lObIv6J_5-M/w960/Cars-Luminosity.jpg' alt='Deerfield Beach luminosity GIMP' width='960' height='662' />
<figcaption>
Straight <em>Luminosity</em> desaturation in GIMP
</figcaption>
</figure>

<p>At first glance, GEGL c2g will likely produce ugly results.
The default settings are not conducive to producing a pretty image:</p>
<figure class='big-vid'>
<img src='https://3.bp.blogspot.com/-wGXTbiRqbwc/UO3uc418VjI/AAAAAAAAEws/8sdZBXcgN-U/w960/Cars-c2g-default.jpg' data-swap-src='https://4.bp.blogspot.com/-dP86WT3T1Ds/UO3t-D_wewI/AAAAAAAAEwg/lObIv6J_5-M/w960/Cars-Luminosity.jpg' alt='Deerfield Beach c2g default GIMP by Pat David' width='960' height='662' />
<figcaption>
    c2g conversion, default settings (radius 300, samples 4, iterations 10)<br/>
(Click image to compare to original)
</figcaption>
</figure>

<p>The default settings will (usually) produce a nasty halo effect on edges where the radius is not large enough to fully consider transitions.
The edges of the buildings/trees against the sky show this particularly.
There is also an excessive amount of synthetic graininess to the result.</p>
<p>Tweaking parameters can lead to better results at the cost of processing time.
GEGL c2g is not a fast algorithm.</p>
<p>Haloing can be decreased by increasing the radius and graininess can be decreased by increasing the samples or iterations.
Iterations seem to have a larger effect on overall noisiness in the result but (again) at the cost of increased processing time.</p>
<figure class='big-vid'>
<img src='https://2.bp.blogspot.com/-6YArLzaEH5g/UO3wD3AXOcI/AAAAAAAAExk/S8eAr2D0oQI/w960/Cars-c2g-r750-s8-i15.jpg' data-swap-src='https://3.bp.blogspot.com/-wGXTbiRqbwc/UO3uc418VjI/AAAAAAAAEws/8sdZBXcgN-U/w960/Cars-c2g-default.jpg' alt='Deerfield Beach c2g r750 s8 i15 GIMP by Pat David' width='960' height='662' />
<figcaption>
Better results after increasing some parameters (radius 750, samples 8, iterations 15)<br/>
(Click image to compare to default parameters)
</figcaption>
</figure>

<p>Increasing the radius helped to alleviate some of the halos and will allow the algorithm to spread the contrast over a larger area.
The increase in samples and iterations helps to keep the noise down to a more manageable level as well.
Refining even further yields slightly better results:</p>
<figure class='big-vid'>
<img src='https://2.bp.blogspot.com/-lqqXT-1WS5c/UO3zfMVGNOI/AAAAAAAAEyc/GNUDbf10f_U/w960/Cars-c2g-r1500-s8-i20.jpg' data-swap-src='https://4.bp.blogspot.com/-dP86WT3T1Ds/UO3t-D_wewI/AAAAAAAAEwg/lObIv6J_5-M/w960/Cars-Luminosity.jpg' alt='Deerfield Beach c2g r1500 s8 i20 GIMP by Pat David' width='960' height='662' />
<figcaption>
Better results after increasing some parameters (radius 1500, samples 8, iterations 20)<br/>
(Click image to compare to original)
</figcaption>
</figure>

<p>At this point the noise is nicely suppressed while the halos have mostly been eliminated.
The overall image still has more contrast than the straight luminosity desaturation (click to compare) and the contrast has been <em>weighted for the surrounding pixels as well</em>.</p>
<p>If a luminosity desaturation will choose a pixel value based on the perceived color brightness, c2g will do the same in addition to weighting the result relative to neighboring pixels.</p>
<p>For example, below is an optical illusion showing the effect on perceived luminosity relative to nearby brightness:</p>
<figure>
<img src='https://lh6.googleusercontent.com/-OID1AdW-hNU/VCRoplYzRLI/AAAAAAAAAIk/BiUyArqPQA8/w507-h395-no/Same_color_illusion.png' alt='checkerboard luminosity optical illusion' width='507' height='395' />
<figcaption>
Square A and B are the same value of gray!
</figcaption>
</figure>

<p>Squares A &amp; B are the same exact shade of gray.
The reason we perceive B as lighter than A is due to the way our eyes are perceiving nearby colors (and our expectations are strengthened by the checkerboard pattern as well).</p>
<p>Running the image through c2g aligns the pixel values more closely with what our eyes see:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-1hkcjYC9M8g/VCRoplfiphI/AAAAAAAAAIo/p_VGtseYAXE/w507-h395-no/illusion.png' alt='checkerboard luminosity optical illusion' width='507' height='395' />
<figcaption>
After letting c2g do its thing
</figcaption>
</figure>

<p>This operation can be very handy for bringing out micro-contrasts in an image (or increasing global contrast at large radius settings).</p>
<h2 id="conversion-examples">Conversion Examples<a href="#conversion-examples" class="header-link"><i class="fa fa-link"></i></a></h2>
<p><em>Finally</em>, a look at a simple workflow for applying these various methods of grayscale conversion to arrive at a final result.</p>
<p>The overall workflow here will be to decompose the image to various grayscale layers.
Then to investigate each of the different versions to identify features of interest aesthetically.
Finally, combine the different decompositions and mask accordingly to highlight those features or tones.</p>
<h3 id="pretty-woman">Pretty Woman<a href="#pretty-woman" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Do a <a href="https://www.flickr.com/creativecommons">Creative Commons search</a> on Flickr, and it’s <em>very</em> likely that photographer <a href="https://www.flickr.com/photos/72213316@N00/">Frank Kovalchek</a> will show up in some fashion.  He liberally licenses many photographs under <a href="http://creativecommons.org/">Creative Commons</a> licenses, and we will be using one of his portraits for this first example.</p>
<figure>
<img src='https://lh3.googleusercontent.com/-uac9hP5_BH8/VCWKk9tPJXI/AAAAAAAAAKI/x_7FP3Zp9QA/w640-no/aldude-color.jpg' alt='GIMP B&W base image by Frank Kovalchek' width='640' height='801' />
<figcaption>
<a href="http://www.flickr.com/photos/72213316@N00/4589410278"><em>What a sweet looking portrait</em></a> by <a href="http://www.flickr.com/people/72213316@N00/">Frank Kovalchek</a> on Flickr
(<a class='cc' href='https://creativecommons.org/licenses/by/2.0/' title='Creative Commons - By Attribution'>cb</a>)
</figcaption>
</figure>

<p>Utilizing <a href="#the-script">the script from earlier</a> to quickly break the image down into multiple layers using different decomposition modes produces a nice array overview to consider:</p>
<figure class='big-vid'>
<img src='https://lh6.googleusercontent.com/-puR1O1BYDKg/VCWQ8KlJGoI/AAAAAAAAAKo/pHHv5g7OMEI/w960-no/aldude-array.jpg' alt='GIMP B&W Decompose Array' width='960' height='1202' />
</figure>

<p>These various decompositions supply a large number of possible variations on the way to a finished product.
Keep in mind that the goal in this example is to maintain good tonal density while imparting a sense of texture and detail.</p>
<h4 id="the-scarf">The Scarf<a href="#the-scarf" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>As good a starting point as any, consider the texture and detail of the scarf.  Looking at the various decompositions in the array, the question you should be asking yourself is:</p>
<blockquote>
<p>Which of these results produces the best quality/texture in the fabric of the scarf?</p>
</blockquote>
<p>Looking at the previews leads to three possible choices: <em>Luma Y709F</em>, <em>Luma Y470F</em>, and <em>HSL - Lightness</em>.
Of those let’s go with <em>Luma Y709F</em>.
This is very subjective, of course.
The important point is that the choice is made because of the qualities a layer possesses <em>for a particular purpose</em>.</p>
<figure>
<img src='https://lh3.googleusercontent.com/-qmNK-DKRMX8/VCW1_ul2rJI/AAAAAAAAALA/HcGa1bm75GQ/w640-no/aldude-bw-y709f.jpg' alt='GIMP B&W y709f' width='640' height='801' />
<figcaption>
The Y709F - Luma channel as a “base” layer - chosen for the fabric texture
</figcaption>
</figure>


<p>The main focus of the image will be the model’s face, but you will still want to retain detail and texture in the scarf as well.</p>
<h4 id="the-skin">The Skin<a href="#the-skin" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>Looking at the model’s skin, there is already fine detail, but it could use a bit more emphasis overall.
Perhaps make the skin a little brighter and higher-key to offset the dark background and the scarf.
It would be nice to smooth and soften the skin tones as well.</p>
<p>Keeping that in mind, look back at the various decompositions again, this time with an eye towards skin tones and her face.
Not surprisingly, the <strong>RGB - Red</strong> channel looks very pretty (as does the HSV - Value).
It’s fairly common for the red channel to be flattering on (Caucasian) skin.
There is even an old trick of using the red channel as an overlay on a color image to help “enhance” skin tones.</p>
<p>So let’s try that here.
Place the <em>RGB - Red</em> channel over the <em>Luma - y709f</em> channel and change the layer blending mode to <strong>Overlay</strong>.</p>
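<p>The textbook overlay blend explains why this brightens the skin: dark base values get multiplied (darkened) and light ones screened (brightened). A per-channel sketch with values in 0&ndash;1 (note this is the standard formula; GIMP’s legacy Overlay implementation differed slightly):</p>

```python
# Standard overlay blend, per channel with values in 0-1: dark base
# values are multiplied (darkened), light ones screened (brightened).
# (Textbook formula; GIMP's legacy Overlay implementation differed.)
def overlay(base, blend):
    if base < 0.5:
        return 2 * base * blend               # multiply branch
    return 1 - 2 * (1 - base) * (1 - blend)   # screen branch

# A bright skin value is pushed brighter, a dark one darker,
# which increases overall contrast:
lit = overlay(0.7, 0.8)
dark = overlay(0.2, 0.3)
```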
<figure>
<img src='https://lh5.googleusercontent.com/-K2mv-EBujdo/VCW5HbLDMQI/AAAAAAAAALU/zLAkLGclIQo/w640-no/aldude-bw-y709f-Red-Overlay.jpg' alt='GIMP B&W y709f with Red channel Overlay' data-swap-src='https://lh3.googleusercontent.com/-qmNK-DKRMX8/VCW1_ul2rJI/AAAAAAAAALA/HcGa1bm75GQ/w640-no/aldude-bw-y709f.jpg' width='640' height='801' />
<figcaption>
Luma Y709F base, with Red channel over (layer blend mode: Overlay)<br/>
(Click to compare to base Y709F - Luma)
</figcaption>
</figure>

<p>Visually this appears to have more impact, but the skin may be blown out a little too much.
One option to attenuate this would be to lower the opacity on the <em>RGB - Red</em> layer.</p>
<p class="aside">
Also, note that very often the visual impact may also be due to the higher contrast in the image at this point.
Sometimes it’s best to stand up and look away from the image for a while before committing to a change…
</p>

<p>The problem with adjusting the opacity for the entire layer is that the ratio of levels between the skin and scarf may not be desirable for the final output.
Adjusting the opacity might reduce the effect on the skin, but at the same time will reduce the effect on the scarf by an equal amount.
What is needed is a way to apply the effect stronger on the scarf or skin separately.</p>
<p>This is exactly what <em>Layer Masks</em> are for!</p>
<h4 id="masks">Masks<a href="#masks" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>At this point a layer mask could be added to the <em>RGB - Red</em> layer, and then painted by hand to modify the intensity by isolating the face and giving a little less opacity to the scarf.
It’s a lot of tedious, detailed work.</p>
<p>However, if you look back on the array of decompositions you may notice that channels like <em>RGB - Blue</em> and <em>RGB - Green</em> look pretty good for isolating the face from the scarf already.</p>
<p>So we are going to use the <em>RGB - Green</em> layer and apply it as a layer mask to the <em>RGB - Red</em> layer.</p>
<p>The <strong>Layers</strong> palette should look something like this in GIMP now:</p>
<figure>
<img src='https://lh6.googleusercontent.com/-o_IpVAcmp1o/VCW-PQFwKRI/AAAAAAAAALo/rJEkns_zyJQ/s0-no/aldude-bw-y709f-RoverlayMask-Layers.png' alt='GIMP Layer Palette with layer mask' width='197' height='180' />
</figure>

<p>Keep in mind that the darker an area of a layer mask is, the more transparent that area of its layer becomes.
The lighter areas will show more of the layer the mask is applied to.
In this case, the lighter areas will allow more of the <em>RGB - Red</em> layer to show, while darker areas will show more of the layer below, <em>Luma - Y709F</em>.</p>
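<p>Per pixel, a mask is just a weighted mix of the two layers. A sketch with values in 0&ndash;1 (ignoring blend modes, so it shows the masking behavior only):</p>

```python
# How a layer mask weights the composite, per pixel (values 0-1):
# lighter mask values show more of the masked (upper) layer,
# darker values show more of the layer beneath it.
def composite(upper, lower, mask):
    return mask * upper + (1 - mask) * lower

# A white mask pixel shows only the upper layer, black only the lower,
# and mid-gray mixes the two:
top_only = composite(0.9, 0.4, 1.0)
bottom_only = composite(0.9, 0.4, 0.0)
```

<p>This is also why adjusting the mask with <em>Levels</em> or <em>Curves</em> later works: it simply reshapes these per-pixel weights.</p>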
<p>The results at this point with the mask:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-I7vWCN-LKD0/VCW_h0zI3GI/AAAAAAAAAL8/0upOtVWT_54/w640-no/aldude-bw-y709f-Red-Overlay-Masked.jpg' alt='GIMP B&W y709f with Red channel Overlay' data-swap-src='https://lh5.googleusercontent.com/-K2mv-EBujdo/VCW5HbLDMQI/AAAAAAAAALU/zLAkLGclIQo/w640-no/aldude-bw-y709f-Red-Overlay.jpg' width='640' height='801' />
<figcaption>
<em>RGB - Red</em> as overlay with <em>RGB - Green</em> as a layer mask<br/>
(Click to compare without the layer mask)
</figcaption>
</figure>

<p>What this has done is isolate the model’s face from the surrounding scarf.
You can now modify the opacity of the layer, or adjust the values of the mask using <em>Levels</em> or <em>Curves</em> to adjust the intensity of the result.</p>
<p>Any changes to the <em>RGB - Red</em> layer will now be masked to apply mainly to the model’s face.</p>
<p>Looking at the results, the tones of the scarf have become much flatter, while the model’s face has brightened up.
On reflection, the ratios look a bit backwards: the scarf has flattened out, and the face has brightened a bit too much.</p>
<p>To flip the ratios, simply invert the colors of the layer mask.
Select the <em>mask</em> (not the layer itself!), and run:</p>
<p class="Cmd">
Colors &rarr; Invert
</p>

<p>The layers palette will now look like this:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-4-xP0wRsso8/VCXBL2IBTPI/AAAAAAAAAMQ/-PRpfnuFGKc/s0-no/aldude-bw-y709f-RoverlayMaskInvert-Layers.png' alt='GIMP Layer Palette with inverted mask' />
</figure>

<p>The result on the image so far:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-YjH7FDGZhYg/VCXCCnIdt-I/AAAAAAAAAMk/Am326xAfjos/w640-no/aldude-bw-y709f-Red-Overlay-Masked-Inverted.jpg' alt='GIMP B&W y709f with Red channel Overlay' data-swap-src='https://lh3.googleusercontent.com/-I7vWCN-LKD0/VCW_h0zI3GI/AAAAAAAAAL8/0upOtVWT_54/w640-no/aldude-bw-y709f-Red-Overlay-Masked.jpg' width='640' height='801' />
<figcaption>
Inverted mask results<br/>
(Click to compare to non-inverted mask)
</figcaption>
</figure>

<p>At this point the results look pretty nice and would make a fine stopping point.
The overlay and mask added some nice depth to the scarf fabric while maintaining a pleasing effect on the skin of the model as well.
More work could be done, if desired, by adjusting the layer mask levels to increase or decrease the effect on the model&rsquo;s skin, but this looks good as it is.</p>
<p>A final comparison of the results against a straight color desaturation:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-YjH7FDGZhYg/VCXCCnIdt-I/AAAAAAAAAMk/Am326xAfjos/w640-no/aldude-bw-y709f-Red-Overlay-Masked-Inverted.jpg' alt='GIMP B&W y709f with Red channel Overlay' data-swap-src='https://lh3.googleusercontent.com/-EFb0VVJFFRg/VCXDVN9PVOI/AAAAAAAAAM0/f5X1i55yGcs/w640-no/aldude-desaturation.jpg' width='640' height='801' />
<figcaption>
Final result<br/>
(Click to compare to straight color desaturation)
</figcaption>
</figure>

<p>This path was a little fussier than doing a straight color desaturation, but the results are much nicer and visually more interesting.</p>
<h3 id="methuselah">Methuselah<a href="#methuselah" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>Well, this isn’t the <em>actual</em> <a href="http://en.wikipedia.org/wiki/Methuselah_(tree)">Methuselah</a>, but it is a similar species of Bristlecone Pine.  Once again, image courtesy of <a href="http://www.flickr.com">Flickr</a> user <a href="http://www.flickr.com/people/72213316@N00/">Frank Kovalchek</a>.</p>
<figure>
<img src='https://lh3.googleusercontent.com/-uROcbQJ8fow/VCXUL3EMceI/AAAAAAAAANM/PXFRRZ3bAGg/w640-no/aldude2-color.jpg' alt='GIMP B&W Base Image 2 by Frank Kovalchek' width='640' height='853' />
<figcaption>
<a href="http://www.flickr.com/photos/72213316@N00/6956555116"><em>Bristlecone pine hanging on for dear life at 10,000 feet</em></a><br/>
by <a href="http://www.flickr.com/people/72213316@N00/">Frank Kovalchek</a> on Flickr (<a class='cc' href='https://creativecommons.org/licenses/by/2.0/'>cb</a>)
</figcaption>
</figure>

<p>As before, a first look at multiple decomposition modes originally pointed to <em>Luma - Y709F</em> as being a good candidate for the conversion.
In this case, the focus would be on the texture of the tree itself.
The <em>RGB - Green</em> decomposition also looks quite good to use as a base moving forward.</p>
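<p>For reference, Rec.709 luma, which the <em>Luma - Y709F</em> decomposition is built on, weights the channels roughly like this (the standard coefficients; GIMP&rsquo;s exact pipeline, including gamma handling, differs):</p>

```javascript
// Rec.709 luma: green dominates the perceived brightness,
// blue contributes the least.
function luma709(r, g, b) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}
```

<p>This is why the green channel often looks closest to the luma decomposition, while red and blue can diverge more strongly.</p>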
<p>The primary focus is the gnarled old tree itself, and the secondary focus is the lighting of the sun across the ground.</p>
<figure>
<img src='https://lh5.googleusercontent.com/--F61om9H5tI/VCXdbocVErI/AAAAAAAAAN8/TcRjQ66gxbs/w640-no/aldude2-bw-green.jpg' alt='GIMP B&W Base Image 2 Green Channel' width='640' height='853' />
<figcaption>
<em>RGB - Green</em> channel decomposition
</figcaption>
</figure>

<p>While the <em>RGB - Green</em> channel is nice for the tree texture, the sky still appears too bright and the ground could be a bit darker compared to the tree.
The sunlight on the upper branches of the tree and topping the brush on the ground gets slightly lost when the sky is so bright comparatively.</p>
<p>Having found a good layer for the tree texture, the other decompositions are examined for something that represents the sky and ground a little better.
The <em>RGB - Red</em> channel is a good compromise (the <em>RGB - Blue</em> channel is a little too noisy).</p>
<figure>
<img src='https://lh3.googleusercontent.com/-hNjzGq6TQyk/VCXg6Bxp1RI/AAAAAAAAAOQ/Pk_Rr5LwPR4/w640-no/aldude2-bw-red.jpg' alt='GIMP B&W Base Image 2 Red Channel' data-swap-src='https://lh5.googleusercontent.com/--F61om9H5tI/VCXdbocVErI/AAAAAAAAAN8/TcRjQ66gxbs/w640-no/aldude2-bw-green.jpg' width='640' height='853' />
<figcaption>
<em>RGB - Red</em> channel decomposition<br/>
(Click to compare to <em>RGB - Green</em>)
</figcaption>
</figure>

<p><em>RGB - Red</em> looks like a great candidate for the sky and ground, while <em>RGB - Green</em> will do nicely for the tree textures.
As before, layer masks can be used to modify the mix of the two layers to arrive at a final result.</p>
<p>Set the <em>RGB - Green</em> channel above the <em>RGB - Red</em> channel on the layer palette, and add a layer mask to the <em>RGB - Green</em> channel layer initialized to <strong>Black (full transparency)</strong>.
This lets all of the underlying <em>RGB - Red</em> channel layer show through.</p>
<figure>
<img src='https://lh4.googleusercontent.com/-pkmlbFtjCJk/VCXiTrLvIUI/AAAAAAAAAOk/XNYLpZaLmb0/w197-h180-no/aldude2-bw-green-Layers.png' alt='GIMP B&W Green channel with mask' />
<figcaption>
Red channel layer, with Green channel over + mask
</figcaption>
</figure>

<p>Now, with the layer mask active (note the white outline around the layer mask, not the layer itself, above), paint with white to allow that portion of the <em>RGB - Green</em> channel layer to show through.
Painting with white makes the layer the mask is attached to opaque in those areas, so focus on painting white where the tree is.</p>
<p>Below is a quick mask to illustrate.</p>
<figure>
<img src='https://lh3.googleusercontent.com/-zA0mNObEO1M/VCXj0WsYapI/AAAAAAAAAPI/8OEhalXw8Y8/w640-no/aldude2-bw-green-mask.jpg' alt='GIMP B&W Tree Layer Mask'  width='640' height='853' />
<figcaption>
It’s only a quick mask, don’t judge it too harshly…
</figcaption>
</figure>

<p>The layers at this point will look like this:</p>
<figure>
<img src='https://lh3.googleusercontent.com/-6Vmzoy7z60I/VCXknZZpU9I/AAAAAAAAAPw/y4cHaEAoz5c/w197-h179-no/aldude2-bw-green-Layers-mask.png' alt='GIMP Layer Mask B&W Dialog' />
</figure>

<p>The results from applying the mask above to the image:</p>
<figure>
<img src='https://lh6.googleusercontent.com/-pBi62NxVALI/VCXkNUuHfrI/AAAAAAAAAPg/1uL7GM0IL2E/w640-no/aldude2-bw-greenred-masked.jpg' alt='GIMP B&W Tree Final' data-swap-src='https://lh4.googleusercontent.com/-H-SKh5ALI2Q/VCYlWbprY7I/AAAAAAAAAQM/9W2w-PsDUXg/w640-no/aldude2-bw-desat.jpg'  width='640' height='853' />
<figcaption>
Final blend of <em>RGB - Red</em> and <em>RGB - Green</em> channels with mask<br/>
(Click to compare to straight desaturation)
</figcaption>
</figure>

<p>This could be a good final version, though there is still a bit of noise in the upper-left corner of the sky from the Red channel.
This could be fixed by adding another layer mask just for the sky which would allow adjustments to the levels of the sky relative to everything else.</p>
<h2 id="grain">Grain<a href="#grain" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>In his great tutorial on <a href="http://www.prime-junta.net/pont/How_to/n_Digital_BW/a_Digital_Black_and_White.html">Digital Black and White</a>, Petteri Sulonen speaks a bit about grain in B&amp;W images.
There are a few different methods of adding synthetic grain to an image, but visually the results are often less than impressive.</p>
<p>Petteri was kind enough to make available a grain field that he processed himself from scanned film.
An easy way to add grain to an image using this grain field is to add it as a layer over the image, set the layer blending mode to <em>Overlay</em>, and adjust opacity to suit.</p>
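<p>Per-pixel, the classic <em>Overlay</em> blend that mixes the grain in can be sketched like this (values normalized to [0, 1]; note that GIMP&rsquo;s legacy Overlay mode may differ in detail from this textbook formula):</p>

```javascript
// Classic Overlay: darkens where the base is dark, brightens
// where it is light; mid-gray grain (0.5) leaves the base alone.
function overlay(base, grain) {
  return base < 0.5
    ? 2 * base * grain
    : 1 - 2 * (1 - base) * (1 - grain);
}

// Layer opacity then mixes the blended result back toward the base.
function withOpacity(base, blended, opacity) {
  return base * (1 - opacity) + blended * opacity;
}
```

<p>Because mid-gray is neutral, a well-prepared grain field centered around 0.5 adds texture without shifting the overall tonality.</p>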
<figure>
<img src='https://lh4.googleusercontent.com/-CsAOUoeabZU/VCmVscMpefI/AAAAAAAAAQo/Pd3BTmB49_k/w550-h315-no/aldude2-100-grain.png' alt='GIMP B&W Tree Grain Comparison' data-swap-src='https://lh4.googleusercontent.com/-2IKeDLcLjBI/VCmVsrB4oGI/AAAAAAAAAQs/OgkgI4FeTJI/w550-h315-no/aldude2-100-nograin.png' />
<figcaption>
100% crop with Petteri&rsquo;s grain field applied as an <em>Overlay</em> layer<br/>
(Click to compare with no grain)
</figcaption>
</figure>

<p>You can download the grain-field to use here: <a href="http://farm8.staticflickr.com/7228/7314861896_292120872b_o.png">Petteri Sulonen’s grain field</a>.</p>
<h2 id="conclusion">Conclusion<a href="#conclusion" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>There are many ways to get to a monochrome image.
The important idea to take away from this article is to build up <em>elements</em> of the final image from multiple conversion methods, controlling and applying them as needed to best serve the final result.</p>
<p>Mix and match the methods presented here to get to the best base for further modifications.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Commenting ]]></title>
            <link>https://pixls.us/blog/2014/09/commenting/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/commenting/</guid>
            <pubDate>Mon, 15 Sep 2014 21:30:22 GMT</pubDate>
            <description><![CDATA[  <img src="https://lh6.googleusercontent.com/-9gf4njPcjnY/VBdcwEcXBfI/AAAAAAAARcU/pRU0aMSq54o/w1650-no/Relics%2Bin%2BThomaskirche.jpg" /><br/>
                 <h1>Commenting</h1>  
                 <h2>I still don't have a good solution</h2>   
                <p>First things first.
I forgot to actually link to the new <a href="https://pixls.us/about" title="About Pixls.us">About page</a> in my last post.
So <a href="https://pixls.us/about">here it is</a>.
As with all things related to the site, any feedback, comments, or criticisms are welcome!</p>
<p>Speaking of feedback, comments, and criticisms, I wanted to write about it for a moment.</p>
<p>First, I want to thank everyone who has taken the time to contact me and provide me feedback on the site.
You have no idea how valuable it is, both as a motivator and as a means to know when something is off.
I appreciate and give my full attention to each and every person and idea thrown at me.  Thank you!</p>
<!-- more -->
<p>From the beginning I have been considering how to let everyone interact with the site and posts.
It would be so much easier for folks to leave a comment on a page (or forum) directly.
Particularly if it allows everyone to view the conversation.</p>
<h2 id="disqus"><a href="#disqus" class="header-link-alt">Disqus</a></h2>
<p>One thing I could do relatively easily is just use a third party commenting system, like <a href="https://disqus.com/">Disqus</a>.
They make it <em>so</em> easy it almost seems silly <strong>not</strong> to do it.
An account, a few lines of javascript, and done.</p>
<p>This method comes with a price, though.
A price in both user privacy concerns as well as the fact that comments are no longer mine (pixls.us) to manage and archive.
I don’t know that I’m willing to pay that price yet just for convenience.</p>
<p>If anything, I may set it up as a temporary solution while I work on something a little more long term.</p>
<h2 id="discourse"><a href="#discourse" class="header-link-alt">Discourse</a></h2>
<p>From what I’ve seen so far, <a href="http://www.discourse.org/">Discourse</a> is the long term solution that I would like to get up and running.
It’s also “Yet-Another-Thing” I should thank darix on <code>#darktable</code> for pointing me to.</p>
<p>The only drawback at the moment is that my hosting provider doesn’t have what I need to get it running (relatively easily).
There are a couple of options for hosted solutions that I may go with, but I want to focus on getting the content ready to go for an “official” launch before I get too far down that rabbit hole.</p>
<h2 id="conclusion"><a href="#conclusion" class="header-link-alt">Conclusion</a></h2>
<p>Yes, I know there’s a need for having some sort of commenting system available for everyone to participate!
I’ll get one running just as soon as I can.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ An About Page and Help ]]></title>
            <link>https://pixls.us/blog/2014/09/an-about-page-and-help/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/an-about-page-and-help/</guid>
            <pubDate>Sun, 14 Sep 2014 02:36:18 GMT</pubDate>
            <description><![CDATA[  <img src="https://lh3.googleusercontent.com/-95I6L_COmM4/U1rYUJcK7mI/AAAAAAAAPdQ/O-Omo-gyuwI/w1650/rolf.jpg" /><br/>
                 <h1>An About Page and Help</h1>  
                 <h2>A little more about the site</h2>   
                <p>I’ve started working a bit on the “About” page for the site.
I wanted a place to highlight the <em>mission statement</em> I’m sort of working from:</p>
<blockquote>
<p>To provide tutorials, workflows and a showcase for high-quality photography using Free/Open Source Software.</p>
</blockquote>
<p>As well as a place to let users know who is behind the scenes working on the site.
It’s mostly me at the moment, but I’ve managed to talk someone into helping me…</p>
<h2 id="enter-rolf-steinort"><a href="#enter-rolf-steinort" class="header-link-alt">Enter Rolf Steinort</a></h2>
<p>Yep, that’s right.
I’ve managed to talk Rolf Steinort of <a href="http://meetthegimp.org" title="Meet the GIMP Website">Meet the GIMP</a> fame into helping me out with the site.
We’re still not 100% sure <em>exactly</em> what this means yet, but I have already been bouncing ideas off him for some of the site details anyway.</p>
<!-- more -->
<figure>
<img src="https://lh3.googleusercontent.com/-980jZBjJRq0/U0xPe73g3pI/AAAAAAAAPu4/RHg7C4aB148/w640-no/Rolf.jpg" alt="Rolf Steinort by Pat David" />
<figcaption>
Rolf Steinort, creator of <a href="http://meetthegimp.org">Meet the GIMP</a>.
</figcaption>
</figure>

<p>Meet the GIMP is over <strong>7 years</strong> old now, and quickly closing in on episode <strong>200</strong>!
I am excited (and honored) to have his expertise and help as we build this site out.
Especially because my feeble attempts at video productions are sad at best, and Rolf has the type of voice that could read the phone book and I’d still listen to it.</p>
<h2 id="content-status"><a href="#content-status" class="header-link-alt">Content Status</a></h2>
<p>I’m currently in the process of choosing which articles from my archive on <a href="http://blog.patdavid.net/p/getting-around-in-gimp.html" title="blog.patdavid.net Getting Around in GIMP">Getting Around in GIMP</a> I want to translate over and possibly update/rewrite.
If anyone has suggestions on which ones they’d like to see, you can always let me know.</p>
<p>I’m currently thinking possibly the big 
<a href="http://blog.patdavid.net/2012/11/getting-around-in-gimp-black-and-white.html" title="blog.patdavid.net: B&amp;W Conversion">B&amp;W Conversion</a>, the 
<a href="http://blog.patdavid.net/2014/02/25d-parallax-animated-photo-tutorial.html" title="patdavid.net: 2.5D Parallax Animated Photo">2.5D Parallax</a>, and/or the
<a href="http://blog.patdavid.net/2013/09/film-emulation-presets-in-gmic-gimp.html" title="patdavid.net: Film Emulation in G&#39;MIC/GIMP">Film Emulation in GIMP/G’MIC</a>.</p>
<h2 id="breaking-up-long-pages"><a href="#breaking-up-long-pages" class="header-link-alt">Breaking Up Long Pages</a></h2>
<p>One other thing that I’m trying to decide on is if I should worry about breaking up long posts into multiple pages or not.
I don’t really have any interest in making users click through multiple pages to get all of the content (I personally hate doing this).</p>
<p>On the other hand, if the post is really long it could take some time to load all the assets if they all exist on a single page.
It may be a delicate trade-off for keeping a page responsive vs. requiring a user to click through to a second (or possibly third) page.
For the moment I’m erring on the side of convenience for the user and keeping things as long pages.</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ The Big Picture ]]></title>
            <link>https://pixls.us/blog/2014/09/the-big-picture/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/the-big-picture/</guid>
            <pubDate>Mon, 08 Sep 2014 16:06:28 GMT</pubDate>
            <description><![CDATA[  <img src="https://lh4.googleusercontent.com/-RVauHGzbPRQ/UwvCg3d4Q6I/AAAAAAAAOS4/pLGsqpAM_8E/w1650-no/Into%2Bthe%2BFog.jpg" /><br/>
                 <h1>The Big Picture</h1>  
                 <h2>This is all about visual media after all...</h2>   
<p>Sometimes I get into a weird OCD mode where I need to have something a certain way, for better or worse.
One of those things was a desire to break out of the mold of standard blog-type posts in articles for this site.
I’ve sometimes found images are relegated to second-class citizens on some page layouts that don’t do them justice.</p>
<p>I couldn’t let that happen here.
The problem was that I needed to do some things to make sure the typographic layouts were visually strong as well.
This meant adding control over the width and layout of the main text elements, with the downside of having to hack a bit to make images large.
<!--more-->
The solution I ended up with was to add a tag surrounding elements that I wanted to break out of the current layout.
So I would end up with something like this:</p>
<pre><code class="lang-markup">&lt;!-- FULL-WIDTH --&gt;
&lt;img src=&quot;http://to be full width.png&quot;/&gt;
&lt;!-- /FULL-WIDTH --&gt;
</code></pre>
<p>Technically, in my case, I’m using the <code>&lt;figure&gt;</code> tag with <code>&lt;figcaption&gt;</code>, so my actual markup for full-width images looks like this:</p>
<pre><code class="lang-markup">&lt;!-- FULL-WIDTH --&gt;
&lt;figure&gt;
&lt;img src=&quot;http://full-width-image-src.jpg&quot; /&gt;
&lt;figcaption&gt;A caption for my image&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;!-- /FULL-WIDTH --&gt;
</code></pre>
<p>This lets me capture that block in my processing when I build the site (metalsmith), and modify the page code to accommodate what&rsquo;s needed to make it full-width.
The result of this is that I can now break images out of their containers to span the full width of a page, like this:</p>
<!-- FULL-WIDTH -->
<p><figure class="full-width">
<img src="https://lh3.googleusercontent.com/-dzpZ6jpJF7E/U0k05P-js8I/AAAAAAAAO7Y/CgrjtmXgoT8/w1650-no/Nikolaikirche.jpg" alt="Nikolaikirche, Leipzig, Germany by Pat David" /></p>
<p><figcaption>
<em>A view of <a href="http://en.wikipedia.org/wiki/St._Nicholas_Church,_Leipzig">Nikolaikirche</a> in Leipzig, Germany.</em><br/>
For you <a href="http://www.darktable.org">darktable</a> fans, that’s houz in the bottom right.
</figcaption>
</figure>
<!-- FULL-WIDTH --></p>
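<p>The capture step in the build can be sketched as a simple transform over the rendered HTML (a minimal illustration only; the actual metalsmith plugin code differs):</p>

```javascript
// Wrap each <!-- FULL-WIDTH --> ... <!-- /FULL-WIDTH --> block in
// a div that the CSS can break out of the text column.
function expandFullWidth(html) {
  return html.replace(
    /<!--\s*FULL-WIDTH\s*-->([\s\S]*?)<!--\s*\/FULL-WIDTH\s*-->/g,
    '<div class="full-width">$1</div>'
  );
}
```

<p>The lazy capture group keeps each pair of markers matched to its own contents, so multiple full-width blocks on one page are handled independently.</p>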
<p>Of course, this can get very tiring very quickly.
I find that it tends to break the flow of reading, so should be used sparingly and wisely in the context of the post or article.
I promise not to abuse it.</p>
<h2 id="attribution"><a href="#attribution" class="header-link-alt">Attribution</a></h2>
<p>It’s a small thing, but I’ve added an attribution line for the lede images that you’ll find in the bottom right of the actual image.
I will also be incorporating the <a href="http://creativecommons.org/" title="Creative Commons">Creative Commons</a> icon fonts to support proper attribution notice as well.
Once I’ve done that, I will include a similar style attribution for other images (as it stands now, they can be put into the <code>&lt;figure&gt;</code> image caption).</p>
<h2 id="video-killed-the-radio-star"><a href="#video-killed-the-radio-star" class="header-link-alt">Video Killed the Radio Star</a></h2>
<p>Of course, sometimes what is needed to really explain a concept is to use a video. 
So I couldn’t just ignore a way to get good video styling.</p>
<p>My first hurdle was to find a way to keep the video container fluid with the rest of the page.
Remember, the page is built to be responsive, so it’s a single page served to all devices.
This means that I need to adapt to all possible viewing device screen resolutions (as well as possible).</p>
<p>Getting images to scale and resize correctly to fit new sizes was easy.
Doing the same thing for video is not <em>as</em> easy, but wasn’t too bad.
Once again, I’m relying on the kindness of strangers…</p>
<h3 id="the-code"><a href="#the-code" class="header-link-alt">The Code</a></h3>
<p>The answer came in the form of an <a href="http://alistapart.com/article/creating-intrinsic-ratios-for-video/">A List Apart</a> article from 2009 by Thierry Koblentz.
The basic premise was to create a box to contain the video embed, then to stretch the video to fill the box dimensions.
Then I could style the box to be responsive just like the other elements.</p>
<p>So I wrapped the video embed in a container box, and added some CSS classes:</p>
<pre><code class="lang-markup">&lt;div class=&quot;fluid-video&quot;&gt;
  &lt;iframe src=&quot;http://Normal Youtube Embed Code&quot;/&gt;
&lt;/div&gt;
</code></pre>
<p>Then it was just a matter of styling by setting the <code>padding</code> property to a percentage based on the width of the container.
For a 16:9 ratio, the percentage is 9 / 16 = 56.25%:</p>
<pre><code class="lang-css">.fluid-vid {
    position: relative;
    padding-bottom: 56.25%;
    padding-top: 30px;
    height: 0;
    overflow: hidden;
}
</code></pre>
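<p>The 56.25% comes straight from the aspect ratio: the height expressed as a percentage of the width. A tiny (hypothetical) helper makes the arithmetic explicit:</p>

```javascript
// Intrinsic-ratio padding: 16:9 gives 9 / 16 * 100 = 56.25%.
function intrinsicPadding(width, height) {
  return (height / width) * 100;
}
```

<p>A 4:3 video would use <code>intrinsicPadding(4, 3)</code>, i.e. 75%.</p>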
<p>With the container styled, it was a simple matter to fill the container with the embedded video:</p>
<pre><code class="lang-css">.fluid-vid iframe {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
}
</code></pre>
<p>Et voila!  Fluid video embeds that <em>hopefully</em> should maintain responsiveness.</p>
<p>Of course, I couldn’t leave well enough alone, and to coincide with the previous idea of displaying larger images, I have also added a little extra to embiggen video embeds as well (not full width stretching, but to give it a bit more prominence).</p>
<div class="big-vid">
<div class="fluid-vid">
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/tHTZOu668JM?list=UUMJEM7T8fpJx5CFsi0BfDGA" frameborder="0" allowfullscreen></iframe>
</div>
</div>

<p>Technically I’m stretching the video to 150% of the width of its parent container, which happens to be the same container as the <code>&lt;p&gt;</code> elements (so roughly 150% of the text column width).
Mostly I was going to use this type of styling for highlight videos, and leave a normal video embed if it’s not the focus of the article.</p>
<p>Just for reference, a normal (fluid) embed would look like this relative to the surrounding text:</p>
<div class="fluid-vid">
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/tHTZOu668JM?list=UUMJEM7T8fpJx5CFsi0BfDGA" frameborder="0" allowfullscreen></iframe>
</div>

<p>Which makes more sense for supporting material vs. feature videos.</p>
<h2 id="wrap-it-up-already"><a href="#wrap-it-up-already" class="header-link-alt">Wrap it up Already</a></h2>
<p>Ok, I could ramble on for longer, but I think my time is better spent getting back to writing the site.
I think the blog back-end and formatting is mostly done at this point, so on to feature articles!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ RSS Feed & Social Media ]]></title>
            <link>https://pixls.us/blog/2014/09/rss-feed-social-media/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/rss-feed-social-media/</guid>
            <pubDate>Thu, 04 Sep 2014 15:15:33 GMT</pubDate>
            <description><![CDATA[  <img src="https://lh6.googleusercontent.com/-LuDGEuWcAeQ/U_zlAWpDU-I/AAAAAAAARSA/wgRmO0BUoUw/s1920/Sarah-Original.jpg" /><br/>
                 <h1>RSS Feed & Social Media</h1>  
                 <h2>Finally getting the RSS feed working</h2>   
                <p>It took a bit of digging and wrestling to get there, but a couple of nights ago I also managed to get an RSS feed working for the blog posts on the site.
Honestly, I spent more time fiddling with dates in javascript than I should have.</p>
<p>I had to make some minor modifications this morning to accommodate where the location should be, but it should be live now.</p>
<p>The location is: <a href="http://pixls.us/blog/feed.xml" title="Pixls.us blog RSS Feed">http://pixls.us/blog/feed.xml</a>.</p>
<p>Both the blog index pages and post pages contain a <code>&lt;link&gt;</code> element that point to it, so most readers <em>should</em> find the feed if you point it at a page.
I’ll test it later, but the most important thing is the location is correct regardless of whatever hacking I do to the feed itself later.</p>
<!--more-->
<p>I’ve tested the feed quickly with <a href="http://feedly.com" title="feedly.com">feedly</a> and it appears to be working ok. If anyone else is using other feed readers and sees a problem, please let me know!</p>
<p>I intend to have a separate feed available for the articles and main site content when I get those ready to go (most likely at <a href="http://pixls.us/articles/feed.xml">http://pixls.us/articles/feed.xml</a>).</p>
<h2 id="social-media"><a href="#social-media" class="header-link-alt">Social Media</a></h2>
<p>I’ve also started (perhaps prematurely?) getting some social media accounts registered.
If for nothing else than to keep someone else from parking the accounts.</p>
<h3 id="google-"><a href="#google-" class="header-link-alt">Google+</a></h3>
<p>At the moment, I’ve got a <a href="https://plus.google.com/b/115344273324079495662/115344273324079495662/about" title="PIXLS.US Google+ Page">Google+ page</a> setup for the site.
I’ll try to keep updates flowing to that page as well (so if you happen to use g+, follow it!).
If you already <a href="http://plus.google.com/+PatrickDavid" title="Pat David on Google+">follow me</a> on g+ then you’ll know I’m fairly active there.</p>
<p>Now if I could just get Google to allow my vanity URL to <em>only</em> read +pixlsus I’d be a happy camper!</p>
<h3 id="twitter"><a href="#twitter" class="header-link-alt">Twitter</a></h3>
<p>Back when I first registered this domain name, I apparently had the foresight to register a <a href="http://www.twitter.com" title="twitter.com">Twitter</a> handle as well.
So if you want to follow the conversation there, you can find me <a href="https://twitter.com/pixlsus" title="Pixls.us Twitter Account">@pixlsus</a>.
I even found a first tweet back from Dec 2011!</p>
<h3 id="flickr"><a href="#flickr" class="header-link-alt">Flickr</a></h3>
<p>I’ve also created a <a href="http://www.flickr.com" title="flickr.com">Flickr</a> group for users on Flickr to share photos or congregate.
You can find the group <a href="https://www.flickr.com/groups/pixlsus/" title="Pixls.us Flickr Group">here</a>.</p>
<p>Really this is just a pre-emptive action to have these channels available as soon as we get going.</p>
<h2 id="moving-along"><a href="#moving-along" class="header-link-alt">Moving Along</a></h2>
<p>I feel like I’m gaining a little traction here.
There’s a few more things I need to tidy up and make some design decisions on, but at least I have a clear vision going forward.
I’ve already got an article ported over from <a href="http://blog.patdavid.net/p/getting-around-in-gimp.html" title="Getting Around in GIMP">Getting Around in GIMP</a> on my blog to use as a test case for formatting.</p>
<p>As soon as I like how it’s looking, I’ll work on porting over some other articles.
If it goes well, I may go ahead and update or re-write some more things to test with.
And as soon as things are in a relatively stable state, I’ll get some new material out as well!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ A Push Menu ]]></title>
            <link>https://pixls.us/blog/2014/09/a-push-menu/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/a-push-menu/</guid>
            <pubDate>Wed, 03 Sep 2014 17:17:16 GMT</pubDate>
            <description><![CDATA[  <img src="https://lh5.googleusercontent.com/-VScF_Hq-YE8/VAOA5mdIchI/AAAAAAAARYs/uj6xLzvyRiY/s0/pixls-background.jpg" /><br/>
                 <h1>A Push Menu</h1>  
                 <h2>A Fanc(y|ier) Menu</h2>   
                <p>So, I’ve had the idea in my head for a while that it would be nice to get the navigation out of the way.
When I’m reading an article or tutorial, I don’t want to be inundated with elements that aren’t pertinent to what I’m reading.
I want to focus on the content.
<!--more--></p>
<p>I had to think a bit on the best way to possibly achieve this.
One option was to remove all navigation from the top of the page, and instead show them at the end of the article.
This runs on the assumption that the user wants to read the page, and when they’re finished reading to possibly navigate somewhere else.</p>
<p>If they came to the page by mistake, or want to get out, they can always use “Back” on their browser.
If they made it to the end of the article, then that’s the point where they may want other navigation options.
(This is how the page is currently laid out).</p>
<p>If they don’t have javascript turned on, they can still use the site just fine.
(This is important for accessibility, and security for some folks).</p>
<h2 id="what-about-a-little-more-"><a href="#what-about-a-little-more-" class="header-link-alt">What About a Little More?</a></h2>
<p>This is <strong>2014</strong> for the love of Pete!
Surely we can reasonably expect that <em>most</em> users will have javascript?
Well, maybe not.
If they do, however, we might be able to create something <em>slightly</em> nicer.</p>
<p>I personally like the idea of a menu hidden out of the way until needed.
So I put a small floating logo in the top-left of the page.
If you scroll down, the logo should slide out of view (not needed).
If you scroll up, it should bring the logo back into view (possibly needed).</p>
<p>This has already been here since I started building these pages, but now I’ve added a little more…</p>
<h3 id="push-menu"><a href="#push-menu" class="header-link-alt">Push Menu</a></h3>
<p>By default a click on the floating navigation logo will scroll the page to the navigation links on the bottom of the page.
If JS is turned off, the floating logo will always be visible, and when clicked will still get you to the navigation links quickly.</p>
<p>If JS is turned on, though, the floating logo will now “push” the page to the side as it reveals a navigation menu on the left edge of the page.
The first set of links mirror those at the end of the page for site navigation.
The next set of links is a representation of the “Table of Contents” for the current page.</p>
<p>This is in anticipation of longer articles being posted soon.
I wanted to have an easier means of navigating long posts.</p>
<p><strong>Try it out!</strong></p>
<p>Clicking anywhere on the main page again will collapse the menu.</p>
<h4 id="pure-css-solution"><a href="#pure-css-solution" class="header-link-alt">Pure CSS Solution</a></h4>
<p>There may actually be a pure CSS solution for hiding/showing the menu.  The javascript is really only there to manage class states, all of the styling and transition effects are done in CSS.</p>
<p>Honestly, though, I think I’m mostly done for the moment.  I may come back and re-visit the pure CSS solution later, but for now I want to shift focus to working on content pages (and the actual content itself!).</p>
<h4 id="start-simple"><a href="#start-simple" class="header-link-alt">Start Simple</a></h4>
<p>My thought process so far in building the site has been to minimize requirements on anything with questionable support; I’m only assuming HTML/CSS for the most part.
This is to make sure everything remains accessible to folks.</p>
<p>It’s a royal PITA, though.</p>
<h3 id="a-table-of-contents-"><a href="#a-table-of-contents-" class="header-link-alt">A Table of Contents!</a></h3>
<p>So the addition of basic navigational elements was a no-brainer, but that menu bar looked awfully sparse.
So, I used the extra space to include a “Table of Contents” for the current post/article as well.  This is generated automatically from all of the HTML heading tags in the page (h1/2/3/4/5).</p>
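<p>For the curious, the generation step can be sketched like this.  The heading-gathering uses standard DOM calls, but <code>.toc-container</code> is a hypothetical stand-in for the real menu markup:</p>

```javascript
// Build a flat list of TOC links from heading data; indentation
// per heading level is left to CSS via a class.
function buildToc(headings) {
  // headings: array of { level: 2, id: 'some-id', text: 'Some Heading' }
  var items = headings.map(function (h) {
    return '<li class="toc-level-' + h.level + '">' +
      '<a href="#' + h.id + '">' + h.text + '</a></li>';
  });
  return '<ul class="toc">' + items.join('') + '</ul>';
}

// DOM wiring: collect h1–h5 on the page and mount the list.
if (typeof document !== 'undefined') {
  var nodes = document.querySelectorAll('h1, h2, h3, h4, h5');
  var headings = Array.prototype.map.call(nodes, function (el) {
    return { level: +el.tagName.charAt(1), id: el.id, text: el.textContent };
  });
  var mount = document.querySelector('.toc-container');
  if (mount) { mount.innerHTML = buildToc(headings); }
}
```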
<p>My intention at the moment is to also have some sort of a reading progress indicator show up along the TOC.
I think this could provide nice visual feedback to users on where they are in an article, and how far along they might be.</p>
<p>Again, this is something that should degrade just fine in older browsers/no-js.  Those users simply won’t see the effect.</p>
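<p>A minimal sketch of how that indicator might work (this isn’t live yet, and the <code>.toc-progress</code> element is a hypothetical one): the math is just scroll position over total scrollable distance.</p>

```javascript
// Map scroll metrics to a 0–100 reading-progress percentage.
function readingProgress(scrollTop, scrollHeight, clientHeight) {
  var scrollable = scrollHeight - clientHeight;
  if (scrollable <= 0) return 100; // page fits in one screen
  var pct = (scrollTop / scrollable) * 100;
  return Math.max(0, Math.min(100, pct));
}

// Only attach the scroll listener when a DOM exists, so older
// browsers and no-JS readers simply never see the effect.
if (typeof document !== 'undefined') {
  window.addEventListener('scroll', function () {
    var doc = document.documentElement;
    var pct = readingProgress(doc.scrollTop, doc.scrollHeight, doc.clientHeight);
    var bar = document.querySelector('.toc-progress');
    if (bar) { bar.style.height = pct + '%'; }
  });
}
```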
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Building PIXLS.US ]]></title>
            <link>https://pixls.us/articles/building-pixls-us/</link>
            <guid isPermaLink="true">https://pixls.us/articles/building-pixls-us/</guid>
            <pubDate>Tue, 02 Sep 2014 16:49:28 GMT</pubDate>
            <description><![CDATA[  <img src="https://pixls.us/articles/building-pixls-us/dot-open-eyes.jpg" /><br/>
                 <h1>Building PIXLS.US</h1>  
                 <h2>A journey of enlightenment...</h2>   
                <p>This is just a log of reference material for actually building this site.  It’s mostly for my own reference and edification.  If you’re reading this, good luck making sense of my notes…</p>
<h3 id="static-website-with-node-js-and-metalsmith">Static Website with Node.js and Metalsmith<a href="#static-website-with-node-js-and-metalsmith" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I decided to build this site as a static website.  This means that I’m generating all of the material on my local machines, and then compiling them into static webpages that are then uploaded to the server for serving.  While this does sound like a pain in the ass, there are static site generators that make this job much easier.</p>
<p>So I looked around a bit more and found that apparently static site generators are the hip new thing.</p>
<p>I originally started with <a href="http://nanoc.ws/">nanoc</a>.  While it looked pretty interesting, I am just not a Ruby guy.  So I had the double-whammy of learning the static build system along with some Ruby.  Plus, after a host of problems getting the correct Ruby and gems installed on my OS X machine, I decided it wasn’t worth the hassle (I have to switch between Windows at work and OS X/Linux at home, so I needed a consistent environment).</p>
<p>I expanded my search and finally remembered <a href="http://nodejs.org/">Node.js</a>.  Looking around a bit more, I also found a static site generator for Node.js called <a href="http://www.metalsmith.io">Metalsmith</a>.
This was good, as I was already reasonably familiar with javascript.</p>
<p>Metalsmith basically just takes a directory of files, and passes them into a javascript environment for processing and output to a new directory, ready to be uploaded to a server.
This is how this page is being generated right now as well.</p>
<h4 id="installing-the-build-tools">Installing the Build Tools<a href="#installing-the-build-tools" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The first thing to do is to get Node.js for your platform.  Once installed, you’ll have access to the commands <code>node</code> as well as <code>npm</code> (the Node package manager).
Installing Metalsmith from there is as simple as:</p>
<p><code>npm install metalsmith</code></p>
<p>Basically, Metalsmith just passes each of the directory contents through a stack of functions that you can use to process the files.  Many of these are available as plug-ins for Metalsmith.
For this site so far, I’ve been using these plug-ins:</p>
<ul>
<li>metalsmith-collections <code>npm install metalsmith-collections</code></li>
<li>metalsmith-permalinks <code>npm install metalsmith-permalinks</code></li>
<li>metalsmith-templates <code>npm install metalsmith-templates</code></li>
<li>metalsmith-markdown <code>npm install metalsmith-markdown</code></li>
</ul>
<p>For the templating option, I’m also using <a href="http://handlebarsjs.com/">Handlebars</a>.</p>
<p>There is a great tutorial on getting started with Metalsmith at <a href="http://www.robinthrift.com/posts/metalsmith-part-1-setting-up-the-forge/">Robin Thrift’s website</a>.</p>
<h4 id="project-structure">Project Structure<a href="#project-structure" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The structure of this site is still in flux.
By default metalsmith will look for a folder in the project root called “src”, and will output to a folder called “build”.
The site structure I have setup for this site is:</p>
<pre>
|-pixlsus/
    |-src/
        |-articles/
        |-images/
        |-js/
        |-pages/
        |-scripts/
        |_styles/
    |-templates/
    |-index.js
    |_package.json
</pre>

<h4 id="index-js">index.js<a href="#index-js" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The main processing file for building the site is <code>index.js</code>.</p>
<pre><code>var Metalsmith    = require(&#39;metalsmith&#39;),
    collections    = require(&#39;metalsmith-collections&#39;),
    permalinks    = require(&#39;metalsmith-permalinks&#39;),
    templates    = require(&#39;metalsmith-templates&#39;),
    markdown    = require(&#39;metalsmith-markdown&#39;),
    metadata    = require(&#39;./config.json&#39;),
    Handlebars    = require(&#39;handlebars&#39;);

Metalsmith(__dirname)
    .use(markdown({
        smartypants: true,
        gfm: true,
        tables: true
    }))
    .use(hyphenate_urls)
    .use(collections())
    .use(permalinks({
        pattern: &#39;:collection/:title&#39;
    }))
    .use(templates(&#39;handlebars&#39;))
    .destination(&#39;./build&#39;)
    .build();
</code></pre><p>There are a couple of other things I am doing for the templating, and one custom function I wrote to automatically hyphenate URLs. To avoid something like:</p>
<p> <code>articles/a%20new%20article/</code></p>
<p>I think this looks nicer: </p>
<p><code>articles/a-new-article/</code></p>
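<p>For illustration, a Metalsmith plugin doing that kind of renaming could look roughly like the following (my actual <code>hyphenate_urls</code> function may differ in the details; this is a sketch).  A Metalsmith plugin is just a function taking <code>(files, metalsmith, done)</code>, where <code>files</code> maps each path to its file object, so renaming a page means moving its key:</p>

```javascript
// Sketch of a URL-hyphenating Metalsmith plugin: lowercase each
// path and replace runs of whitespace with hyphens.
function hyphenateUrls(files, metalsmith, done) {
  Object.keys(files).forEach(function (path) {
    // 'articles/A New Article.md' -> 'articles/a-new-article.md'
    var clean = path.replace(/\s+/g, '-').toLowerCase();
    if (clean !== path) {
      files[clean] = files[path];
      delete files[path];
    }
  });
  done();
}
```

It would slot into the chain with <code>.use(hyphenateUrls)</code>, before <code>permalinks</code> runs.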
<p>Honestly, if I was just testing things out, the bare minimum I could use to get by would be:</p>
<pre><code>var Metalsmith    = require(&#39;metalsmith&#39;),
    templates     = require(&#39;metalsmith-templates&#39;),
    Handlebars    = require(&#39;handlebars&#39;);

Metalsmith(__dirname)
    .use(templates(&#39;handlebars&#39;))
    .destination(&#39;./build&#39;)
    .build();
</code></pre><p>If you have a base skeleton of a site, this would be all you need to run.</p>
<h4 id="building-the-site">Building the Site<a href="#building-the-site" class="header-link"><i class="fa fa-link"></i></a></h4>
<p>The site can be built by entering the site directory, and issuing the command <code>node index.js</code>.</p>
<p>Wait a few moments, and you should find a <code>build/</code> directory full of your files ready to go.</p>
<h3 id="uploading">Uploading<a href="#uploading" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>My host doesn’t have rsync access directly, but I can use rsync over ssh:</p>
<pre><code>rsync -PSauve ssh --exclude=EXCLUDE_FILES build/ USER@pixls.us:/home4/pixlsus/public_html/
</code></pre><p>Which works just fine.</p>
<h2 id="todo">TODO<a href="#todo" class="header-link"><i class="fa fa-link"></i></a></h2>
<p>List of stuff I still need to get to:</p>
<ul>
<li>Test porting one of the ‘Getting Around in GIMP’ articles<ul>
<li>Working on it.</li>
</ul>
</li>
<li>Port a few other test articles</li>
<li>Use collections in Metalsmith to collect articles of a type<ul>
<li>Generate a page of those.</li>
</ul>
</li>
<li>Probably a new index.html/front page.</li>
<li>Work on “About” page</li>
<li>Finish styling article pages.<ul>
<li><del>Particularly the links (Mobile is done? - Tablet is needed).</del></li>
</ul>
</li>
</ul>
<p>This list will grow, of course, as it needs to until we launch!</p>
<h3 id="blog">Blog<a href="#blog" class="header-link"><i class="fa fa-link"></i></a></h3>
<p>I’ve started an article to represent blog posts on the site.
I intend for them to live at the path: <code>pixls.us/blog/YYYY/MM/title-of-post</code></p>
<p>The problem is that I can’t easily use <code>metalsmith-permalinks</code> for them.
There doesn’t appear to be a way to easily process a sub-folder of documents with a different path.
I don’t want the <code>articles</code> content to contain <code>YYYY/MM</code> in the path, but I <strong>do</strong> for blog posts.</p>
<p>So I think I’ll just have to write a plugin to handle that myself real quick.
Shouldn’t be too hard; I just need to do something similar to what I already wrote for hyphenating URLs.</p>
<p>Basically: grab all blog posts, update their paths to the hyphenated version, and change the source file to <code>index.html</code> in that directory, <strong>if</strong> the file is not already in a sub-directory.</p>
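<p>Something along these lines should do it.  This is a sketch only; the file matching, the <code>date</code> field, and the exact paths are assumptions about how I’ll wire it up, not final code:</p>

```javascript
// Sketch of a blog-permalink Metalsmith plugin: move top-level
// blog/*.html files into blog/YYYY/MM/slug/index.html.
function blogPermalinks(files, metalsmith, done) {
  Object.keys(files).forEach(function (path) {
    var file = files[path];
    // Only touch files directly under blog/ (not already in a
    // sub-directory), and only ones carrying a date.
    var m = path.match(/^blog\/([^\/]+)\.html$/);
    if (!m || !file.date) return;
    var d = new Date(file.date);
    var month = ('0' + (d.getUTCMonth() + 1)).slice(-2);
    var slug = m[1].toLowerCase().replace(/\s+/g, '-');
    var newPath = 'blog/' + d.getUTCFullYear() + '/' + month + '/' + slug + '/index.html';
    files[newPath] = file;
    delete files[path];
  });
  done();
}
```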
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ On Building PIXLS.US ]]></title>
            <link>https://pixls.us/blog/2014/09/on-building-pixls-us/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/09/on-building-pixls-us/</guid>
            <pubDate>Tue, 02 Sep 2014 14:35:51 GMT</pubDate>
            <description><![CDATA[  <img src="https://lh5.googleusercontent.com/-VScF_Hq-YE8/VAOA5mdIchI/AAAAAAAARYs/uj6xLzvyRiY/s0/pixls-background.jpg" /><br/>
                 <h1>On Building PIXLS.US</h1>  
                 <h2>Some notes from the back end</h2>   
                <p>For the curious, and to serve as an introduction, I thought I’d make a few notes about how this site is built and what I’m currently obsessing over.
Hopefully this can help define what I’m up to in case anyone wants to jump in and help out.</p>
<h2 id="the-purpose"><a href="#the-purpose" class="header-link-alt">The Purpose</a></h2>
<p>The entire point of this site, its “mission statement” if you will, is:</p>
<blockquote>
<p>To provide tutorials, workflows and a showcase for high-quality photography using Free/Open Source Software.</p>
</blockquote>
<p>Subject to revisions, of course, but it mostly sums up what I’d like to accomplish here.
I also think it’s good to have this documented somewhere to remind me. :)
<!--more--></p>
<h2 id="the-technical"><a href="#the-technical" class="header-link-alt">The Technical</a></h2>
<p>I had already started writing about this elsewhere, but I’m going to reiterate it here for posterity (when I wrote it earlier I hadn’t completed the blog portion of the site yet).</p>
<h3 id="static-pages"><a href="#static-pages" class="header-link-alt">Static Pages</a></h3>
<p>On the recommendation of <a href="http://nordisch.org/">darix</a> on the <small>#darktable</small> IRC channel, I looked into static site generators.
I was originally going to use some sort of CMS and build things out from there, but I have to thank darix for causing me to pause and to think carefully about how to proceed.</p>
<p>I realized that I wanted to keep things simple.
The main focus of the site is the articles themselves (a tutorial, workflow, or showcase).
Really, this content is static by nature - so it made sense to approach it in that light.</p>
<p>The idea is to have all of the site content exist locally on my machine, then to pass it through some sort of processor to output all of the website pages ready to upload to my server. I was already familiar with the process as the <a href="http://www.gimp.org">GIMP</a> website is built in a similar fashion.</p>
<p>I just had to find a static site generator that I could use and extend as needed.</p>
<h4 id="enter-metalsmith"><a href="#enter-metalsmith" class="header-link-alt">Enter Metalsmith</a></h4>
<p>There is a plethora of static site generators out there (apparently it’s the hip new thing?), so I just had to find one that I was comfortable with using and extending.
I needed it to do what I wanted and get the hell out of the way so I could focus on content.</p>
<p>Oh, and I had to be able to extend it as needed myself.  I’m already pretty comfortable writing for the web, so I decided to go with the <a href="http://nodejs.org" title="Node.js">Node.js</a>-based <a href="http://www.metalsmith.io/" title="Metalsmith website">Metalsmith</a>.
Mostly because I’m already comfortable making a mess in javascript.</p>
<p>Metalsmith basically takes a directory full of data, and passes those objects through any series of functions I want, munges them somehow, and then spits out my website.
It’s the munging part that’s fun, and at least I can extend/modify things as needed quickly and easily.</p>
<p>tl;dr: I use javascript to process the files and output the website ready to upload.</p>
<h3 id="responsiveness"><a href="#responsiveness" class="header-link-alt">Responsiveness</a></h3>
<p>I also wanted the site to work well across different screen sizes and devices.
So I’m trying to incorporate some responsiveness in the design. 
You can actually see it working right now by resizing your browser width.
The page should reflow and elements change size to adapt to the new viewport.</p>
<p>This lets me focus on the content while knowing that it should adapt as needed to the viewer.
As a great starting point, I used Adam Kaplan’s <a href="http://www.adamkaplan.me/grid/">Grid</a>.</p>
<h3 id="easy-reading"><a href="#easy-reading" class="header-link-alt">Easy Reading</a></h3>
<p>Taking a cue from the past, I’m also trying to maintain legibility and readability in the pages.
This means paying attention to simple things like characters per line, font choices, and spacing.
I’m not a designer, so this topic has been fun to learn about as I go.</p>
<p>The lines on this post, for instance, should settle in around 60-75 characters per line (I’m aiming for about 65). 
The <a href="http://baymard.com/blog/line-length-readability">Baymard Institute</a> has a nice summary of the idea behind this.</p>
<h3 id="attractive"><a href="#attractive" class="header-link-alt">Attractive</a></h3>
<p>This goes without saying, I think, but who wants to look at an ugly layout/site?
I can’t say this site is beautiful, but at least I’m consciously trying to make it a pleasant experience…</p>
<p>If not for everyone, at least for me…</p>
<!-- FULL-WIDTH -->
<p><figure class="full-width">
<img src="https://lh6.googleusercontent.com/-kif88EbVMDY/U9F1NpY4YpI/AAAAAAAAQ9I/upgSaUleOaA/s1920/Dot.jpg" alt="Dot Window Portrait"/></p>
<p><figcaption>
Attractive to me. Possibly to others, but definitely to me!
</figcaption>
</figure>
<!-- /FULL-WIDTH --></p>
<h3 id="ease-of-use"><a href="#ease-of-use" class="header-link-alt">Ease of Use</a></h3>
<p>All the pretty in the world won’t fix something that’s hard to use. 
So I’m trying to put thought into user interaction.
I try to get cruft out of the way so the focus is on the articles, while also providing easy navigation or interaction (that should get the hell out of the way when it’s not needed).</p>
<h2 id="in-summary"><a href="#in-summary" class="header-link-alt">In Summary</a></h2>
<p>That’s the short version.
There’s a million things going on right now in my head as I build the site out.
I’ve got most of the pieces sorted out, and just need to finish assembling them in a way that I like.</p>
<p>So we should be ready to get things kicked off before too long!</p>
  ]]>
            </description>
        </item>
        <item>
            <title><![CDATA[ Hello World! ]]></title>
            <link>https://pixls.us/blog/2014/08/hello-world/</link>
            <guid isPermaLink="true">https://pixls.us/blog/2014/08/hello-world/</guid>
            <pubDate>Mon, 25 Aug 2014 00:40:00 GMT</pubDate>
            <description><![CDATA[  <img src="https://lh5.googleusercontent.com/-VScF_Hq-YE8/VAOA5mdIchI/AAAAAAAARYs/uj6xLzvyRiY/s0/pixls-background.jpg" /><br/>
                 <h1>Hello World!</h1>  
                 <h2>Let's see if I can get this thing off the ground...</h2>   
                <p>Well, technically this isn’t the first post on the site.
I had actually started with building out the temporary <a href="https://pixls.us/">Coming Soon</a> page.
Then I shifted focus on styling the main content page for the site (articles).
After a bit I realized that I should probably be working on some sort of blog posts as a means for folks to keep up with what I’m doing.</p>
<p>So, here we are!</p>
<h2 id="who-am-i-"><a href="#who-am-i-" class="header-link-alt">Who Am I?</a></h2>
<p><strong>I’m <a href="http://blog.patdavid.net" title="Pat David&#39;s Blog">Pat David</a>.</strong></p>
<!-- FULL-WIDTH -->
<p><figure class="full-width">
<img src="https://lh3.googleusercontent.com/-GkKqZhlz7YA/U_IWqqkLDYI/AAAAAAAARMI/Wcu4JLy3m1g/s2048/Pat-David-Headshot-Crop-2048-Q60.jpg" alt="Pat David Headshot" /></p>
<p><figcaption>Yes, I need a new headshot.</figcaption>
</figure>
<!-- /FULL-WIDTH --></p>
<!--
<figure> 
<img src="https://lh3.googleusercontent.com/-GkKqZhlz7YA/U_IWqqkLDYI/AAAAAAAARMI/Wcu4JLy3m1g/s2048/Pat-David-Headshot-Crop-2048-Q60.jpg" alt="Pat David Headshot" />
<figcaption>Yes, I need a new headshot.</figcaption>
</figure>
-->
<p>I’m an occasional photographer and I dabble in digital artwork occasionally as the mood strikes me.
I also happen to be a fan of free software. Those two worlds collide fairly often, and lately I’ve been having a great time writing about them.</p>
<p>I’ve been writing tutorials on my blog as well as trying to modernize/update tutorials on the <a href="http://www.gimp.org" title="GIMP Website">GIMP website</a>. 
You could call me a <small>(small)</small> part of the GIMP team (but I’m trying to do more!).
I also try to help out where I can on other F/OSS projects as well (<a href="http://gmic.sourceforge.net" title="G&#39;MIC Homepage">G’MIC</a> is another place you’ll find me bumming around).
I do these things because I think it’s important to try and give back to the community in whatever way you’re capable of.</p>
<p><strong>I’m loud.</strong>  So I figured I could use that capability to help out.</p>
<p><small>(It’s my demented super-power).</small>
<!--more--></p>
<h2 id="so-what-s-going-on-here-"><a href="#so-what-s-going-on-here-" class="header-link-alt">So What’s Going on Here?</a></h2>
<p>Well, I mentioned on the main page that I felt like we could use a site/community dedicated to photography.  Particularly Free/Open Source Software and photography.</p>
<p>The problem I noticed is a lack of sites that focus explicitly on photography and workflows using F/OSS tools. 
There are plenty of blog posts on various sites, forum posts on various boards, and the occasional group on social media. 
There is <em>not</em> a great website to act as a portal specifically for photographic needs or interests.</p>
<p>It’s my sincere desire that I can build it.</p>
<p><em>I actually find it strange to write that.</em> 
How does this not exist already?!</p>
<h3 id="is-it-ready-yet-"><a href="#is-it-ready-yet-" class="header-link-alt">Is It Ready Yet?</a></h3>
<p>No.  Not quite.</p>
<p>I’m building this entire site from scratch, so it’s taking a little bit of time.
I only just got the blog portion finished, so hopefully that much is done.</p>
<p>I’ve also <em>mostly</em> finished what the main articles will look like.
I’m in the process of porting over some of my tutorials from my blog to here so that I can have some content to test things out with.
I enjoy doing this sort of thing, so it’s a nice way to relax for me.</p>
<p>After that I’ll just need to get a couple of other pages setup, and I should at least have the skeleton of the site up and running.
I promise, as soon as I have something to actually launch I will be loud and annoying about it.</p>
<h3 id="can-i-help-"><a href="#can-i-help-" class="header-link-alt">Can I Help?</a></h3>
<p>That’s the spirit!</p>
<p>Yes, absolutely. 
Just shoot me an email and I’ll be happy to answer any questions I can. 
If there’s some particular skill you’d like to bring, I’m all ears.
If you want to write an article or tutorial, let me know.</p>
<p><script type="text/javascript" language="javascript">
<!--
// Email obfuscator script 2.1 by Tim Williams, University of Arizona
// Random encryption key feature by Andrew Moulden, Site Engineering Ltd
// This code is freeware provided these four comment lines remain intact
// A wizard to generate this code is at http://www.jottings.com/obfuscator/
{ coded = "bMz@bMzkM5Yk.ptz"
  key = "PZRuYeaAcpsl30Th1G9JUtMdFbymI4j2BX8rozQk7OvqDVfCKxiNELSnWw5Hg6"
  shift=coded.length
  link=""
  for (i=0; i<coded.length; i++) {
    if (key.indexOf(coded.charAt(i))==-1) {
      ltr = coded.charAt(i)
      link += (ltr)
    }
    else {     
      ltr = (key.indexOf(coded.charAt(i))-shift+key.length) % key.length
      link += (key.charAt(ltr))
    }
  }
document.write("<a href='mailto:"+link+"'>Email me!</a>")
}
//-->
</script><noscript>Sorry, you need JavaScript turned on to email me.</noscript></p>
  ]]>
            </description>
        </item>

    </channel>
</rss>
