<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
xmlns:content="http://purl.org/rss/1.0/modules/content/"
xmlns:wfw="http://wellformedweb.org/CommentAPI/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:atom="http://www.w3.org/2005/Atom"
xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
>
<channel>
<title>Server Storage at Microsoft</title>
<atom:link href="https://blogs.technet.microsoft.com/filecab/feed/" rel="self" type="application/rss+xml" />
<link>https://blogs.technet.microsoft.com/filecab</link>
<description>The official blog of the Windows Server storage engineering teams</description>
<lastBuildDate>Tue, 17 Jan 2017 17:23:25 +0000</lastBuildDate>
<language>en-US</language>
<sy:updatePeriod>hourly</sy:updatePeriod>
<sy:updateFrequency>1</sy:updateFrequency>
<item>
<title>Cluster size recommendations for ReFS and NTFS</title>
<link>https://blogs.technet.microsoft.com/filecab/2017/01/13/cluster-size-recommendations-for-refs-and-ntfs/</link>
<comments>https://blogs.technet.microsoft.com/filecab/2017/01/13/cluster-size-recommendations-for-refs-and-ntfs/#comments</comments>
<pubDate>Fri, 13 Jan 2017 20:05:33 +0000</pubDate>
<dc:creator><![CDATA[Garrett Watumull]]></dc:creator>
<category><![CDATA[Software Defined Storage]]></category>
<category><![CDATA[Windows Server 2016]]></category>
<category><![CDATA[NTFS]]></category>
<category><![CDATA[ReFS]]></category>
<category><![CDATA[Storage]]></category>
<guid isPermaLink="false">https://blogs.technet.microsoft.com/filecab/?p=7695</guid>
<description><![CDATA[Microsoft’s file systems organize storage devices based on cluster size. Also known as the allocation unit size, cluster size represents the smallest amount of disk space that can be used to hold a file. Both ReFS and NTFS support multiple cluster sizes, as different sized clusters can offer different performance benefits, depending on the deployment.... <a aria-label="read more about Cluster size recommendations for ReFS and NTFS" href="https://blogs.technet.microsoft.com/filecab/2017/01/13/cluster-size-recommendations-for-refs-and-ntfs/" class="read-more">Read more</a>]]></description>
<content:encoded><![CDATA[<p>Microsoft’s file systems organize storage devices based on cluster size. Also known as the allocation unit size, cluster size represents the smallest amount of disk space that can be used to hold a file. Both ReFS and NTFS support multiple cluster sizes, as different sized clusters can offer different performance benefits, depending on the deployment.</p>
<p>In the past couple of weeks, we’ve seen some confusion regarding the recommended cluster sizes for ReFS and NTFS, so this post will hopefully clear up previous recommendations and explain why certain cluster sizes are recommended for certain scenarios.</p>
<p><strong>IO amplification</strong></p>
<p>Before jumping into cluster size recommendations, it’s important to understand what IO amplification is and why minimizing it matters when choosing a cluster size:</p>
<ul>
<li>IO amplification refers to the broad set of circumstances where one IO operation triggers other, unintentional IO operations. Though it may appear that only one IO operation occurred, in reality, the file system had to perform multiple IO operations to successfully service the initial IO. This phenomenon can be especially costly when considering the various optimizations that the file system can no longer make:
<ul>
<li>When performing a write, the file system could perform this write in memory and flush this write to physical storage when appropriate. This helps dramatically accelerate write operations by avoiding accessing slow, non-volatile media before completing every write.</li>
<li>Certain writes, however, could force the file system to perform additional IO operations, such as reading in data that is already written to a storage device. Reading data from a storage device significantly delays the completion of the original write, as the file system must wait until the appropriate data is retrieved from storage before making the write.</li>
</ul>
</li>
</ul>
<p><strong>ReFS cluster sizes</strong></p>
<p>ReFS offers both 4K and 64K clusters. 4K is the default cluster size for ReFS, and <em>we recommend using 4K cluster sizes for most ReFS deployments </em>because it helps reduce costly IO amplification:</p>
<ul>
<li>In general, if the cluster size exceeds the size of the IO, certain workflows can trigger unintended IOs to occur. Consider the following scenario where a ReFS volume is formatted with 64K clusters:
<ul>
<li>Consider a <a href="https://technet.microsoft.com/en-us/windows-server-docs/storage/refs/refs-overview#performance">tiered volume</a>. If a 4K write is made to a range currently in the capacity tier, ReFS must read the entire cluster from the capacity tier into the performance tier <em>before making the write</em>. Because the cluster size is the smallest granularity that the file system can use, ReFS must read the entire cluster, which includes an unmodified 60K region, to be able to complete the 4K write.</li>
</ul>
</li>
<li>By choosing 4K clusters instead of 64K clusters, one can reduce the number of IOs that are smaller than the cluster size, preventing costly IO amplification from occurring as frequently.</li>
</ul>
<p>Additionally, 4K cluster sizes offer greater compatibility with Hyper-V IO granularity, so we strongly recommend using 4K cluster sizes with Hyper-V on ReFS. 64K clusters are applicable when working with large, sequential IO, but otherwise, 4K should be the default cluster size.</p>
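<p>To make this concrete, here is a minimal sketch (drive letters are placeholders) of formatting ReFS volumes with an explicit cluster size using the standard <code>Format-Volume</code> cmdlet:</p>
<pre><code># ReFS with 4K clusters - the default, and the recommendation for most deployments (including Hyper-V)
Format-Volume -DriveLetter E -FileSystem ReFS -AllocationUnitSize 4096

# ReFS with 64K clusters - applicable for large, sequential IO workloads
Format-Volume -DriveLetter F -FileSystem ReFS -AllocationUnitSize 65536
</code></pre>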
<p><strong>NTFS cluster sizes</strong></p>
<p>NTFS offers cluster sizes from 512 bytes to 64K, but in general, we recommend a 4K cluster size on NTFS, as 4K clusters help minimize wasted space when storing small files. We also strongly discourage using cluster sizes smaller than 4K. There are two cases, however, where 64K clusters could be appropriate:</p>
<ul>
<li>4K clusters limit the maximum volume and file size to 16 TB
<ul>
<li>64K cluster sizes can offer increased volume and file capacity, which is relevant if you’re hosting a large deployment on your NTFS volume, such as hosting VHDs or a SQL deployment.</li>
</ul>
</li>
<li>NTFS has a fragmentation limit, and larger cluster sizes can help reduce the likelihood of reaching this limit
<ul>
<li>Because NTFS is backward compatible, it must use internal structures that weren’t optimized for modern storage demands. Thus, the metadata in NTFS prevents any file from having more than ~1.5 million extents.
<ul>
<li>One can, however, use the “format /L” option to increase the fragmentation limit to ~6 million. Read more <a href="https://support.microsoft.com/en-us/kb/967351">here</a>.</li>
</ul>
</li>
<li>64K cluster deployments are less susceptible to this fragmentation limit, so 64K clusters are a better option if the NTFS fragmentation limit is an issue. (Data deduplication, sparse files, and SQL deployments can cause a high degree of fragmentation.)
<ul>
<li>Unfortunately, NTFS compression only works with 4K clusters, so using 64K clusters isn’t suitable when using NTFS compression. Consider increasing the fragmentation limit instead, as described in the previous bullets.</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>While a 4K cluster size is the default setting for NTFS, there are many scenarios where 64K cluster sizes make sense, such as: Hyper-V, SQL, deduplication, or when most of the files on a volume are large.</p>
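<p>As a companion sketch (again, drive letters are placeholders), the same cmdlet can format NTFS for the scenarios above; the <code>-UseLargeFRS</code> switch, where available, corresponds to the “format /L” option mentioned earlier for raising the fragmentation limit:</p>
<pre><code># NTFS with 64K clusters - e.g. for large VHD or SQL deployments
Format-Volume -DriveLetter G -FileSystem NTFS -AllocationUnitSize 65536

# NTFS with 4K clusters plus large file record segments ("format /L"),
# useful when NTFS compression is required but fragmentation is a concern
Format-Volume -DriveLetter H -FileSystem NTFS -AllocationUnitSize 4096 -UseLargeFRS
</code></pre>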
]]></content:encoded>
<wfw:commentRss>https://blogs.technet.microsoft.com/filecab/2017/01/13/cluster-size-recommendations-for-refs-and-ntfs/feed/</wfw:commentRss>
<slash:comments>2</slash:comments>
</item>
<item>
<title>Deep Dive: The Storage Pool in Storage Spaces Direct</title>
<link>https://blogs.technet.microsoft.com/filecab/2016/11/21/deep-dive-pool-in-spaces-direct/</link>
<comments>https://blogs.technet.microsoft.com/filecab/2016/11/21/deep-dive-pool-in-spaces-direct/#comments</comments>
<pubDate>Mon, 21 Nov 2016 17:09:43 +0000</pubDate>
<dc:creator><![CDATA[Cosmos Darwin]]></dc:creator>
<category><![CDATA[SDS]]></category>
<category><![CDATA[Software Defined Storage]]></category>
<category><![CDATA[Windows 10]]></category>
<category><![CDATA[Windows Server 2016]]></category>
<category><![CDATA[BiggerPoolThanKanye]]></category>
<category><![CDATA[failover clustering]]></category>
<category><![CDATA[S2D]]></category>
<category><![CDATA[Storage Spaces]]></category>
<category><![CDATA[Storage Spaces Direct]]></category>
<guid isPermaLink="false">https://blogs.technet.microsoft.com/filecab/?p=7335</guid>
<description><![CDATA[Hi! I’m Cosmos. Follow me on Twitter @cosmosdarwin. Review The storage pool is the collection of physical drives which form the basis of your software-defined storage. Those familiar with Storage Spaces in Windows Server 2012 or 2012R2 will remember that pools took some managing – you had to create and configure them, and then manage... <a aria-label="read more about Deep Dive: The Storage Pool in Storage Spaces Direct" href="https://blogs.technet.microsoft.com/filecab/2016/11/21/deep-dive-pool-in-spaces-direct/" class="read-more">Read more</a>]]></description>
<content:encoded><![CDATA[<p>Hi! I’m Cosmos. Follow me on Twitter <a href="https://twitter.com/CosmosDarwin">@cosmosdarwin</a>.</p>
<h3>Review</h3>
<p>The storage pool is the collection of physical drives which form the basis of your software-defined storage. Those familiar with Storage Spaces in Windows Server 2012 or 2012R2 will remember that pools took some managing – you had to create and configure them, and then manage membership by adding or removing drives. Because of scale limitations, most deployments had multiple pools, and because data placement was essentially static (more on this later), you couldn’t really expand them once created.</p>
<p>We’re introducing some exciting improvements in Windows Server 2016.</p>
<h3>What’s new</h3>
<p>With <a href="http://aka.ms/s2d">Storage Spaces Direct</a>, we now support up to 416 drives per pool, the same as our per-cluster maximum, and we strongly recommend you use exactly one pool per cluster. When you enable Storage Spaces Direct (as with the <code>Enable-ClusterS2D</code> cmdlet), this pool is automatically created and configured with the best possible settings for your deployment. Eligible drives are automatically discovered and added to the pool and, if you scale out, any new drives are added to the pool too, and data is moved around to make use of them. When drives fail they are automatically retired and removed from the pool. In fact, you really don’t need to manage the pool at all anymore except to keep an eye on its available capacity.</p>
<p>Nonetheless, understanding how the pool works can help you reason about fault tolerance, scale-out, and more. So if you’re curious, read on!</p>
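<p>If you’d like to verify this on your own cluster, a minimal sketch (using the Windows Server 2016 cmdlets; the cluster itself is assumed to already exist) looks like this:</p>
<pre><code># Enable Storage Spaces Direct - the pool is created and configured automatically
Enable-ClusterStorageSpacesDirect

# Inspect the automatically created pool and its capacity
Get-StoragePool -FriendlyName S2D* | Select-Object FriendlyName, Size, AllocatedSize, HealthStatus
</code></pre>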
<p>To help illustrate certain key points, I’ve written a script (open-source, available at the end) which produces this view of the pool’s drives, organized by type, by server (‘node’), and by how much data they’re storing. The fastest drives in each server, listed at the top, are claimed for caching.</p>
<p><div id="attachment_7465" style="width: 774px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/PoolsBlog-Screenshot-1.png" width="764" height="500" class="wp-image-7465 size-full" /><p class="wp-caption-text">The storage pool forms the physical basis of your software-defined storage.</p></div></p>
<h3>The confusion begins: resiliency, slabs, and striping</h3>
<p>Let’s start with three servers forming one Storage Spaces Direct cluster.</p>
<p>Each server has 2 x 800 GB NVMe drives for caching and 4 x 2 TB SATA SSDs for capacity.</p>
<p><img src="https://msdnshared.blob.core.windows.net/media/2016/11/PoolsBlog-Servers-3-500x278.png" alt="poolsblog-servers-3" width="500" height="278" class="aligncenter size-mediumlarge wp-image-7415" /></p>
<p>We can create our first volume (‘Storage Space’) and choose 1 TiB in size, two-way mirrored. This implies we will maintain <em>two identical copies </em>of everything in that volume, always on different drives in different servers, so that if hardware fails or is taken down for maintenance, we’re sure to still have access to all our data. Consequently, this 1 TiB volume will actually occupy 2 TiB of physical capacity on disk, its so-called ‘footprint’ on the pool.</p>
<p><div id="attachment_7365" style="width: 970px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/PoolsBlog-Gif1.gif" alt="Our 1 TiB two-way mirror volume occupies 2 TiB of physical capacity, its ‘footprint’ on the pool." width="960" height="320" class="size-full wp-image-7365" /><p class="wp-caption-text">Our 1 TiB two-way mirror volume occupies 2 TiB of physical capacity, its ‘footprint’ on the pool.</p></div></p>
<p>(Storage Spaces offers many <a href="https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-fault-tolerance">resiliency types with differing storage efficiency</a>. For simplicity, this blog will show two-way mirroring. The concepts we’ll cover apply regardless of which resiliency type you choose, but two-way mirroring is by far the most straightforward to draw and explain. Likewise, although Storage Spaces offers <a href="https://technet.microsoft.com/en-us/windows-server-docs/failover-clustering/fault-domains">chassis and/or rack awareness</a>, this blog will assume the default server-level awareness for simplicity.)</p>
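<p>(For reference, a volume like the one just described could be created with a single cmdlet. The sketch below uses placeholder friendly names, but the parameters shown are the ones that control size and resiliency.)</p>
<pre><code># Create a 1 TiB, two-way mirrored volume on the Storage Spaces Direct pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -Size 1TB `
    -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1
</code></pre>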
<p>Okay, so we have 2 TiB of data to write to physical media. <strong>But where will these two tebibytes of data actually land? </strong></p>
<p>You might imagine that Spaces just picks any two drives, in different servers, and places the copies <em>in whole </em>on those drives. Alas, no. What if the volume were larger than the drive size? Okay, perhaps it spans <em>several</em> drives in both servers? Closer, but still no.</p>
<p>What actually happens can be surprising if you’ve never seen it before.</p>
<p><div id="attachment_7515" style="width: 970px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/PoolsBlog-Gif2-SV.gif" width="960" height="320" class="wp-image-7515 size-full" alt="Storage Spaces starts by dividing the volume into many 'slabs', each 256 MB in size." /><p class="wp-caption-text">Storage Spaces starts by dividing the volume into many ‘slabs’, each 256 MB in size.</p></div></p>
<p>Storage Spaces starts by dividing the volume into many ‘slabs’, each 256 MB in size. This means our 1 TiB volume has some 4,000 such slabs!</p>
<p>For each slab, two copies are made and placed on different drives in different servers. This decision is made independently for each slab, successively, with an eye toward equilibrating utilization – you can think of it like dealing playing cards into equal piles. This means every single drive in the storage pool will store some copies of some slabs!</p>
<p><div id="attachment_7525" style="width: 970px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/PoolsBlog-Gif3-SV.gif" width="960" height="540" class="wp-image-7525 size-full" /><p class="wp-caption-text">The placement decision is made independently for each slab, like dealing playing cards into equal piles.</p></div></p>
<p>This can be non-obvious, but it has some real consequences you can observe. For one, it means all drives in all servers will gradually “fill up” in lockstep, in 256 MB increments. This is why we rarely pay attention to how full specific drives or servers are – because they’re (almost) always (almost) the same!</p>
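<p>If you want to sanity-check the arithmetic for the three-server, twelve-capacity-drive example above, a quick back-of-the-envelope calculation goes like this:</p>
<pre><code>$volumeSize = 1TB      # the volume we created
$slabSize   = 256MB    # Storage Spaces slab size
$copies     = 2        # two-way mirror

$slabs     = $volumeSize / $slabSize   # 4096 slabs
$footprint = $volumeSize * $copies     # 2 TiB footprint on the pool
$perDrive  = $footprint / 12           # ~171 GiB landing on each capacity drive
</code></pre>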
<p><div id="attachment_7475" style="width: 774px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/PoolsBlog-Screenshot-2.png" width="764" height="500" class="wp-image-7475 size-full" alt="Slabs of our two-way mirrored volume have landed on every drive in all three servers." /><p class="wp-caption-text">Slabs of our two-way mirrored volume have landed on every drive in all three servers.</p></div></p>
<p>(For the curious reader: the pool keeps a sprawling mapping of which drive has each copy of each slab called the ‘pool metadata’ which can reach up to several gigabytes in size. It is replicated to at least five of the fastest drives in the cluster, and synchronized and repaired with the utmost aggressiveness. To my knowledge, pool metadata loss has <em>never</em> taken down an actual production deployment of Storage Spaces.)</p>
<h3>Why? Can you spell parallelism?</h3>
<p>This may seem complicated, and it is. So why do it? Two reasons.</p>
<h3>Performance, performance, performance!</h3>
<p>First, striping every volume across every drive unlocks truly awesome potential for reads and writes – especially larger sequential ones – to activate many drives in parallel, vastly increasing IOPS and IO throughput. The unrivaled performance of Storage Spaces Direct compared to competing technologies is largely attributable to this fundamental design. (There is more complexity here, with the infamous <em>column count </em>and <em>interleave </em>you may remember from 2012 or 2012R2, but that’s beyond the scope of this blog. Spaces automatically sets appropriate values for these in 2016 anyway.)</p>
<p>(This is also why members of the core Spaces engineering team take some offense if you compare mirroring directly to RAID-1.)</p>
<h3>Improved data safety</h3>
<p>The second is data safety – it’s related, but worth explaining in detail.</p>
<p>In Storage Spaces, when drives fail, their contents are reconstructed elsewhere based on the surviving copy or copies. We call this ‘repairing’, and it happens automatically and immediately in Storage Spaces Direct. If you think about it, repairing must involve two steps – first, reading from the surviving copy; second, writing out a new copy to replace the lost one.</p>
<p>Bear with me for a paragraph, and imagine if we kept <em>whole </em>copies of volumes. (Again, we don’t.) Imagine one drive has every slab of our 1 TiB volume, and another drive has the copy of every slab. What happens if the first drive fails? The other drive has the only surviving copy. Of <em>every</em> slab. To repair, we need to read from it. <em>Every. Last. Byte.</em> We are obviously limited by the read speed of that drive. Worse yet, we then need to write all that out again to the replacement drive or hot spare, where we are limited by its write speed. Yikes! Inevitably, this leads to contention with ongoing user or application IO activity. Not good.</p>
<p>Storage Spaces, unlike some of our friends in the industry, does not do this.</p>
<p>Consider again the scenario where some drive fails. We <em>do </em>lose all the slabs stored on that drive. And we <em>do </em>need to read from each slab’s surviving copy in order to repair. <strong>But, where are these surviving copies?</strong> They are evenly distributed across almost every other drive in the pool! One lost slab might have its other copy on Drive 15; another lost slab might have its other copy on Drive 03; another lost slab might have its other copy on Drive 07; and so on. So, almost every other drive in the pool has something to contribute to the repair!</p>
<p>Next, we <em>do</em> need to write out the new copy of each – <strong>where can these new copies be written? </strong>Provided there is available capacity, each lost slab can be re-constructed on almost any other drive in the pool!</p>
<p>(For the curious reader: I say <em>almost</em> because the requirement that slab copies land in different servers precludes any drives in the same server as the failure from having anything to contribute, read-wise. They were never eligible to get the other copy. Similarly, those drives in the same server as the surviving copy are ineligible to receive the new copy, and so have nothing to contribute write-wise. This detail turns out not to be terribly consequential.)</p>
<p>While this can be non-obvious, it has some significant implications. Most importantly, repairing data faster minimizes the risk that multiple hardware failures will overlap in time, improving overall data safety. It is also more convenient, as it reduces the ‘resync’ wait time during rolling cluster-wide updates or maintenance. And because the read/write burden is spread thinly among all surviving drives, the load on each drive individually is light, which minimizes contention with user or application activity.</p>
<h3>Reserve capacity</h3>
<p>For this to work, you need to set aside some extra capacity in the storage pool. You can think of this as giving the contents of a failed drive “somewhere to go” to be repaired. For example, to repair from one drive failure (without immediately replacing it), you should set aside at least one drive’s worth of reserve capacity. (If you are using 2 TB drives, that means leaving 2 TB of your pool unallocated.) This serves the same function as a hot spare, but unlike an actual hot spare, the reserve capacity is taken evenly from every drive in the pool.</p>
<p><div id="attachment_7355" style="width: 1814px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/Reserve-Capacity.png" alt="Reserve capacity gives the contents of a failed drive "somewhere to go" to be repaired." width="1804" height="677" class="wp-image-7355 size-full" /><p class="wp-caption-text">Reserve capacity gives the contents of a failed drive “somewhere to go” to be repaired.</p></div></p>
<p>Reserving capacity is not enforced by Storage Spaces, but we highly recommend it. The more you have, the less urgently you will need to scramble to replace drives when they fail, because your volumes can (and will automatically) repair into the reserve capacity, completely independent of the physical replacement process.</p>
<p>When you do eventually replace the drive, it will automatically take its predecessor’s place in the pool.</p>
<p>Check out our <a href="http://aka.ms/s2dcalc">capacity calculator</a> for help with determining appropriate reserve capacity.</p>
<h3>Automatic pooling and re-balancing</h3>
<p>New in Windows 10 and Windows Server 2016, slabs and their copies can be moved around between drives in the storage pool to equilibrate utilization. We call this ‘optimizing’ or ‘re-balancing’ the storage pool, and it’s essential for scalability in Storage Spaces Direct.</p>
<p>For instance, what if we need to add a fourth server to our cluster?</p>
<pre><code>Add-ClusterNode -Name <Name></code></pre>
<p><img src="https://msdnshared.blob.core.windows.net/media/2016/11/PoolsBlog-Servers-4-500x278.png" alt="poolsblog-servers-4" width="500" height="278" class="aligncenter wp-image-7405 size-mediumlarge" /></p>
<p>The new drives in this new server will be added automatically to the storage pool. At first, they’re empty.</p>
<p><div id="attachment_7485" style="width: 774px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/PoolsBlog-Screenshot-3.png" width="764" height="548" class="wp-image-7485 size-full" /><p class="wp-caption-text">The capacity drives in our fourth server are empty, for now.</p></div></p>
<p>After 30 minutes, Storage Spaces Direct will automatically begin re-balancing the storage pool – moving slabs around to even out drive utilization. This can take some time (many hours) for larger deployments. You can watch its progress using the following cmdlet.</p>
<pre><code>Get-StorageJob</code></pre>
<p>If you’re impatient, or if your deployment uses Shared SAS Storage Spaces with Windows Server 2016, you can kick off the re-balance yourself using the following cmdlet.</p>
<pre><code>Optimize-StoragePool -FriendlyName "S2D*"</code></pre>
<p><div id="attachment_7395" style="width: 970px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/PoolsBlog-Gif4.gif" alt="The storage pool is 're-balanced' whenever new drives are added to even out utilization." width="960" height="320" class="size-full wp-image-7395" /><p class="wp-caption-text">The storage pool is ‘re-balanced’ whenever new drives are added to even out utilization.</p></div></p>
<p>Once completed, we see that our 1 TiB volume is (almost) evenly distributed across all the drives in all <em>four </em>servers.</p>
<p><div id="attachment_7495" style="width: 774px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/PoolsBlog-Screenshot-4.png" width="764" height="549" class="wp-image-7495 size-full" alt="The slabs of our 1 TiB two-way mirrored volume are now spread evenly across all four servers." /><p class="wp-caption-text">The slabs of our 1 TiB two-way mirrored volume are now spread evenly across all four servers.</p></div></p>
<p>And going forward, when we create new volumes, they too will be distributed evenly across all drives in all servers.</p>
<p>This can explain one final phenomenon you might observe – that when a drive fails, <em>every</em> volume is marked ‘Incomplete’ for the duration of the repair. Can you figure out why?</p>
<h3>Conclusion</h3>
<p>Okay, that’s it for now. If you’re still reading, wow, thank you!</p>
<p>Let’s review some key takeaways.</p>
<ul>
<li>Storage Spaces Direct automatically creates one storage pool, which grows as your deployment grows. You do not need to modify its settings, add or remove drives from the pool, nor create new pools.</li>
<li>Storage Spaces does not keep whole copies of volumes – rather, it divides them into tiny ‘slabs’ which are distributed evenly across all drives in all servers. This has some practical consequences. For example, using two-way mirroring with three servers does <em>not</em> leave one server empty. Likewise, when drives fail, all volumes are affected for the very short time it takes to repair them.</li>
<li>Leaving some unallocated ‘reserve’ capacity in the pool allows this fast, non-invasive, parallel repair to happen even before you replace the drive.</li>
<li>The storage pool is ‘re-balanced’ whenever new drives are added, such as on scale-out or after replacement, to equilibrate how much data every drive is storing. This ensures all drives and all servers are always equally “full”.</li>
</ul>
<h3>U Can Haz Script</h3>
<p>In PowerShell, you can see the storage pool by running the following cmdlet.</p>
<pre><code>Get-StoragePool S2D*</code></pre>
<p>And you can see the drives in the pool with this simple pipeline.</p>
<pre><code>Get-StoragePool S2D* | Get-PhysicalDisk</code></pre>
<p>Throughout this blog, I showed the output of a script which essentially runs the above, cherry-picks interesting properties, and formats the output all fancy-like. That script is included below, and is also available at <a href="http://cosmosdarwin.com/Show-PrettyPool.ps1">http://cosmosdarwin.com/Show-PrettyPool.ps1</a> to spare you the 200-line copy/paste. There is also a simplified version <a href="http://cosmosdarwin.com/Show-PrettyPool-Simplified.ps1">here</a> which forgoes my extravagant helper functions to reduce running time by about 20x and lines of code by about 2x. <img src="https://s.w.org/images/core/emoji/2/72x72/1f642.png" alt="" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>Let me know what you think!</p>
<pre><code># Written by Cosmos Darwin, PM
# Copyright (C) 2016 Microsoft Corporation
# MIT License
# 11/2016
Function ConvertTo-PrettyCapacity {
<#
.SYNOPSIS Convert raw bytes into prettier capacity strings.
.DESCRIPTION Takes an integer of bytes, converts to the largest unit (kilo-, mega-, giga-, tera-) that will result in at least 1.0, rounds to given precision, and appends standard unit symbol.
.PARAMETER Bytes The capacity in bytes.
.PARAMETER UseBaseTwo Switch to toggle use of binary units and prefixes (mebi, gibi) rather than standard (mega, giga).
.PARAMETER RoundTo The number of decimal places for rounding, after conversion.
#>
Param (
[Parameter(
Mandatory = $True,
ValueFromPipeline = $True
)
]
[Int64]$Bytes,
[Int64]$RoundTo = 0,
[Switch]$UseBaseTwo # Base-10 by Default
)
If ($Bytes -Gt 0) {
$BaseTenLabels = ("bytes", "KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB")
$BaseTwoLabels = ("bytes", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB")
If ($UseBaseTwo) {
$Base = 1024
$Labels = $BaseTwoLabels
}
Else {
$Base = 1000
$Labels = $BaseTenLabels
}
$Order = [Math]::Floor( [Math]::Log($Bytes, $Base) )
$Rounded = [Math]::Round($Bytes/( [Math]::Pow($Base, $Order) ), $RoundTo)
[String]($Rounded) + $Labels[$Order]
}
Else {
0
}
Return
}
Function ConvertTo-PrettyPercentage {
<#
.SYNOPSIS Convert (numerator, denominator) into prettier percentage strings.
.DESCRIPTION Takes two integers, divides the former by the latter, multiplies by 100, rounds to given precision, and appends "%".
.PARAMETER Numerator Really?
.PARAMETER Denominator C'mon.
.PARAMETER RoundTo The number of decimal places for rounding.
#>
Param (
[Parameter(Mandatory = $True)]
[Int64]$Numerator,
[Parameter(Mandatory = $True)]
[Int64]$Denominator,
[Int64]$RoundTo = 1
)
If ($Denominator -Ne 0) { # Cannot Divide by Zero
$Fraction = $Numerator/$Denominator
$Percentage = $Fraction * 100
$Rounded = [Math]::Round($Percentage, $RoundTo)
[String]($Rounded) + "%"
}
Else {
0
}
Return
}
Function Find-LongestCommonPrefix {
<#
.SYNOPSIS Find the longest prefix common to all strings in an array.
.DESCRIPTION Given an array of strings (e.g. "Seattle", "Seahawks", and "Season"), returns the longest starting substring ("Sea") which is common to all the strings in the array. Not case sensitive.
.PARAMETER Strings The input array of strings.
#>
Param (
[Parameter(
Mandatory = $True
)
]
[Array]$Array
)
If ($Array.Length -Gt 0) {
$Exemplar = $Array[0]
$PrefixEndsAt = $Exemplar.Length # Initialize
0..$Exemplar.Length | ForEach {
$Character = $Exemplar[$_]
ForEach ($String in $Array) {
If ($String[$_] -Eq $Character) {
# Match
}
Else {
$PrefixEndsAt = [Math]::Min($_, $PrefixEndsAt)
}
}
}
# Prefix
$Exemplar.SubString(0, $PrefixEndsAt)
}
Else {
# None
}
Return
}
Function Reverse-String {
<#
.SYNOPSIS Takes an input string ("Gates") and returns the character-by-character reversal ("setaG").
#>
Param (
[Parameter(
Mandatory = $True,
ValueFromPipeline = $True
)
]
$String
)
$Array = $String.ToCharArray()
[Array]::Reverse($Array)
-Join($Array)
Return
}
Function New-UniqueRootLookup {
<#
.SYNOPSIS Creates hash table that maps strings, particularly server names of the form [CommonPrefix][Root][CommonSuffix], to their unique Root.
.DESCRIPTION For example, given ("Server-A2.Contoso.Local", "Server-B4.Contoso.Local", "Server-C6.Contoso.Local"), returns key-value pairs:
{
"Server-A2.Contoso.Local" -> "A2"
"Server-B4.Contoso.Local" -> "B4"
"Server-C6.Contoso.Local" -> "C6"
}
.PARAMETER Strings The keys of the hash table.
#>
Param (
[Parameter(
Mandatory = $True
)
]
[Array]$Strings
)
# Find Prefix
$CommonPrefix = Find-LongestCommonPrefix $Strings
# Find Suffix
$ReversedArray = @()
ForEach($String in $Strings) {
$ReversedString = $String | Reverse-String
$ReversedArray += $ReversedString
}
$CommonSuffix = $(Find-LongestCommonPrefix $ReversedArray) | Reverse-String
# String -> Root Lookup
$Lookup = @{}
ForEach($String in $Strings) {
$Lookup[$String] = $String.Substring($CommonPrefix.Length, $String.Length - $CommonPrefix.Length - $CommonSuffix.Length)
}
$Lookup
Return
}
### SCRIPT... ###
$Nodes = Get-StorageSubSystem Cluster* | Get-StorageNode
$Drives = Get-StoragePool S2D* | Get-PhysicalDisk
$Names = @()
ForEach ($Node in $Nodes) {
$Names += $Node.Name
}
$UniqueRootLookup = New-UniqueRootLookup $Names
$Output = @()
ForEach ($Drive in $Drives) {
If ($Drive.BusType -Eq "NVMe") {
$SerialNumber = $Drive.AdapterSerialNumber
$Type = $Drive.BusType
}
Else { # SATA, SAS
$SerialNumber = $Drive.SerialNumber
$Type = $Drive.MediaType
}
If ($Drive.Usage -Eq "Journal") {
$Size = $Drive.Size | ConvertTo-PrettyCapacity
$Used = "-"
$Percent = "-"
}
Else {
$Size = $Drive.Size | ConvertTo-PrettyCapacity
$Used = $Drive.VirtualDiskFootprint | ConvertTo-PrettyCapacity
$Percent = ConvertTo-PrettyPercentage $Drive.VirtualDiskFootprint $Drive.Size
}
$Node = $UniqueRootLookup[($Drive | Get-StorageNode -PhysicallyConnected).Name]
# Pack
$Output += [PSCustomObject]@{
"SerialNumber" = $SerialNumber
"Type" = $Type
"Node" = $Node
"Size" = $Size
"Used" = $Used
"Percent" = $Percent
}
}
$Output | Sort Used, Node | FT
</code></pre>
]]></content:encoded>
<wfw:commentRss>https://blogs.technet.microsoft.com/filecab/2016/11/21/deep-dive-pool-in-spaces-direct/feed/</wfw:commentRss>
<slash:comments>51</slash:comments>
</item>
<item>
<title>Don’t do it: consumer-grade solid-state drives (SSD) in Storage Spaces Direct</title>
<link>https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd/</link>
<comments>https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd/#comments</comments>
<pubDate>Fri, 18 Nov 2016 16:00:23 +0000</pubDate>
<dc:creator><![CDATA[Cosmos Darwin]]></dc:creator>
<category><![CDATA[SDS]]></category>
<category><![CDATA[Software Defined Storage]]></category>
<category><![CDATA[Uncategorized]]></category>
<category><![CDATA[Windows Server 2016]]></category>
<category><![CDATA[Clustering]]></category>
<category><![CDATA[Hardware]]></category>
<category><![CDATA[Performance]]></category>
<category><![CDATA[S2D]]></category>
<category><![CDATA[Storage Spaces Direct]]></category>
<guid isPermaLink="false">https://blogs.technet.microsoft.com/filecab/?p=7255</guid>
<description><![CDATA[// This post was written by Dan Lovinger, Principal Software Engineer. Howdy, In the weeks since the release of Windows Server 2016, the amount of interest we’ve seen in Storage Spaces Direct has been nothing short of spectacular. This interest has translated to many potential customers looking to evaluate Storage Spaces Direct. Windows Server has... <a aria-label="read more about Don’t do it: consumer-grade solid-state drives (SSD) in Storage Spaces Direct" href="https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd/" class="read-more">Read more</a>]]></description>
<content:encoded><![CDATA[<p><em>// This post was written by Dan Lovinger, Principal Software Engineer.</em></p>
<p>Howdy,</p>
<p>In the weeks since the release of Windows Server 2016, the amount of interest we’ve seen in <a href="http://aka.ms/s2d">Storage Spaces Direct</a> has been nothing short of spectacular. This interest has translated to many potential customers looking to evaluate Storage Spaces Direct.</p>
<p>Windows Server has a strong heritage of do-it-yourself design. We’ve even done it ourselves with the <a href="https://blogs.technet.microsoft.com/filecab/2016/10/14/kepler-47">Project Kepler-47 proof of concept</a>! While many OEM-validated solutions will come to market over the coming months, many more experimenters are once again piecing together their own configurations.</p>
<p>This is great, and it has led to a lot of questions, particularly about Solid-State Drives (SSDs). One dominates: <em>“Is <strong>[some drive]</strong> a good choice for a cache device?”</em> Another comes in close behind: <em>“We’re using <strong>[some drive]</strong> as a cache device and performance is horrible, what gives?”</em></p>
<p><div id="attachment_7256" style="width: 510px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/SSD-Buffer-And-Spare-500x305.png" alt="The flash translation layer masks a variety of tricks an SSD can use to accelerate performance and extend its lifetime, such as buffering and spare capacity." width="500" height="305" class="wp-image-7256 size-mediumlarge" /><p class="wp-caption-text">The flash translation layer masks a variety of tricks an SSD can use to accelerate performance and extend its lifetime, such as buffering and spare capacity.</p></div></p>
<p><strong>Some background on SSDs</strong></p>
<p>As I write this in late 2016, an SSD is universally a device built from a set of NAND flash dies connected to an internal controller, called the flash translation layer (“FTL”).</p>
<p>NAND flash is inherently unstable. At the physical level, a flash cell is a charge trap device – a bucket for storing electrons. The high voltages needed to trigger the quantum tunneling process that moves electrons – your data – in and out of the cell slowly cause damage to accumulate at the atomic level. Failure does not happen all at once. Charge degrades in place over time, and even reads aren’t without cost, a phenomenon known as read disturb.</p>
<p>The number of electrons in the cell’s charge trap translates to a measurable voltage. At its most basic, a flash cell stores one on/off bit – a single-level cell (SLC) – and the difference between 0 and 1 is “easy”. There is only one threshold voltage to consider. On one side the cell represents 0, on the other it is 1.</p>
<p>However, conventional SSDs have moved on from SLC designs. Common SSDs now store two (MLC) or even three (TLC) bits per cell, requiring four (00, 01, 10, 11) or eight (001, 010, … 110, 111) different charge levels. On the horizon is 4 bit QLC NAND, which will require sixteen! As the damage accumulates it becomes difficult to reliably set charge levels; eventually, they cannot store new data. This happens faster and faster as bit densities increase.</p>
<ul>
<li>SLC: 100,000 or more writes per cell</li>
<li>MLC: 10,000 to 20,000</li>
<li>TLC: low to mid 1,000’s</li>
<li>QLC: mid-100’s</li>
</ul>
<p>The FTL has two basic defenses.</p>
<ul>
<li>error correcting codes (ECC) stored alongside the data</li>
<li>extra physical capacity, over and above the apparent size of the device, “over-provisioning”</li>
</ul>
<p>Both defenses work like a bank account.</p>
<p>Over the short term, some amount of the ECC is needed to recover the data on each read. Lightly-damaged cells or recently-written data won’t draw heavily on ECC, but as time passes, more of the ECC is necessary to recover the data. When it passes a safety margin, the data must be re-written to “refresh” the data and ECC, and the cycle continues.</p>
<p>Across a longer term, the over-provisioning in the device replaces failed cells and preserves the apparent capacity of the SSD. Once this account is drawn down, the device is at the end of its life.</p>
<p>To complete the physical picture, NAND is not freely writable. A die is divided into what we refer to as program/erase (“P/E”) pages. These are the actual writable elements. A page must first be erased to prepare it for writing; then the entire page can be written at once. A page may be as small as 16K, or potentially much larger. Any single write that arrives in the SSD probably won’t line up with the page size!</p>
<p>And finally, NAND never re-writes in place. The FTL is continuously keeping track of wear, preparing fresh erased pages, and consolidating valid data sitting in pages alongside stale data corresponding to logical blocks which have already been re-written. These are additional reasons for over-provisioning.</p>
<p><div id="attachment_7325" style="width: 889px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/SSD-Comparison1-1024x511.png" width="879" height="439" class="wp-image-7325 size-large" /><p class="wp-caption-text">In consumer devices, and especially in mobile, an SSD can safely leverage an unprotected, volatile cache because the device’s battery ensures it will not unexpectedly lose power. In servers, however, an SSD must provide its own power protection, typically in the form of a capacitor.</p></div></p>
<p><strong>Buffers and caches</strong></p>
<p>The bottom line is that a NAND flash SSD is a complex, dynamic environment and there is a lot going on to keep your data safe. As device densities increase, it is getting ever harder. We must maximize the value of each write, as it takes the device one step closer to failure. Fortunately, we have a trick: a buffer.</p>
<p>A buffer in an SSD is just like the cache in the system that surrounds it: some memory which can accumulate writes, allowing the user/application request to complete while it gathers more and more data to write efficiently to the NAND flash. Many small operations turn into a small number of larger operations. Just like the memory in a conventional computer, though, on its own that buffer is volatile – if a power loss occurs, any pending write operations are lost.</p>
<p>Losing data is, of course, not acceptable. Storage Spaces Direct is at the far end of a series of actions which have led to it getting a write. A virtual machine on another computer may have had an application issue a flush which, in a physical system, would put the data on stable storage. After Storage Spaces Direct acknowledges <em>any </em>write, it must be stable.</p>
<p>How can any SSD have a volatile cache!? Simple, and it is a crucial detail of how the SSD market has differentiated itself: you are very likely reading this on a device with a battery! <em>Consumer</em> flash is volatile in the device but not volatile when considering the entire system – your phone, tablet or laptop. Making a cache non-volatile requires some form of power storage (or new technology …), which adds unneeded expense in the consumer space.</p>
<p><span>What about servers? In the enterprise space, the cost and complexity of providing complete power safety to a collection of servers can be prohibitive. This is the design point enterprise SSDs sit in: the added cost of internal power capacity to allow saving the buffer content is small.</span></p>
<p><div id="attachment_7295" style="width: 889px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/35-1024x768.jpg" alt="An (older) enterprise-grade SSD, with its removable and replaceable built-in battery!" width="879" height="659" class="wp-image-7295 size-large" /><p class="wp-caption-text">An (older) enterprise-grade SSD, with its removable and replaceable built-in battery!</p></div></p>
<p><div id="attachment_7305" style="width: 889px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/15-1024x768.jpg" alt="This newer enterprise-grade SSD, foreground, uses a capacitor (the three little yellow things, bottom right) to provide power-loss protection." width="879" height="659" class="wp-image-7305 size-large" /><p class="wp-caption-text">This newer enterprise-grade SSD, foreground, uses a capacitor (the three little yellow things, bottom right) to provide power-loss protection.</p></div></p>
<p>Along with volatile caches, consumer flash is universally of lower endurance. A consumer device targets environments with light activity. Extremely dense, inexpensive, fragile NAND flash – which may wear out after only a thousand writes – could still provide many years of service. However, expressed in total writes over time or capacity written per day, a consumer device could wear out more than <em>10x</em> faster than an available enterprise-class SSD.</p>
<p>So, where does that leave us? Two requirements for SSDs for Storage Spaces Direct. One hard, one soft, but they normally go together:</p>
<ul>
<li>the device must have a non-volatile write cache</li>
<li>the device <em>should</em> have enterprise-class endurance</li>
</ul>
<p>But … could I get away with it? And more crucially – for us – what happens if I just put a consumer-grade SSD with a volatile write cache in a Storage Spaces Direct system?</p>
<p><strong>An experiment with consumer-grade SSDs</strong></p>
<p>For this experiment, we’ll be using a new-out-of-box 1 TB consumer class SATA SSD. While we won’t name it, it is a first tier, high quality, widely available device. It just happens to not be appropriate for an enterprise workload like Storage Spaces Direct, as we’ll see shortly.</p>
<p>In round numbers, its data sheet says the following:</p>
<ul>
<li>QD32 4K Read: 95,000 IOPS</li>
<li>QD32 4K Write: 90,000 IOPS</li>
<li>Endurance: 185TB over the device lifetime</li>
</ul>
<p>Note: QD (“queue depth”) is geek-speak for the targeted number of IOs outstanding during a storage test. Why do you always see 32? That’s the SATA Native Command Queueing (NCQ) limit on how many commands can be pipelined to a SATA device. SAS and especially NVMe can go much deeper.</p>
<p>Translating the endurance to the widely used device-writes-per-day (DWPD) metric over the device’s 5-year warranty period:</p>
<pre><code>185 TB / (365 days x 5 years = 1825 days) = ~ 100 GB writable per day
100 GB / 1 TB total capacity = 0.10 DWPD
</code></pre>
<p>The device can handle just over 100 GB each day for 5 years before its endurance is exhausted. That’s a lot of Netflix and web browsing for a single user! Not so much for a large set of virtualized workloads.</p>
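<p>If you want to run the same arithmetic against other datasheets, a tiny helper function (just a sketch for this post, not part of any shipping module) makes the conversion easy:</p>
<pre><code># Convert rated endurance (TB written over the warranty period) into drive-writes-per-day
Function ConvertTo-DWPD {
    Param (
        [Double]$EnduranceTB,        # e.g. 185
        [Double]$CapacityTB,         # e.g. 1
        [Double]$WarrantyYears = 5
    )
    [Math]::Round($EnduranceTB / ($CapacityTB * 365 * $WarrantyYears), 2)
}

ConvertTo-DWPD -EnduranceTB 185 -CapacityTB 1   # ~0.1 DWPD, matching the math above
</code></pre>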
<p>To gather the data below, I prepared the device with a 100 GiB load file, written through sequentially a little over 2 times. I used <a href="http://aka.ms/diskspd">DISKSPD 2.0.18</a> to do a QD8 70:30 4 KiB mixed read/write workload using 8 threads, each issuing a single IO at a time to the SSD. First with the write buffer enabled:</p>
<pre><code>diskspd.exe -t8 -b4k -r4k -o1 -w30 -Su -D -L -d1800 -Rxml Z:\load.bin</code></pre>
<p><div id="attachment_7315" style="width: 510px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/Graph-1-500x301.png" alt="Normal unbuffered IO sails along, with a small write cliff." width="500" height="301" class="wp-image-7315 size-mediumlarge" /><p class="wp-caption-text">Normal unbuffered IO sails along, with a small write cliff.</p></div></p>
<p>The first important note here is the length of the test: 30 minutes. This shows an abrupt drop of about 10,000 IOPS two minutes in – this is normal, certainly for consumer devices. It likely represents the FTL running out of pre-erased NAND ready for new writes. Once its reserve runs out, the device runs slower until a break in the action lets it catch back up. With web browsing and other consumer scenarios, the chances of noticing this are small.</p>
<p>An aside: this is a good, stable device in each mode of operation – behavior before and after the “write cliff” is very clean.</p>
<p>Second, note that the IOPS are … a bit different than the data sheet might have suggested, even before it reaches steady operation. We’re intentionally using a light QD8 70:30 4K mix to drive it more like a generalized workload. It still rolls over the write cliff. Under sustained, mixed IO pressure the FTL has much more work to take care of, and it shows.</p>
<p>That’s with the buffer on, though. Now just adding write-through (with -Su<em>w</em>):</p>
<pre><code>diskspd.exe -t8 -b4k -r4k -o1 -w30 -Suw -D -L -d1800 -Rxml Z:\load.bin</code></pre>
<p><div id="attachment_7316" style="width: 510px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/11/Graph-2-500x301.png" alt="Write-through IO exposes the true latency of NAND, normally masked by the FTL/buffer." width="500" height="301" class="wp-image-7316 size-mediumlarge" /><p class="wp-caption-text">Write-through IO exposes the true latency of NAND, normally masked by the FTL/buffer.</p></div></p>
<p>Wow!</p>
<p>First: it’s great that the device honors write-through requests. In the consumer space, this gives an application a useful tool for making data durable when it must be durable. This is a good device!</p>
<p>Second, <em>oh my</em> does the performance drop off. This is no longer an “SSD”: especially as it goes over the write cliff – which is still there – it’s merely a fast HDD, at about 220 IOPS. Writing NAND is slow! This is the FTL forced to push all the way into the NAND flash dies, immediately, without being able to buffer, de-conflict the read and write IO streams and manage all the other background activity it needs to do.</p>
<p>Third, those immediate writes take what is already a device with modest endurance and deliver a truly crushing blow to its total lifetime.</p>
<p>Crucially, <em>this</em> is how Storage Spaces Direct would see this SSD. Not much of a “cache” anymore.</p>
<p><strong>So, why does a non-volatile buffer help?</strong></p>
<p>It lets the SSD claim that a write is stable once it is in the buffer. A write-through operation – or a flush, or a request to disable the cache – can be honored without forcing all data directly into the NAND. We’ll get the good behavior, the stated endurance, <em>and</em> the data stability we require for reliable, software-defined storage to a complex workload.</p>
<p>In short, your device will behave much as we saw in the first chart: a nice, flat, fast performance profile. A good cache device. If it’s NVMe it may be even more impressive, but that’s a thought for another time.</p>
<p><strong>Finally, how do you identify a device with a non-volatile buffer cache?</strong></p>
<p>Datasheet, datasheet, datasheet. Look for language like:</p>
<ul>
<li>“Power loss protection” or “PLP”
<ul>
<li>Samsung SM863, and related</li>
<li>Toshiba HK4E series, and related</li>
</ul>
</li>
<li>“Enhanced power loss data protection”
<ul>
<li>Intel S3510, S3610, S3710, P3700 series, and related</li>
</ul>
</li>
</ul>
<p>… along with many others across the industry. You should be able to find a device from your favored provider. These will be more expensive than consumer grade devices, but hopefully we’ve convinced you why they are worth it.</p>
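<p>As a starting point for that datasheet hunt, you can at least inventory the drives in a system with a quick query (a sketch; the exact strings reported vary by vendor and firmware):</p>
<pre><code># List model and serial number of each physical drive so you can look up its datasheet
Get-PhysicalDisk | Sort-Object Model |
    Select-Object FriendlyName, Model, SerialNumber, BusType, MediaType, Size
</code></pre>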
<p>Be safe out there!</p>
<p>/ Dan Lovinger</p>
]]></content:encoded>
<wfw:commentRss>https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd/feed/</wfw:commentRss>
<slash:comments>25</slash:comments>
</item>
<item>
<title>Work Folders for Android can now upload files!</title>
<link>https://blogs.technet.microsoft.com/filecab/2016/11/08/work-folders-for-android-can-now-upload-files/</link>
<comments>https://blogs.technet.microsoft.com/filecab/2016/11/08/work-folders-for-android-can-now-upload-files/#respond</comments>
<pubDate>Tue, 08 Nov 2016 23:37:06 +0000</pubDate>
<dc:creator><![CDATA[Jeff Patterson - MSFT]]></dc:creator>
<category><![CDATA[Information Worker]]></category>
<category><![CDATA[Work Folders]]></category>
<guid isPermaLink="false">https://blogs.technet.microsoft.com/filecab/?p=7205</guid>
<description><![CDATA[Hi all, I’m Jeff Patterson, Program Manager for Work Folders. We’re excited to announce that we’ve released an updated version of Work Folders for Android to the Google Play Store which enables users to sync files that were created or edited on their Android device. Overview Work Folders is a Windows Server feature since 2012 R2... <a aria-label="read more about Work Folders for Android can now upload files!" href="https://blogs.technet.microsoft.com/filecab/2016/11/08/work-folders-for-android-can-now-upload-files/" class="read-more">Read more</a>]]></description>
<content:encoded><![CDATA[<p><span style="color: #000000">Hi all,</span></p>
<p><span style="color: #000000">I’m Jeff Patterson, Program Manager for Work Folders.</span></p>
<p><span style="color: #000000">We’re excited to announce that we’ve released an updated version of Work Folders for Android </span><span><span style="color: #000000">to the Google Play</span> </span><a href="https://play.google.com/store/apps/details?id=com.microsoft.workfolders"><span>Store</span></a> <span style="color: #000000">which enables users to sync files that were created or edited on their Android device.</span></p>
<p><a href="https://msdnshared.blob.core.windows.net/media/2016/11/WhatsNew.png"><img width="1430" height="909" class="alignnone size-full wp-image-7215" alt="whatsnew" src="https://msdnshared.blob.core.windows.net/media/2016/11/WhatsNew.png" /></a></p>
<h3><span style="color: #000000">Overview</span></h3>
<p><span style="color: #000000">Work Folders is a Windows Server feature since 20012 R2 that enables individual employees to access their files securely from inside and outside the corporate environment. The Work Folders app connects to the server and enables file access on your Android phone and tablet. Work Folders enables this while allowing the organization’s IT department to fully secure that data.</span></p>
<h3><span style="color: #000000">What’s New</span></h3>
<p><span style="color: #000000">Using the latest version of Work Folders for Android, users can now:</span></p>
<ul>
<li><span style="color: #000000">Sync files that were created or edited on their device</span></li>
<li><span style="color: #000000">Take pictures and write notes within the Work Folders application</span></li>
<li><span style="color: #000000">When working within other applications (i.e., Microsoft Word), the Work Folders location can be selected when opening or saving files. No need to open the Work Folders app to sync your files.</span></li>
</ul>
<p><span style="color: #000000">For the complete list of Work Folders for Android features, please reference the feature list section below.</span></p>
<p><a href="https://msdnshared.blob.core.windows.net/media/2016/11/Import.png"><img width="1430" height="909" class="alignnone size-full wp-image-7225" alt="import" src="https://msdnshared.blob.core.windows.net/media/2016/11/Import.png" /></a></p>
<p><a href="https://msdnshared.blob.core.windows.net/media/2016/11/Save.png"><img width="1430" height="909" class="alignnone size-full wp-image-7235" alt="save" src="https://msdnshared.blob.core.windows.net/media/2016/11/Save.png" /></a></p>
<h3><span style="color: #000000">Work Folders for Android – Feature List</span></h3>
<ul>
<li><span style="color: #000000">Sync files that were created or edited on your device</span></li>
<li><span style="color: #000000">Take pictures and write notes within the Work Folders app</span></li>
<li><span style="color: #000000">Pin files for offline viewing – saves storage space by showing all available files but locally storing and keeping in sync only the files you care about.</span></li>
<li><span style="color: #000000">Files are always encrypted – on the wire and at rest on the device.</span></li>
<li><span style="color: #000000">Access to the app is protected by an app passcode – keeping others out even if the device is left unlocked and unattended.</span></li>
<li><span style="color: #000000">Allows for DIGEST and Active Directory Federation Services (ADFS) authentication mechanisms including multi factor authentication.</span></li>
<li><span style="color: #000000">Search for files and folders</span></li>
<li><span style="color: #000000">Open files in other apps that might be specialized to work with a certain file type</span></li>
<li><span style="color: #000000">Integration with Microsoft Intune</span></li>
</ul>
<h3><span style="color: #000000">Android Version Support</span></h3>
<ul>
<li><span style="color: #000000">Work Folders for Android is supported on all devices running Android Version 4.4 KitKat or later.</span></li>
</ul>
<h3><span style="color: #000000">Known Issues</span></h3>
<ul>
<li><span style="color: #000000">Microsoft Office files are read-only when opening the files from the Work Folders app. To workaround this issue, open the file from the Office application (e.g., Microsoft Word).</span></li>
</ul>
<h3><span style="color: #000000">Blogs and Links</span></h3>
<p><span style="color: #000000"><span style="font-family: Times New Roman"> </span>If you’re interested in learning more about Work Folders, here are some great resources:</span></p>
<ul>
<li><span style="color: #000000">Work Folders</span> <a href="https://blogs.technet.microsoft.com/filecab/tag/work-folders/">blogs</a> <span style="color: #000000">on Server Storage blog</span></li>
<li><span style="color: #000000">Nir Ben Zvi</span> <a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/07/09/introducing-work-folders-on-windows-server-2012-r2.aspx">introduced Work Folders on Windows Server 2012 R2</a></li>
<li><a href="http://windows.microsoft.com/en-us/windows/work-folders-ipad-faq">Work Folders for iOS help</a></li>
<li><span style="color: #000000">Work Folders for Windows 7 SP1: Check out this</span> <a href="https://blogs.technet.microsoft.com/b/filecab/archive/2014/04/24/work-folders-for-windows-7.aspx">post by Jian Yan</a> <span style="color: #000000">on the Server Storage blog</span></li>
<li><span style="color: #000000">Roiy Zysman posted</span> <a href="https://blogs.technet.microsoft.com/b/filecab/archive/2014/03/06/windows-server-work-folders-resources-list.aspx">a great list of Work Folders resources in this blog</a>.</li>
<li><span style="color: #000000">See this</span> <a href="https://blogs.technet.microsoft.com/b/storageserver/archive/2014/02/26/q-amp-a-with-fabian-uhse-program-manager-for-work-folders-in-windows-server-2012-r2.aspx">Q&A With Fabian Uhse, Program Manager for Work Folders</a><span style="color: #000000"> in Windows Server 2012 R2</span></li>
<li><span style="color: #000000">Also, check out these posts about</span> <a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/07/10/work-folders-test-lab-deployment.aspx">how to setup a Work Folders test lab</a>, <a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/08/09/work-folders-certificate-management.aspx">certificate management</a>,<br />
<span style="color: #000000">and</span> <a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/11/06/work-folders-on-clusters.aspx">tips on running Work Folders on Windows Failover Clusters</a>.</li>
<li><span style="color: #000000">Using Work Folders with</span> <a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/10/15/windows-server-2012-r2-resolving-port-conflict-with-iis-websites-and-work-folders.aspx">IIS websites or the Windows Server Essentials Role (Resolving Port Conflicts)</a></li>
</ul>
<p><span style="color: #000000;font-family: Times New Roman"> </span><span style="color: #000000">Introduction and Getting Started</span></p>
<ul>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/07/09/introducing-work-folders-on-windows-server-2012-r2.aspx">Introducing Work Folders on Windows Server 2012 R2</a></li>
<li><a href="http://technet.microsoft.com/en-us/library/dn265974.aspx">Work Folders Overview</a> <span style="color: #000000">on TechNet</span></li>
<li><a href="http://technet.microsoft.com/en-us/library/dn479242.aspx">Designing a Work Folders Implementation</a> <span style="color: #000000">on TechNet</span></li>
<li><a href="http://technet.microsoft.com/en-us/library/dn528861.aspx">Deploying Work Folders</a> <span style="color: #000000">on TechNet</span></li>
<li><a href="http://windows.microsoft.com/en-us/windows-8/work-folders-faq">Work folders FAQ (Targeted for Work Folders end users)</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/storageserver/archive/2014/02/26/q-amp-a-with-fabian-uhse-program-manager-for-work-folders-in-windows-server-2012-r2.aspx">Work Folders Q&A</a></li>
<li><a href="http://technet.microsoft.com/library/dn296644.aspx">Work Folders Powershell Cmdlets</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/07/10/work-folders-test-lab-deployment.aspx">Work Folders Test Lab Deployment</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/storageserver/archive/2013/10/09/windows-storage-server-2012-r2-work-folders.aspx">Windows Storage Server 2012 R2 — Work Folders</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2014/04/24/work-folders-for-windows-7.aspx">Work Folders for Windows 7</a></li>
</ul>
<p><span style="color: #000000">Advanced Work Folders Deployment and Management</span></p>
<ul>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2014/02/24/work-folders-interoperability-with-other-file-server-technologies.aspx">Work Folders interoperability with other file server technologies</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/11/01/performance-considerations-for-large-scale-work-folders-deployments.aspx">Performance Considerations for Work Folders Deployments</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/10/15/windows-server-2012-r2-resolving-port-conflict-with-iis-websites-and-work-folders.aspx">Windows Server 2012 R2 – Resolving Port Conflict with IIS Websites and Work Folders</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/10/09/a-new-user-attribute-for-work-folders-server-url.aspx">A new user attribute for Work Folders server Url</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/08/09/work-folders-certificate-management.aspx">Work Folders Certificate Management</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/11/06/work-folders-on-clusters.aspx">Work Folders on Clusters</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2013/10/15/monitoring-windows-server-2012-r2-work-folders-deployments.aspx">Monitoring Windows Server 2012 R2 Work Folders Deployments.</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2014/03/03/deploying-work-folders-with-ad-fs-and-web-application-proxy-wap.aspx">Deploying Work Folders with AD FS and Web Application Proxy (WAP)</a></li>
<li><a href="https://blogs.technet.microsoft.com/b/filecab/archive/2014/02/28/deploying-windows-server-2012-r2-work-folders-in-a-virtual-machine-in-windows-azure.aspx">Deploying Windows Server 2012 R2 Work Folders in a Virtual Machine in Windows Azure</a></li>
<li><a href="https://blogs.technet.microsoft.com/filecab/2016/08/12/offline-files-csc-to-work-folders-migration-guide/">Offline Files (CSC) to Work Folders Migration Guide</a></li>
<li><a href="https://www.microsoft.com/en-us/cloud-platform/microsoft-intune-apps">Management with Microsoft Intune</a></li>
</ul>
<p><span style="color: #000000">Videos</span></p>
<ul>
<li><a href="https://channel9.msdn.com/Events/TechEd/Europe/2013/WCA-B214#fbid=">Windows Server Work Folders Overview: My Corporate Data on All of My Devices</a></li>
<li><a href="https://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/WCA-B332#fbid=">Windows Server Work Folders – a Deep Dive into the New Windows Server Data Sync Solution</a></li>
<li><a href="https://channel9.msdn.com/Shows/Edge/Edge-Show-65-Windows-Server-2012-R2-Work-Folders">Work Folders on Channel 9</a></li>
<li><a href="https://channel9.msdn.com/Shows/news/iPad-Workfolder-App">Work Folders iPad reveal – TechEd Europe 2014</a> <span style="color: #000000">(in German)</span></li>
<li><a href="https://channel9.msdn.com/Shows/Edge/Edge-Show-140-WorkFolders-for-iPad--exclusive-first-look-at-iPhone" title="Work Folders on the "Edge Show"">Work Folders on the “Edge Show”</a> <span style="color: #000000">(iPad + iPhone video, English)</span></li>
<li><a href="https://channel9.msdn.com/Blogs/malte_lantin/A-first-look-at-Work-Folders-for-Android-with-Program-Manager-Fabian-Uhse">Work Folders for Android on Channel 9</a></li>
</ul>
]]></content:encoded>
<wfw:commentRss>https://blogs.technet.microsoft.com/filecab/2016/11/08/work-folders-for-android-can-now-upload-files/feed/</wfw:commentRss>
<slash:comments>0</slash:comments>
</item>
<item>
<title>Survey: Internet-connected servers</title>
<link>https://blogs.technet.microsoft.com/filecab/2016/10/20/survey-internet-connected-servers/</link>
<comments>https://blogs.technet.microsoft.com/filecab/2016/10/20/survey-internet-connected-servers/#comments</comments>
<pubDate>Thu, 20 Oct 2016 17:34:58 +0000</pubDate>
<dc:creator><![CDATA[NedPyle [MSFT]]]></dc:creator>
<category><![CDATA[Survey]]></category>
<category><![CDATA[Uncategorized]]></category>
<guid isPermaLink="false">https://blogs.technet.microsoft.com/filecab/?p=7125</guid>
<description><![CDATA[Hi folks, Ned here again with another quickie engineering survey. As always, it’s anonymous, requires no registration, and should take no more than 30 seconds. We’d like to learn about your server firewalling, plus a couple drill-down questions. Survey: What percentage of your Windows Servers have Internet access? This may require some uncomfortable admissions, but it’s for a... <a aria-label="read more about Survey: Internet-connected servers" href="https://blogs.technet.microsoft.com/filecab/2016/10/20/survey-internet-connected-servers/" class="read-more">Read more</a>]]></description>
<content:encoded><![CDATA[<p>Hi folks, <a href="https://twitter.com/nerdpyle">Ned </a>here again with another quickie engineering survey. As always, it’s anonymous, requires no registration, and should take no more than 30 seconds. We’d like to learn about your server firewalling, plus a couple drill-down questions.</p>
<p><strong>Survey: <a href="https://www.surveymonkey.com/r/582SJHM">What percentage of your Windows Servers have Internet access?</a></strong></p>
<p>This may require some uncomfortable admissions, but it’s for a good cause, I promise. Honesty is always the best policy in helping us make better software for you.</p>
<p><strong>Note:</strong> for a handful of you early survey respondents, the # of servers question had the wrong limit. It’s fixed now and you can adjust up.</p>
<p>– Ned “census taker” Pyle</p>
<p> </p>
]]></content:encoded>
<wfw:commentRss>https://blogs.technet.microsoft.com/filecab/2016/10/20/survey-internet-connected-servers/feed/</wfw:commentRss>
<slash:comments>1</slash:comments>
</item>
<item>
<title>Storage Spaces Direct with Persistent Memory</title>
<link>https://blogs.technet.microsoft.com/filecab/2016/10/17/storage-spaces-direct-with-persistent-memory/</link>
<comments>https://blogs.technet.microsoft.com/filecab/2016/10/17/storage-spaces-direct-with-persistent-memory/#comments</comments>
<pubDate>Mon, 17 Oct 2016 19:32:42 +0000</pubDate>
<dc:creator><![CDATA[clausjor]]></dc:creator>
<category><![CDATA[Software Defined Storage]]></category>
<category><![CDATA[Windows Server 2016]]></category>
<category><![CDATA[Storage]]></category>
<category><![CDATA[Storage Spaces Direct]]></category>
<guid isPermaLink="false">https://blogs.technet.microsoft.com/filecab/?p=7095</guid>
<description><![CDATA[Howdy, Claus here again, this time with Dan Lovinger. At our recent Ignite conference we had some very exciting results and experiences to share around Storage Spaces Direct and Windows Server 2016. One of the more exciting ones that you may have missed was an experiment we did on a set of systems built with the... <a aria-label="read more about Storage Spaces Direct with Persistent Memory" href="https://blogs.technet.microsoft.com/filecab/2016/10/17/storage-spaces-direct-with-persistent-memory/" class="read-more">Read more</a>]]></description>
<content:encoded><![CDATA[<p>Howdy, <a href="https://twitter.com/ClausJor">Claus</a> here again, this time with Dan Lovinger.</p>
<p>At our recent Ignite conference we had some very exciting results and experiences to share around Storage Spaces Direct and Windows Server 2016. One of the more exciting ones that you may have missed was an experiment we did on a set of systems built with the help of Mellanox and Hewlett-Packard Enterprise’s NVDIMM-N technology.</p>
<p>What’s exciting about NVDIMM-N is that it is part of the first wave of new memory technologies referred to as Persistent Memory (PM), sometimes also called Storage Class Memory (SCM). A PM device offers persistent storage – it stays around after the server resets or the power drops – but it can sit on the super high speed memory bus, accessible at the granularity (bytes, not blocks!) and latencies we’re more familiar with for memory. In the case of NVDIMM-N, it is literally memory (DRAM) with the addition of natively persistent storage, usually NAND flash, plus enough backup power to capture the contents of the DRAM to that persistent storage regardless of conditions.</p>
<p>These 8 HPE ProLiant DL380 Gen9 nodes had Mellanox CX-4 100Gb adapters connected through a Mellanox Spectrum switch and <strong><em>16</em></strong> 8GiB NVDIMM-N modules along with 4 NVMe flash drives – <strong><em>each</em></strong> – for an eye-watering <strong><em>1TiB</em></strong> of NVDIMM-N around the cluster.</p>
<p>Of course, being storage nerds, what did we do: we created three-way mirrored Storage Spaces Direct virtual disks over each type of storage – NVMe and, in their block personality, the NVDIMM-N – and benched them off. Our partners in SQL Server showed it like this:<a href="https://msdnshared.blob.core.windows.net/media/2016/10/PMperf.png"><img width="590" height="733" class="aligncenter size-full wp-image-7105" alt="PMperf" src="https://msdnshared.blob.core.windows.net/media/2016/10/PMperf.png" /></a></p>
<p>What we’re seeing here are simple, low-intensity DISKSPD loads – equal in composition – which let us highlight the relative latencies of each type of storage. In the first pair of 64K IO tests we see the dramatic difference, which gets PM up to the line rate of the 100Gb network before NVMe is even 1/3<sup>rd</sup> of the way there. In the second we can see how PM neutralizes the natural latency of going all the way into a flash device – even one as efficient and high speed as our NVMe devices were – and provides reads at less than 180µs at the 99<sup>th</sup> percentile – <strong><em>99% of the read IO was over three times faster </em></strong>for three-way mirrored, two-node fault tolerant storage!</p>
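<p>For the curious, a simple low-intensity DISKSPD read test with latency capture looks roughly like the sketch below – the 64K block size matches the tests above, but the thread count, queue depth, duration and target path are illustrative, not the exact parameters behind the chart:</p>
<pre>
# Illustrative only: 64 KiB random reads, 2 threads, 4 outstanding IOs each,
# 60 seconds, caching disabled (-Sh), per-IO latency percentiles captured (-L).
.\diskspd.exe -c10G -b64K -r -w0 -t2 -o4 -d60 -Sh -L C:\ClusterStorage\Volume1\testfile.dat
</pre>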
<p>We think this is pretty exciting! Windows Server is on a journey to integrate Persistent Memory and this is one of the steps along the way. While we may do different things with it in the future, this was an interesting experiment to point to where we may be able to go (and more!).</p>
<p>Let us know what you think.</p>
<p>Claus and Dan.</p>
<p>p.s. if you’d like to see the entire SQL Server 2016 & HPE Persistent Memory presentation at Ignite (video available!), follow this link: <a href="https://myignite.microsoft.com/sessions/2767">https://myignite.microsoft.com/sessions/2767</a></p>
]]></content:encoded>
<wfw:commentRss>https://blogs.technet.microsoft.com/filecab/2016/10/17/storage-spaces-direct-with-persistent-memory/feed/</wfw:commentRss>
<slash:comments>4</slash:comments>
</item>
<item>
<title>TLS for Windows Standards-Based Storage Management (SMI-S) and System Center Virtual Machine Manager (VMM)</title>
<link>https://blogs.technet.microsoft.com/filecab/2016/10/14/tls-for-windows-standards-based-storage-management-smi-s-and-system-center-virtual-machine-manager-vmm/</link>
<comments>https://blogs.technet.microsoft.com/filecab/2016/10/14/tls-for-windows-standards-based-storage-management-smi-s-and-system-center-virtual-machine-manager-vmm/#respond</comments>
<pubDate>Fri, 14 Oct 2016 16:31:04 +0000</pubDate>
<dc:creator><![CDATA[Jeff Goldner [MSFT]]]></dc:creator>
<category><![CDATA[Uncategorized]]></category>
<category><![CDATA[SMI-S]]></category>
<category><![CDATA[SNIA]]></category>
<category><![CDATA[SSL]]></category>
<category><![CDATA[Storage Area Network (SAN)]]></category>
<category><![CDATA[Storage Management]]></category>
<category><![CDATA[TLSv1.2]]></category>
<category><![CDATA[VMM]]></category>
<category><![CDATA[Windows Server 2012 R2]]></category>
<category><![CDATA[Windows Server 2016]]></category>
<guid isPermaLink="false">https://blogs.technet.microsoft.com/filecab/?p=7075</guid>
<description><![CDATA[In a previous blog post, I discussed setting up the Windows Standards-Based Storage Management Service (referred to below as Storage Service) on Windows Server 2012 R2. For Windows Server 2016 and System Center 2016 Virtual Machine Manager, configuration is much simpler since installation of the service includes setting up the necessary self-signed certificate. We also... <a aria-label="read more about TLS for Windows Standards-Based Storage Management (SMI-S) and System Center Virtual Machine Manager (VMM)" href="https://blogs.technet.microsoft.com/filecab/2016/10/14/tls-for-windows-standards-based-storage-management-smi-s-and-system-center-virtual-machine-manager-vmm/" class="read-more">Read more</a>]]></description>
<content:encoded><![CDATA[<p>In a <a href="https://blogs.technet.microsoft.com/filecab/2013/05/22/using-indications-with-the-windows-standards-based-storage-management-service-smi-s/">previous blog post</a>, I discussed setting up the Windows Standards-Based Storage Management Service (referred to below as Storage Service) on Windows Server 2012 R2. For Windows Server 2016 and System Center 2016 Virtual Machine Manager, configuration is much simpler since installation of the service includes setting up the necessary self-signed certificate. We also allow using CA signed certificates now provided the Common Name (CN) is “MSSTRGSVC”.</p>
<p>Before I get into those changes, I want to talk about the Transport Layer Security 1.2 (TLS 1.2) protocol, which is now a required part of the Storage Management Initiative Specification (SMI-S).</p>
<h1>TLS 1.2</h1>
<p>Secure communication over HTTP (that is, HTTPS) is accomplished using the encryption capabilities of <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security">Transport Layer Security</a> (TLS), which is itself an update to the much older Secure Sockets Layer (SSL) protocol – although the mechanism is still commonly called Secure Sockets. Over the years, several vulnerabilities in SSL and TLS have been exposed, making earlier versions of the protocol insecure. TLS 1.2 is the latest version of the protocol and is defined by <a href="https://tools.ietf.org/html/rfc5246">RFC 5246</a>.</p>
<p>The Storage Networking Industry Association (SNIA) made TLS 1.2 a mandatory part of SMI-S (even <em>retroactively</em>). In 2015, the International Standards Organization (ISO) published <a href="http://www.iso.org/iso/catalogue_detail?csnumber=44404">ISO 27040:2015</a> “Information Technology – Security Techniques – Storage Security”, and this is incorporated by reference into the SMI-S protocol and pretty much all things SNIA.</p>
<p>Even though TLS 1.2 was introduced in 2008, its uptake was impeded by interoperability concerns. Adoption accelerated after several exploits (e.g., <a href="http://www.webopedia.com/TERM/S/ssl_beast.html">BEAST</a>) ushered out the older SSL 3.0 and TLS 1.0 protocols (TLS 1.1 did not see broad adoption). Microsoft Windows offered <a href="https://blogs.msdn.microsoft.com/kaushal/2011/10/02/support-for-ssltls-protocols-on-windows/">support</a> for TLS 1.2 beginning in Windows 7 and Windows Server 2008 R2. That being said, there were still a lot of interop issues at the time, and TLS 1.1 and 1.2 support was hidden behind various registry keys.</p>
<p>Now it’s 2016, and there are no more excuses for using older, proven-insecure protocols, so it’s time to update your SMI-S providers. But unfortunately, you still need to take action to fully enable TLS 1.2. There are three primary Microsoft components that are used by the Storage Service which affect HTTPS communications between providers and the service: SCHANNEL, which implements the SSL/TLS protocols; HTTP.SYS, an HTTP server used by the Storage Service to support indications; and .NET 4.x, used by Virtual Machine Manager (VMM) (not by the Storage Service itself).</p>
<p>I’m going to skip some of the details of how clients and servers negotiate TLS versions (this may or may not allow older versions) and cipher suites (the most secure suite mutually agreed upon is always selected, but refer to this <a href="https://weakdh.org/">site</a> for a recent exploit involving certain cipher suites).</p>
<h3>A sidetrack: Certificate Validation</h3>
<p>How certificates are validated varies depending on whether the certificate is self-signed or created by a trusted Certificate Authority (CA). For the most part, SMI-S will use self-signed certificates – and providers should never, ever, be exposed to the internet or another untrusted network. A quick overview:</p>
<p>A CA signed certificate contains a signature that indicates what authority signed it. The user of that certificate will be able to establish a chain of trust to a well-known CA.</p>
<p>A self-signed certificate needs to establish this trust in some other way. Typically, the self-signed certificate will need to be loaded into a local certificate store on the system that will need to validate it. See below for more on this.</p>
<p>In either case, the following conditions must be true: the certificate has not expired; the certificate has not been revoked (look up <a href="https://en.wikipedia.org/wiki/Revocation_list">Revocation List</a> for more about this); and the purpose of the certificate makes sense for its use. Additional checks include “Common Name” matching (disabled by default for the Storage Service; must not be used by providers) and key length. Note that we have seen issues where a certificate’s “valid from” time is in the future on one side because of a clock mismatch between the provider and the Storage Service. These tend to cure themselves once the start time has passed on both ends of the negotiation. When using the Windows PowerShell cmdlet <a href="https://technet.microsoft.com/en-us/library/jj884241(v=wps.630).aspx">Register-SmisProvider</a>, you will see this information.</p>
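<p>For reference, registering a provider with the Storage Service is a single cmdlet call; the provider URI below is a placeholder for your environment:</p>
<pre>
# Placeholder URI; 5989 is the conventional HTTPS port for CIM-XML providers.
$cred = Get-Credential
Register-SmisProvider -ConnectionUri https://smis-provider.contoso.com:5989 -Credential $cred
</pre>
<p>The cmdlet shows the provider’s certificate details during registration so you can confirm them before trusting the provider.</p>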
<p>In some instances, your provider may ignore one or more of the validation rules and just accept any certificate that we present. That is a useful debugging approach, but it is not very secure!</p>
<p><strong>One more detail</strong>: when provisioning certificates for the SMI-S providers, make sure they use key lengths of 1024 or 2048 bits only. 512-bit keys are no longer supported due to recent exploits, and odd-length keys won’t work either. At least I have never seen them work, even though they are technically allowed.</p>
<h1>Microsoft product support for TLS 1.2</h1>
<p>This article will discuss Windows Server and System Center Releases, and the .NET Framework. It should not be necessary to mess with registry settings that control cipher suites or SSL versions except as noted below for the .NET framework.</p>
<h2>Windows Server 2012 R2/2016</h2>
<p>Since the initial releases of these products, there have been <em>many</em> security fixes released as patches, and more than a few of them changed SCHANNEL and HTTP.SYS behavior. Rather than attempt to enumerate all of the changes, let’s just say it is essential to apply ALL security hotfixes.</p>
<p>If you are using Windows Server 2016 RTM, you also need to apply all available updates.</p>
<p>There is no .NET dependency.</p>
<h2>System Center 2012 R2 Virtual Machine Manager</h2>
<p>SC 2012 R2 VMM uses the .NET runtime library but the Storage Service does not. If you are using VMM 2012 R2, to fully support TLS 1.2, the most recent version of .NET 4.x should be installed; this is currently <a href="https://blogs.msdn.microsoft.com/dotnet/2016/08/02/announcing-net-framework-4-6-2/">.NET 4.6.2</a>. Also, update VMM to the latest Update Release.</p>
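<p>If you’re not sure which .NET 4.x release is installed on the VMM server, the Release value under the .NET setup registry key identifies it. A quick, read-only check (394802 or higher corresponds to .NET Framework 4.6.2 or later):</p>
<pre>
# Read-only check of the installed .NET Framework 4.x release on the VMM server.
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
if ($release -ge 394802) { "OK: .NET 4.6.2 or later ($release)" } else { "Older than 4.6.2 ($release)" }
</pre>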
<p>If, for some reason, you must stay on .NET 4.5.2, then a registry change will be required to turn on TLS 1.2 on the VMM Server(s) since by default, .NET 4.5.2 only enables SSL 3.0 and TLS 1.0.</p>
<p>The registry value (which changes the defaults so that TLS 1.0, TLS 1.1 and TLS 1.2 are allowed, but <em>not </em>SSL 3.0, which you should never use anyway) is:</p>
<p>HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\<strong>v4.0.30319</strong> “SchUseStrongCrypto”=dword:00000001</p>
<p> </p>
<p>You can use this PowerShell command to change the behavior (note that the registry provider expects the HKLM: drive prefix in the path):</p>
<p>Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319" -Name "SchUseStrongCrypto" -Value 1 -Type DWord -Force</p>
<p>(Note that the version number highlighted applies regardless of a particular release of .NET 4.5; do not change it!)</p>
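<p>The key shown above is in the Wow6432Node (32-bit) hive. On a 64-bit system the 64-bit .NET runtime reads the same value under HKLM\SOFTWARE\Microsoft\.NETFramework\v4.0.30319, and the usual guidance is to set SchUseStrongCrypto in both places. A sketch that does so:</p>
<pre>
# Enable strong cryptography (TLS 1.1/1.2) for both the 64-bit and 32-bit
# .NET 4.x runtimes. Only needed if you must stay on .NET 4.5.x.
$keys = 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
        'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319'
foreach ($key in $keys) {
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    Set-ItemProperty -Path $key -Name SchUseStrongCrypto -Value 1 -Type DWord
}
</pre>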
<p>This change will apply to every application using the .NET 4.x runtime on the same system. Note that Exchange 2013 does not support 4.6.x, but you shouldn’t be running VMM and Exchange on the same server anyway! Again, apply this to the VMM <em>Server </em>system or VM, which may not be the same place you are running the VMM <em>UI.</em></p>
<h2>System Center 2016 VMM</h2>
<p>VMM 2016 uses .NET 4.6.2; no changes required.</p>
<h1>Exporting the Storage Service Certificate</h1>
<p>Repeating the information from a previous blog, follow these steps on the VMM Server machine:</p>
<ul>
<li>Run MMC.EXE from an administrator command prompt.</li>
<li>Add the Certificates Snap-in using the File\Add/Remove Snap-in menu.</li>
<li>Make sure you select Computer Account when the wizard prompts you, select Next and leave Local Computer selected. Click Finish.</li>
<li>Click OK.</li>
<li>Expand Certificates (Local Computer), then Personal and select Certificates.</li>
<li>In the middle pane, you should see the msstrgsvc certificate. Right-click it, select All Tasks, then Export… That will bring up the Export Wizard.</li>
<li>Click Next to not export the private key (this might be grayed out anyway), then select a suitable format. Typically DER or Base-64 encoded are used but some vendors may support .P7B files. For EMC, select Base-64.</li>
<li>Specify a file to store the certificate. Note that Base-64 encoded certificates are text files and can be opened with Notepad or any other editing program.</li>
</ul>
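<p>If you prefer to script the export instead of using the MMC steps above, something along these lines should work; the output paths are placeholders, and it assumes the Storage Service certificate is the one whose subject contains MSSTRGSVC:</p>
<pre>
# Locate the Storage Service certificate in the local machine Personal store
# (assumes the subject contains MSSTRGSVC, as described above).
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -match 'msstrgsvc' }

# Export the public certificate only (no private key), DER encoded.
Export-Certificate -Cert $cert -FilePath C:\Temp\msstrgsvc.cer -Type CERT

# Some providers (e.g. EMC) want Base-64; certutil can re-encode the DER file.
certutil.exe -encode C:\Temp\msstrgsvc.cer C:\Temp\msstrgsvc-base64.cer
</pre>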
<p>Note: if you deployed VMM in an HA configuration, you will need to repeat these steps on each VMM Server instance. Your vendor’s SMI-S provider must support a certificate store that allows multiple certificates.</p>
<h2>Storage Providers</h2>
<p>Microsoft is actively involved in SNIA plugfests and directly with storage vendors to ensure interoperability. Some providers may require settings to ensure the proper security protocols are enabled and used, and many require updates.</p>
<h3>OpenSSL</h3>
<p>Many SMI-S providers and client applications rely on the open source project <a href="https://www.openssl.org/">OpenSSL</a>.</p>
<p>Storage vendors who use OpenSSL must absolutely keep up with the latest version(s) of this library, and it is up to them to provide you with updates. We have seen a lot of old providers that rely on the long-obsolete OpenSSL 0.9.8 releases or on unpatched later versions. Microsoft will not provide any support if your provider is out of date, so if you have been lazy and have not kept up to date, it’s time to get with the program. At the time of this writing there are three current branches of OpenSSL, each with patches to mend security flaws that crop up frequently. Consult the link above. How a provider is updated is a vendor-specific activity. (Some providers – such as EMC’s – do not use OpenSSL; check with the vendor anyway.)</p>
<h3>Importing the Storage Service certificate</h3>
<p>This step will vary greatly among providers. You will need to consult the vendor documentation for how to import the certificate into their appropriate Certificate Store. If they do not provide a mechanism to import certificates, you will not be able to use fully secure indications or mutual authentication with certificate validation.</p>
<h1>Summary</h1>
<p>To ensure you are using TLS 1.2 (and enabling indications), you must do the following:</p>
<ul>
<li>Check with your storage vendor for the latest provider updates and apply them as directed</li>
<li>Update to .NET 4.6.2 on your VMM Servers <em>or</em> enable .NET strong cryptography if you must use .NET 4.5.x for any reason</li>
<li>Install the Storage Service (installing VMM will do this for you)</li>
<li>If you are using Windows Server 2012 R2, refer back to this <a href="https://blogs.technet.microsoft.com/filecab/2013/05/22/using-indications-with-the-windows-standards-based-storage-management-service-smi-s/">previous blog post</a> to properly configure the Storage Service (skip this for Windows Server 2016)</li>
<li>Export the storage service certificate</li>
<li>Import the certificate into your provider’s certificate store (see vendor instructions)</li>
<li><em>Then </em>you can register one or more SMI-S providers, either through the Windows <a href="https://technet.microsoft.com/en-us/library/jj884241(v=wps.630).aspx">Register-SmisProvider</a> cmdlet or using VMM</li>
</ul>
<p> </p>
<p> </p>
]]></content:encoded>
<wfw:commentRss>https://blogs.technet.microsoft.com/filecab/2016/10/14/tls-for-windows-standards-based-storage-management-smi-s-and-system-center-virtual-machine-manager-vmm/feed/</wfw:commentRss>
<slash:comments>0</slash:comments>
</item>
<item>
<title>Squeezing hyper-convergence into the overhead bin, for barely $1,000/server: the story of Project Kepler-47</title>
<link>https://blogs.technet.microsoft.com/filecab/2016/10/14/kepler-47/</link>
<comments>https://blogs.technet.microsoft.com/filecab/2016/10/14/kepler-47/#comments</comments>
<pubDate>Fri, 14 Oct 2016 15:18:10 +0000</pubDate>
<dc:creator><![CDATA[Cosmos Darwin]]></dc:creator>
<category><![CDATA[SDS]]></category>
<category><![CDATA[Software Defined Storage]]></category>
<category><![CDATA[Uncategorized]]></category>
<category><![CDATA[Windows Server 2016]]></category>
<category><![CDATA[S2D]]></category>
<category><![CDATA[Storage]]></category>
<category><![CDATA[Storage Spaces Direct]]></category>
<guid isPermaLink="false">https://blogs.technet.microsoft.com/filecab/?p=6997</guid>
<description><![CDATA[The Challenge In the Windows Server team, we tend to focus on going big. Our enterprise customers and service providers are increasingly relying on Windows as the foundation of their software-defined datacenters, and needless to say, our hyperscale public cloud Azure does too. Recent big announcements like support for 24 TB of memory per server... <a aria-label="read more about Squeezing hyper-convergence into the overhead bin, for barely $1,000/server: the story of Project Kepler-47" href="https://blogs.technet.microsoft.com/filecab/2016/10/14/kepler-47/" class="read-more">Read more</a>]]></description>
<content:encoded><![CDATA[<p><div id="attachment_7045" style="width: 889px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/10/Carryon-1024x597.png" alt="This tiny two-server cluster packs powerful compute and spacious storage into one cubic foot." width="879" height="512" class="wp-image-7045 size-large" /><p class="wp-caption-text">This tiny two-server cluster packs powerful compute and spacious storage into one cubic foot.</p></div></p>
<p><strong>The Challenge</strong></p>
<p>In the Windows Server team, we tend to focus on going <em>big. </em>Our enterprise customers and service providers are increasingly relying on Windows as the foundation of their software-defined datacenters, and needless to say, our hyperscale public cloud Azure does too. Recent <em>big </em>announcements like support for <a href="https://blogs.technet.microsoft.com/windowsserver/2016/08/25/windows-server-scalability-and-more/">24 TB of memory per server</a> with Hyper-V, or <a href="https://www.youtube.com/watch?v=0LviCzsudGY&t=28m00s">6+ million IOPS per cluster</a> with Storage Spaces Direct, or delivering <a href="https://youtu.be/6IFmjMr0Oao?t=45m00s">50 Gb/s of throughput per virtual machine</a> with Software-Defined Networking are the proof.</p>
<p>But what can these same features in Windows Server do for smaller deployments? Those known in the IT industry as Remote-Office / Branch-Office (“ROBO”) – think retail stores, bank branches, private practices, remote industrial or construction sites, and more. After all, their basic requirement isn’t so different – they need high availability for mission-critical apps, with rock-solid storage for those apps. And generally, they need it to be <em>local, </em>so they can operate – process transactions, or look up a patient’s records – even when their Internet connection is flaky or non-existent.</p>
<p>For these deployments, cost is paramount. Major retail chains operate thousands, or tens of thousands, of locations. This multiplier makes IT budgets <em>extremely</em> sensitive to the per-unit cost of each system. The simplicity and savings of hyper-convergence – using the same servers to provide compute <em>and storage </em>– present an attractive solution.</p>
<p>With this in mind, under the auspices of <em>Project Kepler-47</em>, we set about going <em>small</em>…</p>
<h3 style="text-align: center"></h3>
<p> </p>
<p><strong>Meet Kepler-47</strong></p>
<p>The resulting prototype – and it’s just that, a <em>prototype </em>– was revealed at Microsoft Ignite 2016.</p>
<p><div id="attachment_7055" style="width: 889px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/10/Kepler-47-1024x768.jpg" alt="Kepler-47 on expo floor at Microsoft Ignite 2016 in Atlanta." width="879" height="659" class="size-large wp-image-7055" /><p class="wp-caption-text">Kepler-47 on expo floor at Microsoft Ignite 2016 in Atlanta.</p></div></p>
<p>In our configuration, this tiny two-server cluster provides over 20 TB of available storage capacity, and over 50 GB of available memory for a handful of mid-sized virtual machines. The storage is flash-accelerated, the chips are Intel Xeon, and the memory is error-correcting DDR4 – no compromises. The storage is mirrored to tolerate hardware failures – drive or server – with continuous availability. And if one server goes down or needs maintenance, virtual machines live migrate to the other server with no appreciable downtime.</p>
<p>(Did we mention it also has not one, but <em>two</em> 3.5mm headphone jacks? <a href="http://www.theverge.com/2016/9/7/12823596/apple-iphone-7-no-headphone-jack-lightning-earbuds">Hah</a>!)</p>
<p><div id="attachment_7005" style="width: 889px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/10/Size-1024x390.png" alt="Kepler-47 is 45% smaller than standard 2U rack servers." width="879" height="335" class="wp-image-7005 size-large" /><p class="wp-caption-text">Kepler-47 is 45% smaller than standard 2U rack servers.</p></div></p>
<p>In terms of size, Kepler-47 is barely one cubic foot – 45% smaller than standard 2U rack servers. For perspective, this means both servers fit readily in one carry-on bag in the overhead bin!</p>
<p>We bought (almost) every part online at retail prices. The total cost for each server was just $1,101. This excludes the drives, which we salvaged from around the office, and which could vary wildly in price depending on your needs.</p>
<p><div id="attachment_7015" style="width: 850px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/10/Pricetag.png" alt="Each Kepler-47 server cost just $1,101 retail, excluding drives." width="840" height="540" class="wp-image-7015 size-full" /><p class="wp-caption-text">Each Kepler-47 server cost just $1,101 retail, excluding drives.</p></div></p>
<p> </p>
<p><strong>Technology</strong></p>
<p>Kepler-47 is comprised of two servers, each running <a href="https://www.microsoft.com/en-us/cloud-platform/windows-server">Windows Server 2016 Datacenter</a>. The servers form one hyper-converged <a href="https://technet.microsoft.com/en-us/windows-server-docs/failover-clustering/failover-clustering-overview">Failover Cluster</a>, with the new <a href="https://technet.microsoft.com/en-us/windows-server-docs/failover-clustering/deploy-cloud-witness">Cloud Witness</a> as the low-cost, low-footprint quorum technology. The cluster provides high availability to <a href="https://technet.microsoft.com/en-us/windows-server-docs/compute/hyper-v/hyper-v-on-windows-server">Hyper-V</a> virtual machines (which may also run Windows, at no additional licensing cost), and <a href="https://technet.microsoft.com/en-us/windows-server-docs/storage/storage-spaces/storage-spaces-direct-overview">Storage Spaces Direct</a> provides fast and fault tolerant storage using just the local drives.</p>
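<p>The exact build-out commands aren’t covered in this post, but a two-node deployment like Kepler-47 follows the standard Windows Server 2016 pattern. The sketch below is illustrative only – node names, the volume size, and the Azure storage account details for the Cloud Witness are placeholders:</p>
<pre>
# Hypothetical names throughout; run from one of the two nodes after the
# Failover Clustering feature is installed on both.
New-Cluster -Name Kepler47 -Node Kepler47-A, Kepler47-B -NoStorage

# Cloud Witness quorum, backed by an Azure storage account (placeholders).
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "PasteStorageAccountKeyHere"

# Pool the local drives, enable Storage Spaces Direct, and carve out a
# mirrored, ReFS-formatted Cluster Shared Volume for the virtual machines.
Enable-ClusterStorageSpacesDirect
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMs" -FileSystem CSVFS_ReFS -Size 2TB
</pre>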
<p>Additional fault tolerance can be achieved using new features such as <a href="https://technet.microsoft.com/en-us/library/mt126104(v=ws.12).aspx">Storage Replica</a> with Azure Site Recovery.</p>
<p>Notably, Kepler-47 does not use traditional Ethernet networking between the servers, eliminating the need for costly high-speed network adapters and switches. Instead, it uses Intel Thunderbolt™ 3 over a USB Type-C connector, which provides up to 20 Gb/s (or up to 40 Gb/s when utilizing display and data together!) – plenty for replicating storage and live migrating virtual machines.</p>
<p>To pull this off, we partnered with our friends at Intel, who furnished us with pre-release PCIe add-in-cards for Thunderbolt™ 3 and a proof-of-concept driver.</p>
<p><div id="attachment_7025" style="width: 889px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/10/Thunderbolt-1024x404.png" alt="Kepler-47 does not use traditional Ethernet between the servers; instead, it uses Intel Thunderbolt™ 3." width="879" height="347" class="wp-image-7025 size-large" /><p class="wp-caption-text">Kepler-47 does not use traditional Ethernet between the servers; instead, it uses Intel Thunderbolt™ 3.</p></div></p>
<p>To our delight, it worked like a charm – here’s the <em>Networks</em> view in Failover Cluster Manager. Thanks, Intel!</p>
<p><div id="attachment_7036" style="width: 889px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/10/Screenshot-Cropped-1024x498.png" alt="The Networks view in Failover Cluster Manager, showing Thunderbolt™ Networking." width="879" height="427" class="size-large wp-image-7036" /><p class="wp-caption-text">The Networks view in Failover Cluster Manager, showing Thunderbolt™ Networking.</p></div></p>
<p>While Thunderbolt™ 3 is already in widespread use in laptops and other devices, this kind of server application is new, and it’s one of the main reasons Kepler-47 is <em>strictly </em>a prototype. It also boots from USB 3 DOM, which isn’t yet supported, and has neither a host-bus adapter (HBA) nor a SAS expander, both of which are currently required for Storage Spaces Direct to leverage SCSI Enclosure Services (SES) for slot identification. However, it otherwise passes all our validation and testing and, as far as we can tell, works flawlessly.</p>
<p>(In case you missed it, support for Storage Spaces Direct clusters with just two servers was announced at Ignite!)</p>
<p> </p>
<p><strong>Parts List</strong></p>
<p>Ok, now for the juicy details. Since Ignite, we have been asked repeatedly what parts we used. Here you go:</p>
<p><div id="attachment_7035" style="width: 889px" class="wp-caption aligncenter"><img src="https://msdnshared.blob.core.windows.net/media/2016/10/Parts-1024x576.png" alt="The key parts of Kepler-47." width="879" height="494" class="wp-image-7035 size-large" /><p class="wp-caption-text">The key parts of Kepler-47.</p></div></p>
<table>
<tbody>
<tr>
<td width="173"><em>Function</em></td>
<td width="402"><em>Product</em></td>
<td width="96"><em>View Online</em></td>
<td width="96"><em>Cost</em></td>
</tr>
<tr>
<td width="173"><strong>Motherboard</strong></td>
<td width="402">ASRock C236 WSI</td>
<td width="96"><a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16813599009&cm_re=asrock_c236_wsi-_-13-599-009-_-Product">Link</a></td>
<td width="96">$199.99</td>
</tr>
<tr>
<td width="173"><strong>CPU</strong></td>
<td width="402">Intel Xeon E3-1235L v5 25w 4C4T 2.0Ghz</td>
<td width="96"><a href="http://www.serversdirect.com/Components/CPUs_and_Processors/id-CP9160/Intel_Xeon_E3-1235Lv5_2GHz_Quad-core_8M_Cache_25W_LowVoltage_HD_Graphics__Quick_Sync">Link</a></td>
<td width="96">$283.00</td>
</tr>
<tr>
<td width="173"><strong>Memory</strong></td>
<td width="402">32 GB (2 x 16 GB) Black Diamond ECC DDR4-2133</td>
<td width="96"><a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16820014107">Link</a></td>
<td width="96">$208.99</td>
</tr>
<tr>
<td width="173"><strong>Boot Device</strong></td>
<td width="402">Innodisk 32 GB USB 3 DOM</td>
<td width="96"><a href="http://www.nextwarehouse.com/item/?1679318">Link</a></td>
<td width="96">$29.33</td>
</tr>
<tr>
<td width="173"><strong>Storage (Cache) </strong></td>
<td width="402">2 x 200 GB Intel S3700 2.5” SATA SSD</td>
<td width="96"><a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16822106011&cm_re=intel_s3700_200gb-_-22-106-011-_-Product">Link</a></td>
<td width="96">–</td>
</tr>
<tr>
<td width="173"><strong>Storage (Capacity)</strong></td>
<td width="402">6 x 4 TB Toshiba MG03ACA400 3.5” SATA HDD</td>
<td width="96"><a href="http://www.newegg.com/Product/Product.aspx?Item=9SIAAP63ZS6252&cm_re=MG03ACA400-_-09Z-01S5-00008-_-Product">Link</a></td>
<td width="96">–</td>
</tr>
<tr>
<td width="173"><strong>Networking (Adapter)</strong></td>
<td width="402">Intel Thunderbolt™ 3 JHL6540 PCIe Gen 3 x4 Controller Chip</td>
<td width="96"><a href="http://ark.intel.com/products/94031/Intel-JHL6540-Thunderbolt-3-Controller">Link</a></td>
<td width="96">–</td>
</tr>
<tr>
<td width="173"><strong>Networking (Cable)</strong></td>
<td width="402">Cable Matters 0.5m 20 Gb/s USB Type-C Thunderbolt™ 3</td>
<td width="96"><a href="https://www.amazon.com/USB-IF-Certified-Cable-Matters-Thunderbolt/dp/B01AS8U7GU">Link</a></td>
<td width="96">$17.99*</td>
</tr>
<tr>
<td width="173"><strong>SATA Cables</strong></td>
<td width="402">8 x SuperMicro CBL-0481L</td>
<td width="96"><a href="http://store.supermicro.com/cable/sas-sata/81cm-sata-cbl-0481l.html">Link</a></td>
<td width="96">$13.20</td>
</tr>
<tr>
<td width="173"><strong>Chassis</strong></td>
<td width="402">U-NAS NSC-800</td>
<td width="96"><a href="http://www.u-nas.com/xcart/product.php?productid=17617">Link</a></td>
<td width="96">$199.99</td>
</tr>
<tr>
<td width="173"><strong>Power Supply</strong></td>
<td width="402">ASPower 400W Super Quiet 1U</td>
<td width="96"><a href="http://www.u-nas.com/xcart/product.php?productid=17624">Link</a></td>
<td width="96">$119.99</td>
</tr>
<tr>
<td width="173"><strong>Heatsink</strong></td>
<td width="402">Dynatron K2 75mm 2 Ball CPU Fan</td>
<td width="96"><a href="http://www.newegg.com/Product/Product.aspx?Item=N82E16835114115">Link</a></td>
<td width="96">$34.99</td>
</tr>
<tr>
<td width="173"><strong>Thermal Pads</strong></td>
<td width="402">StarTech Heatsink Thermal Transfer Pads (Set of 5)</td>
<td width="96"><a href="http://www.newegg.com/Product/Product.aspx?Item=9SIA0ZX44E7726&cm_re=startech_thermal-_-35-230-030-_-Product">Link</a></td>
<td width="96">$6.28*</td>
</tr>
</tbody>
</table>
<p>* Just one needed for both servers.</p>
<p> </p>
<p><strong>Practical Notes</strong></p>
<p>The ASRock C236 WSI motherboard is the only one we could locate that is mini-ITX form factor, has eight SATA ports, and supports server-class processors and error-correcting memory with SATA hot-plug. The E3-1235L v5 is just 25 watts, which helps keep Kepler-47 very quiet. (Dan has been running it <em>literally </em>on his desk since last month, and he hasn’t complained yet.)</p>
<p>Having spent all our SATA ports on the storage, we needed to boot from something else. We were delighted to spot the USB 3 header on the motherboard.</p>
<p>The U-NAS NSC-800 chassis is not the cheapest option. You could go cheaper. However, it features an aluminum outer casing, steel frame, and rubberized drive trays – the quality appealed to us.</p>
<p>We actually had to order two sets of SATA cables – the first were not malleable enough to weave their way around the tight corners from the board to the drive bays in our chassis. The second set we got are flat and 30 AWG, and they work great.</p>
<p>Likewise, we had to confront physical limitations on the heatsink – the fan we use is barely 2.7 cm tall, to fit in the chassis.</p>
<p>We salvaged the drives we used, for cache and capacity, from other systems in our test lab. In the case of the SSDs, they’re several years old and discontinued, so it’s not clear how to accurately price them. In the future, we imagine ROBO deployments of Storage Spaces Direct will vary tremendously in the drives they use – we chose 4 TB HDDs, but some folks may only need 1 TB, or may want 10 TB. This is why we aren’t focusing on the price of the drives themselves – it’s really up to you.</p>
<p>Finally, the Thunderbolt™ 3 controller chip in PCIe add-in-card form factor was pre-release, for development purposes only. It was graciously provided to us by our friends at Intel. They have cited a price-tag of $8.55 for the chip, but not made us pay yet. <img src="https://s.w.org/images/core/emoji/2/72x72/1f642.png" alt="" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p> </p>
<p><strong>Takeaway</strong></p>
<p>With <em>Project Kepler-47</em>, we used Storage Spaces Direct and Windows Server 2016 to build an unprecedentedly low-cost high availability solution to meet remote-office, branch-office needs. It delivers the simplicity and savings of hyper-convergence, with compute and storage in a single two-server cluster, with next to no networking gear, that is <em>very </em>budget friendly.</p>
<p>Are you or is your organization interested in this type of solution? Let us know in the comments!</p>
<p> </p>
<p>// Cosmos Darwin (<a href="https://twitter.com/CosmosDarwin">@CosmosDarwin</a>), Dan Lovinger, and Claus Joergensen (<a href="https://twitter.com/ClausJor">@ClausJor</a>)</p>
]]></content:encoded>
<wfw:commentRss>https://blogs.technet.microsoft.com/filecab/2016/10/14/kepler-47/feed/</wfw:commentRss>
<slash:comments>21</slash:comments>
</item>
<item>
<title>Fixed: Work Folders does not work on iOS 10 when using Digest authentication</title>
<link>https://blogs.technet.microsoft.com/filecab/2016/10/10/work-folders-does-not-work-on-ios-10-when-using-digest-authentication/</link>
<comments>https://blogs.technet.microsoft.com/filecab/2016/10/10/work-folders-does-not-work-on-ios-10-when-using-digest-authentication/#comments</comments>
<pubDate>Mon, 10 Oct 2016 19:21:37 +0000</pubDate>
<dc:creator><![CDATA[Jeff Patterson - MSFT]]></dc:creator>
<category><![CDATA[Information Worker]]></category>
<category><![CDATA[Work Folders]]></category>
<guid isPermaLink="false">https://blogs.technet.microsoft.com/filecab/?p=6965</guid>
<description><![CDATA[Hi all, I’m Jeff Patterson, Program Manager for Work Folders. I wanted to let you know that Digest authentication does not work on iOS 10. Please review the issue details below if you’re currently using the Work Folders iOS client in your environment. Symptom After upgrading to iOS 10, Work Folders fails with the following... <a aria-label="read more about Fixed: Work Folders does not work on iOS 10 when using Digest authentication" href="https://blogs.technet.microsoft.com/filecab/2016/10/10/work-folders-does-not-work-on-ios-10-when-using-digest-authentication/" class="read-more">Read more</a>]]></description>
<content:encoded><![CDATA[<p><span>Hi all,</span></p>
<p><span>I’m Jeff Patterson, Program Manager for Work Folders. </span></p>
<p><span>I wanted to let you know that Digest authentication does not work on iOS 10. Please review the issue details below if you’re currently using the Work Folders iOS client in your environment. </span></p>
<h5><strong><span>Symptom</span></strong></h5>
<p>After upgrading to iOS 10, Work Folders fails with the following error after user credentials are provided:</p>
<p><span style="color: #ff0000">Check your user name and password</span></p>
<h5><strong><span>Cause</span></strong></h5>
<p><span>There’s a bug in iOS 10 which causes Digest authentication to fail.</span></p>
<h5><strong><span>Status</span></strong></h5>
<p><span style="color: #000000">This issue is fixed in iOS 10.2 (released December 12th).</span></p>
<p><span>Thanks,</span></p>
<p><span>Jeff</span></p>
]]></content:encoded>
<wfw:commentRss>https://blogs.technet.microsoft.com/filecab/2016/10/10/work-folders-does-not-work-on-ios-10-when-using-digest-authentication/feed/</wfw:commentRss>
<slash:comments>7</slash:comments>
</item>
<item>
<title>All The Windows Server 2016 sessions at Ignite</title>
<link>https://blogs.technet.microsoft.com/filecab/2016/09/22/all-the-windows-server-2016-sessions-at-ignite/</link>
<comments>https://blogs.technet.microsoft.com/filecab/2016/09/22/all-the-windows-server-2016-sessions-at-ignite/#respond</comments>
<pubDate>Fri, 23 Sep 2016 03:58:46 +0000</pubDate>
<dc:creator><![CDATA[NedPyle [MSFT]]]></dc:creator>
<category><![CDATA[Uncategorized]]></category>
<guid isPermaLink="false">https://blogs.technet.microsoft.com/filecab/?p=6945</guid>
<description><![CDATA[Hi folks, Ned here again. If you were smart/cool/lucky enough to land some Microsoft Ignite tickets for next week, here’s the nicely organized list of all the Windows Server 2016 sessions. Color-coding, filters, it’s very sharp. aka.ms/ws2016ignite Naturally, the killer session you should register for is Drill into Storage Replica in Windows Server 2016. I hear the... <a aria-label="read more about All The Windows Server 2016 sessions at Ignite" href="https://blogs.technet.microsoft.com/filecab/2016/09/22/all-the-windows-server-2016-sessions-at-ignite/" class="read-more">Read more</a>]]></description>
<content:encoded><![CDATA[<p>Hi folks, Ned here again. If you were smart/cool/lucky enough to land some Microsoft Ignite tickets for next week, here’s the nicely organized list of all the Windows Server 2016 sessions. Color-coding, filters, it’s very sharp.</p>
<h3 style="padding-left: 30px"><strong><a href="http://aka.ms/ws2016ignite">aka.ms/ws2016ignite</a></strong></h3>
<p><a href="https://msdnshared.blob.core.windows.net/media/2016/09/Capture17.png"><img width="879" height="614" class="alignnone wp-image-6955 size-large" alt="Capture" src="https://msdnshared.blob.core.windows.net/media/2016/09/Capture17-1024x715.png" /></a></p>
<p>Naturally, the killer session you should register for is <a href="https://myignite.microsoft.com/sessions/2689">Drill into Storage Replica in Windows Server 2016</a>. I hear the presenter kicks ass and has swag for attendees.</p>
<p>– Ned “not so humble brag” Pyle</p>
]]></content:encoded>
<wfw:commentRss>https://blogs.technet.microsoft.com/filecab/2016/09/22/all-the-windows-server-2016-sessions-at-ignite/feed/</wfw:commentRss>
<slash:comments>0</slash:comments>
</item>
</channel>
</rss>