Questions? Check out the .NET Standard FAQ.
In my last post, I talked about how we want to make porting to .NET Core easier. In this post, I’ll focus on how we’re making this plan a reality with .NET Standard. We’ll cover which APIs we plan to include, how cross-framework compatibility will work, and what all of this means for .NET Core.
If you’re interested in details, this post is for you. But don’t worry if you don’t have time or you’re not interested in details: you can just read the TL;DR section.
For the impatient: TL;DR
.NET Standard solves the code sharing problem for .NET developers across all platforms by bringing all the APIs that you expect and love across the environments that you need: desktop applications, mobile apps & games, and cloud services:
- .NET Standard is a set of APIs that all .NET platforms have to implement. This unifies the .NET platforms and prevents future fragmentation.
- .NET Standard 2.0 will be implemented by .NET Framework, .NET Core, and Xamarin. For .NET Core, this will add many of the existing APIs that have been requested.
- .NET Standard 2.0 includes a compatibility shim for .NET Framework binaries, significantly increasing the set of libraries that you can reference from your .NET Standard libraries.
- .NET Standard will replace Portable Class Libraries (PCLs) as the tooling story for building multi-platform .NET libraries.
- You can see the .NET Standard API definition in the dotnet/standard repo on GitHub.
Why do we need a standard?
As explained in detail in the post Introducing .NET Core, the .NET platform was forked quite a bit over the years. On the one hand, this is actually a really good thing: it allowed tailoring .NET to fit needs that a single platform couldn't have served. For example, the .NET Compact Framework was created to fit into the (fairly) restrictive footprint of phones in the early 2000s. The same is true today: Unity (a fork of Mono) runs on more than 20 platforms. Being able to fork and customize is an important capability for any technology that requires reach.
But on the other hand, this forking poses a massive problem for developers writing code for multiple .NET platforms because there isn’t a unified class library to target:
There are currently three major flavors of .NET, which means you have to master three different base class libraries in order to write code that works across all of them. Since the industry is much more diverse now than when .NET was originally created, it's safe to assume that we're not done creating new .NET platforms: either Microsoft or someone else will build new flavors of .NET in order to support new operating systems or to tailor it for specific device capabilities.
This is where the .NET Standard comes in:
For developers, this means they only have to master one base class library. Libraries targeting .NET Standard will be able to run on all .NET platforms. And platform providers don’t have to guess which APIs they need to offer in order to consume the libraries available on NuGet.
Applications. In the context of applications you don’t use .NET Standard directly. However, you still benefit indirectly. First of all, .NET Standard makes sure that all .NET platforms share the same API shape for the base class library. Once you learn how to use it in your desktop application you know how to use it in your mobile application or your cloud service. Secondly, with .NET Standard most class libraries will become available everywhere, which means the consistency at the base layer will also apply to the larger .NET library ecosystem.
Portable Class Libraries. Let’s contrast this with how Portable Class Libraries (PCL) work today. With PCLs, you select the platforms you want to run on and the tooling presents you with the resulting API set you can use. So while the tooling helps you to produce binaries that work on multiple platforms, it still forces you to think about different base class libraries. With .NET Standard you have a single base class library. Everything in it will be supported across all .NET platforms — current ones as well as future ones. Another key aspect is that the API availability in .NET Standard is very predictable: higher version equals more APIs. With PCLs, that’s not necessarily the case: the set of available APIs is the result of the intersection between the selected platforms, which doesn’t always produce an API surface you can easily predict.
Consistency in APIs. If you compare .NET Framework, .NET Core, and Xamarin/Mono, you'll notice that .NET Core offers the smallest API surface (excluding OS-specific APIs). The first inconsistency is the drastic difference in the availability of foundational APIs (such as networking and cryptography APIs). The second is the difference in the API shape of core pieces, especially in reflection. Both inconsistencies are the primary reason why porting code to .NET Core is much harder than it should be. By creating the .NET Standard we're codifying the requirement of having consistent APIs across all .NET platforms, and this includes availability as well as the shape of the APIs.
Versioning and Tooling. As I mentioned in Introducing .NET Core, our goal with .NET Core was to lay the foundation for a portable .NET platform that can unify APIs in shape and implementation. We intended it to be the next version of portable class libraries. Unfortunately, it didn't result in a great tooling experience. Since our goal was to represent any .NET platform, we had to break it up into smaller NuGet packages. This works reasonably well if all these components can be deployed with the application, because you can update them independently. However, when you target an abstract specification, such as PCLs or the .NET Standard, this story doesn't work so well because there is a very specific combination of versions that will allow you to run on the right set of platforms. In order to avoid that issue, we've defined .NET Standard as a single NuGet package. Since it only represents the set of required APIs, there is no need to break it up any further, because all .NET platforms have to support it in its entirety anyway. The only important dimension is its version, which acts like an API level: the higher the version, the more APIs you have; the lower the version, the more .NET platforms have already implemented it.
To summarize, we need .NET Standard for two reasons:
- Driving force for consistency. We want to have an agreed upon set of required APIs that all .NET platforms have to implement in order to gain access to the .NET library ecosystem.
- Foundation for great cross-platform tooling. We want a simplified tooling experience that allows you to target the commonality of all .NET platforms by choosing a single version number.
What’s new in .NET Standard 2.0?
When we shipped .NET Core 1.0, we also introduced .NET Standard. There are multiple versions of the .NET Standard in order to represent the API availability across all current platforms. The following table shows which version of an existing platform is compatible with a given version of .NET Standard:
| .NET Platform | 1.0 | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 | 1.6 | 2.0 |
|---|---|---|---|---|---|---|---|---|
| .NET Core | → | → | → | → | → | → | 1.0 | vNext |
| .NET Framework | → | 4.5 | 4.5.1 | 4.6 | 4.6.1 | 4.6.2 | vNext | 4.6.1 |
| Xamarin.iOS | → | → | → | → | → | → | → | vNext |
| Xamarin.Android | → | → | → | → | → | → | → | vNext |
| Universal Windows Platform | → | → | → | → | 10.0 | → | → | vNext |
| Windows | → | 8.0 | 8.1 | | | | | |
| Windows Phone | → | → | 8.1 | | | | | |
| Windows Phone Silverlight | 8.0 | | | | | | | |
The arrows indicate that the platform supports a higher version of .NET Standard. For instance, .NET Core 1.0 supports the .NET Standard version 1.6, which is why there are arrows pointing to the right for the lower versions 1.0 – 1.5.
You can use this table to understand what the highest version of .NET Standard is that you can target, based on which .NET platforms you intend to run on. For instance, if you want to run on .NET Framework 4.5 and .NET Core 1.0, you can at most target .NET Standard 1.1.
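With today's project.json-based tooling, targeting that version comes down to picking the corresponding target framework moniker. Here is a rough sketch of what that looks like; the exact project file format may change with the upcoming tooling, so treat this as an illustration rather than the final shape:

```json
{
  "version": "1.0.0",
  "dependencies": {
    "NETStandard.Library": "1.6.0"
  },
  "frameworks": {
    "netstandard1.1": { }
  }
}
```

The netstandard1.1 moniker encodes the choice from the table above, while the NETStandard.Library metapackage brings in the reference assemblies for that version.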
You can also see which platforms will support .NET Standard 2.0:
- We’ll ship updated versions of .NET Core, Xamarin, and UWP that will add all the necessary APIs for supporting .NET Standard 2.0.
- .NET Framework 4.6.1 already implements all the APIs that are part of .NET Standard 2.0. Note that this version appears twice; I’ll cover later why that is and how it works.
.NET Standard is also compatible with Portable Class Libraries. The mapping from PCL profiles to .NET Standard versions is listed in our documentation.
From a library targeting .NET Standard you’ll be able to reference two kinds of other libraries:
- .NET Standard, if their version is lower or equal to the version you’re targeting.
- Portable Class Libraries, if their profile can be mapped to a .NET Standard version and that version is lower or equal to the version you’re targeting.
Graphically, this looks as follows:
Unfortunately, the adoption of PCLs and .NET Standard on NuGet isn’t as high as it would need to be in order to be a friction free experience. This is how many times a given target occurs in packages on NuGet.org:
| Target | Occurrences |
|---|---|
| .NET Framework | 46,894 |
| .NET Standard | 1,886 |
| Portable | 4,501 |
As you can see, it’s quite clear that the vast majority of class libraries on NuGet are targeting .NET Framework. However, we know that a large number of these libraries are only using APIs we’ll expose in .NET Standard 2.0.
In .NET Standard 2.0, we’ll make it possible for libraries that target .NET Standard to also reference existing .NET Framework binaries through a compatibility shim:
Of course, this will only work for cases where the .NET Framework library uses APIs that are available for .NET Standard. That’s why this isn’t the preferred way of building libraries you intend to use across different .NET platforms. However, this compatibility shim provides a bridge that enables you to convert your libraries to .NET Standard without having to give up referencing existing libraries that haven’t been converted yet.
If you want to learn more about how the compatibility shim works, take a look at the specification for .NET Standard 2.0.
.NET Standard 2.0 breaking change: adding .NET Framework 4.6.1 compatibility
A standard is only as useful as the platforms that implement it. At the same time, we want to make the .NET Standard meaningful and useful in and of itself, because that's the API surface that is available to libraries targeting the standard:
- .NET Framework. .NET Framework 4.6.1 has the highest adoption, which makes it the most attractive version of .NET Framework to target. Hence, we want to make sure that it can implement .NET Standard 2.0.
- .NET Core. As mentioned above, .NET Core has a much smaller API set than .NET Framework or Xamarin. Supporting .NET Standard 2.0 means that we need to extend the surface area significantly. Since .NET Core doesn’t ship with the OS but with the app, supporting .NET Standard 2.0 only requires updates to the SDK and our NuGet packages.
- Xamarin. Xamarin already supports most of the APIs that are part of .NET Standard. Updating works similarly to .NET Core — we hope we can update Xamarin to include all APIs that are currently missing. In fact, the majority of them were already added to the stable Cycle 8 release/Mono 4.6.0.
The table listed earlier shows which version of .NET Framework supports which version of .NET Standard; here is the relevant excerpt:

| .NET Platform | 1.4 | 1.5 | 1.6 | 2.0 |
|---|---|---|---|---|
| .NET Framework | 4.6.1 | 4.6.2 | vNext | 4.6.1 |
Following normal versioning rules, one would expect .NET Standard 2.0 to be supported only by a newer version of .NET Framework, given that the latest version of .NET Framework (4.6.2) only supports .NET Standard 1.5. This would mean that libraries compiled against .NET Standard 2.0 wouldn't run on the vast majority of .NET Framework installations.
In order to allow .NET Framework 4.6.1 to support .NET Standard 2.0, we had to remove all the APIs from .NET Standard that were introduced in .NET Standard 1.5 and 1.6.
You may wonder what the impact of that decision is. We ran an analysis of all packages on NuGet.org that target .NET Standard 1.5 or later and use any of these APIs. At the time of this writing we only found six non-Microsoft owned packages that do. We’ll reach out to those package owners and work with them to mitigate the issue. From looking at their usages, it’s clear that their calls can be replaced with APIs that are coming with .NET Standard 2.0.
In order for these package owners to support .NET Standard 1.5, 1.6, and 2.0, they will need to cross-compile to target these versions specifically. Alternatively, they can choose to target .NET Standard 2.0 and higher, given the broad set of platforms that support it.
What’s in .NET Standard?
In order to decide which APIs will be part of .NET Standard we used the following process:
- Input. We start with all the APIs that are available in both .NET Framework and in Xamarin.
- Assessment. We classify all these APIs into one of two buckets:
- Required. APIs that we want all platforms to provide and that we believe can be implemented cross-platform are labeled as required.
- Optional. APIs that are platform-specific or part of legacy technologies are labeled as optional.
Optional APIs aren’t part of .NET Standard but are available as separate NuGet packages. We try to build these as libraries targeting .NET Standard so that their implementation can be consumed from any platform, but that might not always be feasible for platform-specific APIs (e.g. the Windows registry).
In order to make some APIs optional we may have to remove other APIs that are part of the required API set. For example, we decided that AppDomain is in .NET Standard while Code Access Security (CAS) is a legacy component. This requires us to remove all members from AppDomain that use types that are part of CAS, such as overloads on CreateDomain that accept Evidence.
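To make that trade-off concrete, here is a minimal sketch (my own illustration, not code from the standard) of the kind of AppDomain usage that stays in the required set, while the Evidence-based CreateDomain overloads are the parts that go away with CAS:

```csharp
using System;

static class StartupDiagnostics
{
    public static void Hook()
    {
        // Cross-platform friendly members: enumerate the assemblies that are loaded...
        foreach (var assembly in AppDomain.CurrentDomain.GetAssemblies())
            Console.WriteLine(assembly.FullName);

        // ...and register a last-chance handler for unhandled exceptions.
        AppDomain.CurrentDomain.UnhandledException +=
            (sender, e) => Console.Error.WriteLine(e.ExceptionObject);

        // By contrast, the CreateDomain overloads that take Evidence belong to
        // Code Access Security and are the members being removed.
    }
}
```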
The .NET Standard API set, as well as our proposal for optional APIs, will be reviewed by the .NET Standard’s review body.
Here is the high-level summary of the API surface of .NET Standard 2.0:
If you want to look at the specific API set of .NET Standard 2.0, you can take a look at the .NET Standard GitHub repository. Please note that .NET Standard 2.0 is a work in progress, which means some APIs might be added, while some might be removed.
Can I still use platform-specific APIs?
One of the biggest challenges in creating an experience for multi-platform class libraries is avoiding being limited to the lowest common denominator while also making sure you don’t accidentally create libraries that are much less portable than you intend.
In PCLs we’ve solved the problem by having multiple profiles, each representing the intersection of a set of platforms. The benefit is that this allows you to max out the API surface between a set of targets. The .NET Standard represents the set of APIs that all .NET platforms have to implement.
This brings up the question of how we model APIs that cannot be implemented on all platforms:
- Runtime-specific APIs. For example, the ability to generate and run code on the fly using reflection emit. This cannot work on .NET platforms that do not have a JIT compiler, such as .NET Native on UWP or Xamarin’s iOS tool chain.
- Operating system specific APIs. In .NET we’ve exposed many APIs from Win32 in order to make them easier to consume. A good example is the Windows registry. The implementation depends on the underlying Win32 APIs that don’t have equivalents on other operating systems.
We have a couple of options for these APIs:
- Make the API unavailable. You cannot use APIs that do not work across all .NET platforms.
- Make the API available but throw PlatformNotSupportedException. This would mean that we expose all APIs, regardless of whether they are supported everywhere or not. Platforms that do not support them provide the APIs but throw PlatformNotSupportedException.
- Emulate the API. Mono implements the registry as an API over .ini files. While that doesn’t work for apps that use the registry to read information about the OS, it works quite well for the cases where the application simply uses the registry to store its own state and user settings.
We believe the best option is a combination. As mentioned above we want the .NET Standard to represent the set of APIs that all .NET platforms are required to implement. We want to make this set sensible to implement while ensuring popular APIs are present so that writing cross-platform libraries is easy and intuitive.
Our general strategy for dealing with technologies that are only available on some .NET platforms is to make them NuGet packages that sit above the .NET Standard. So if you create a .NET Standard-based library, it’ll not reference these APIs by default. You’ll have to add a NuGet package that brings them in.
This strategy works well for APIs that are self-contained and thus can be moved into a separate package. For cases where individual members on types cannot be implemented everywhere, we’ll use the second and third approaches: platforms have to provide these members, but they can decide to throw or emulate them.
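As a rough illustration of the second and third approaches, consider a hypothetical helper around regex creation. This isn't how any platform actually implements it, and the PLATFORM_CAN_EMIT_CODE symbol is made up; it just shows where a platform would throw versus emulate:

```csharp
using System;
using System.Text.RegularExpressions;

// Hypothetical platform-side implementation of a member required by the standard.
// The member exists on every platform, so libraries compile everywhere; only the
// behavior differs based on what the platform can actually do.
public static class RegexFactory
{
    public static Regex Create(string pattern, RegexOptions options)
    {
#if PLATFORM_CAN_EMIT_CODE
        // Platforms with runtime code generation can honor RegexOptions.Compiled.
        return new Regex(pattern, options);
#else
        // Throwing (option 2): surface a clear error for the missing capability.
        if ((options & RegexOptions.Compiled) != 0)
            throw new PlatformNotSupportedException("Compiled regexes are not supported on this platform.");

        // Emulating (option 3) would instead silently drop the flag and fall back
        // to the interpreted regex engine, which behaves the same but runs slower.
        return new Regex(pattern, options);
#endif
    }
}
```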
Let’s look at a few examples and how we plan on modelling them:
- Registry. The Windows registry is a self-contained component that will be provided as a separate NuGet package (e.g. Microsoft.Win32.Registry). You’ll be able to consume it from .NET Core, but it will only work on Windows. Calling registry APIs from any other OS will result in PlatformNotSupportedException. You’re expected to guard your calls appropriately or to make sure your code will only ever run on Windows (see the sketch after this list). We’re considering improving our tooling to help you detect these cases.
- AppDomain. The AppDomain type has many APIs that aren’t tied to creating app domains, such as getting the list of loaded assemblies or registering an unhandled exception handler. These APIs are heavily used throughout the .NET library ecosystem. For this case, we decided it’s much better to add this type to .NET Standard and let the few APIs that deal with app domain creation throw exceptions on platforms that don’t support that, such as .NET Core.
- Reflection Emit. Reflection emit is reasonably self-contained, so we plan on following the same model as the registry, above. There are other APIs that logically depend on being able to emit code, such as the expression tree’s Compile method or the ability to compile regexes. In some cases we’ll emulate their behavior (e.g. interpreting expression trees instead of compiling them), while in other cases we’ll throw (e.g. when compiling regexes).
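On the consuming side, here is a minimal sketch of guarding a registry call so it degrades gracefully off Windows. The subkey and value names are made up for illustration, and the code assumes a reference to the Microsoft.Win32.Registry package mentioned above:

```csharp
using System.Runtime.InteropServices;
using Microsoft.Win32; // from the Microsoft.Win32.Registry NuGet package

static class InstallInfo
{
    public static string GetInstallPath()
    {
        // Only touch the registry when we know we're on Windows; on any other OS
        // these APIs throw PlatformNotSupportedException.
        if (!RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
            return null;

        using (var key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Contoso\MyApp"))
        {
            return key?.GetValue("InstallPath") as string;
        }
    }
}
```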
In general, you can always work around APIs that are unavailable in .NET Standard by targeting specific .NET platforms, like you do today. We’re thinking about ways to improve our tooling so that the transition between being platform-specific and being platform-agnostic is more fluid, letting you always choose the best option for your situation rather than being cornered by earlier design choices.
To summarize:
- We’ll expose concepts that might not be available on all .NET platforms.
- We generally make them individual packages that you have to explicitly reference.
- In rare cases, individual members might throw exceptions.
The goal is to make .NET Standard-based libraries as powerful and as expressive as possible while making sure you’re aware of cases where you take dependencies on technologies that might not work everywhere.
What does this mean for .NET Core?
We designed .NET Core so that its reference assemblies are the .NET portability story. This made it harder to add new APIs because adding them in .NET Core preempts the decision on whether these APIs are made available everywhere. Worse, due to versioning rules, it also means we have to decide which combination of APIs are made available in which order.
Out-of-band delivery. We’ve tried to work around this by making those APIs available “out-of-band”, which means shipping them as new components that can sit on top of the existing APIs. For technologies where this is easily possible, that’s the preferred way, because it also means any .NET developer can play with the APIs and give us feedback. We’ve done that for immutable collections with great success.
Implications for runtime features. However, for features that require runtime work, this is much harder because we can’t just give you a NuGet package that will work. We also have to give you a way to get an updated runtime. That’s harder on platforms that have a system-wide runtime (such as .NET Framework), but it’s also harder in general because we have multiple runtimes for different purposes (e.g. JIT vs. AOT). It’s not practical to innovate across all of them at once. The nice thing about .NET Core is that this platform is designed to be fully self-contained. So for the future, we’re more likely to leverage this capability for experimentation and previewing.
Splitting .NET Standard from .NET Core. In order to be able to evolve .NET Core independently from other .NET platforms we’ve divorced the portability mechanism (which I referred to earlier) from .NET Core. .NET Standard is defined as an independent reference assembly that is satisfied by all .NET platforms. Each of the .NET platforms uses a different set of reference assemblies and thus can freely add new APIs in whatever cadence they choose. We can then, after the fact, make decisions around which of these APIs are added to .NET Standard and thus should become universally available.
Separating portability from .NET Core helps us to speed up development of .NET Core and makes experimentation of newer features much simpler. Instead of artificially trying to design features to sit on top of existing platforms, we can simply modify the layer that needs to be modified in order to support the feature. We can also add the APIs on the types they logically belong to instead of having to worry about whether that type has already shipped in other platforms.
Adding new APIs to .NET Core isn’t a statement about whether they will go into the .NET Standard, but our goal for .NET Standard is to create and maintain consistency between the .NET platforms. So new members on types that are already part of the standard will be automatically considered when the standard is updated.
As a library author, what should I do now?
As a library author, you should consider switching to .NET Standard because it will replace Portable Class Libraries for targeting multiple .NET platforms.
In the case of .NET Standard 1.x, the set of available APIs is very similar to PCLs. But .NET Standard 2.x will have a significantly bigger API set and will also allow you to depend on libraries targeting .NET Framework.
The key differences between PCLs and .NET Standard are:
- Platform tie-in. One challenge with PCLs is that while you target multiple platforms, it’s still a specific set. This is especially true for NuGet packages, as you have to list the platforms in the lib folder name, e.g. portable-net45+win8. This causes issues when new platforms show up that support the same APIs. .NET Standard doesn’t have this problem because you target a version of the standard, which doesn’t include any platform information, e.g. netstandard1.4.
- Platform availability. PCLs currently support a wider range of platforms, and not all profiles have a corresponding .NET Standard version. Take a look at the documentation for more details.
- Library availability. PCLs are designed to enforce that you cannot take dependencies on APIs and libraries that the selected platforms will not be able to run. Thus, PCL projects will only allow you to reference other PCLs that target a superset of the platforms your PCL is targeting. .NET Standard is similar, but it additionally allows referencing .NET Framework binaries, which are the de facto exchange currency in the library ecosystem. Thus, with .NET Standard 2.0 you’ll have access to a much larger set of libraries.
In order to make an informed decision, I suggest you:
- Use API Port to see how compatible your code base is with the various versions of .NET Standard.
- Look at the .NET Standard documentation to ensure you can reach the platforms that are important to you.
For example, if you want to know whether you should wait for .NET Standard 2.0, you can check against both .NET Standard 1.6 and .NET Standard 2.0 by downloading the API Port command-line tool and running it against your libraries like so:
> apiport analyze -f C:\src\mylibs\ -t ".NET Standard,Version=1.6"^
-t ".NET Standard,Version=2.0"
Note: .NET Standard 2.0 is still a work in progress, and therefore API availability is subject to change. I also suggest that you watch out for the APIs that are available in .NET Standard 1.6 but removed from .NET Standard 2.0.
Summary
We’ve created .NET Standard so that sharing and re-using code between multiple .NET platforms becomes much easier.
With .NET Standard 2.0, we’re focusing on compatibility. In order to support .NET Standard 2.0 in .NET Core and UWP, we’ll be extending these platforms to include many more of the existing APIs. This also includes a compatibility shim that allows referencing binaries that were compiled against the .NET Framework.
Moving forward, we recommend that you use .NET Standard instead of Portable Class Libraries. The tooling for targeting .NET Standard 2.0 will ship in the same timeframe as the upcoming release of Visual Studio, code-named “Dev 15”. You’ll reference .NET Standard as a NuGet package. It will have first class support from Visual Studio, VS Code as well as Xamarin Studio.
You can follow our progress via our new dotnet/standard GitHub repository.
Please let us know what you think!






https://github.com/dotnet/standard
This link is broken by the way
Try again 🙂
It works, thanks!
🙂
what’s the meaning of your comment ?!
Wow. quite the fantastic post, as always.
Looks like this will greatly assist in getting .NET back into the browser via WebAssembly — or maybe even transpiled JavaScript? — as well, if I am not mistaken.
(For those who have not voted yet: https://visualstudio.uservoice.com/forums/121579-visual-studio-2015/suggestions/10027638-create-a-ubiquitous-net-client-application-develo )
Thank you for bringing quality developer engagement into our ecosystem, Sir Immo!
It just occurs to me that we could use Unity as a cross-platform client solution if Unity implements .NET Standard. Building data services as traditional class libraries, with the UI components in Unity 2D, it would be possible to combine them in enterprise development. And Unity’s visual effects are way better than XAML, with better hardware acceleration support.
Definitely worth a consideration now. I checked out this very scenario last year around this time and the major roadblock I found was that Unity was using a .NET 3.5 API. It was recently announced that they upgraded to 4.6, so that is no longer a blocking issue.
However, you still have to account for the UI components and framework. Unity is very much a gaming platform and their UI/menuing system is completely different from WPF/LOB applications. Additionally, Unity’s integration with .NET is very “un-.NET” in how it goes about introducing its aspects into POCO’s (pretty major, IMO) and also its naming guidelines (minor).
So you could definitely use .NET to interface with Unity and build applications that way, but if it isn’t a game, it will be a very “un-.NET” experience.
Still, it does manage to work in a web page via OpenGL, which is what JSIL/CSHTML use (and MSFT SHOULD be doing 😛 ), which technically makes it the closest ubiquitous .NET solution available ATM if you can deal with the friction points above.
Getting Microsoft to add a 2D graphics library to cross-platform .NET Core is a key stepping stone, a basis for XAML to use it as a rendering surface… A way to look forward… Asking for an existing Silverlight or WPF “conversion” or a “bridge” solution into HTML may not be the right question to ask.
Both Silverlight and WPF are large code bases… whose original teams may no longer be in the company. It’s unlikely Microsoft would grow these two code bases.
However, it may be more willing to invest in this new thing… .NET Core… Again, a cross-platform 2D graphical rendering surface… with a way to bind to WebGL.
This looks really nice, but would you please also unify having either WPF or HTML as the UI language. The final goal should be to have one base class core and one UI core which is then “translated” into the respective platform. Having Xamarin for iOS/Android, UWP for .NET Core, and WPF for Windows means we do not get rid of different UI languages. I really hope that what is shown above is only valid for one of the next releases and that everything then finally ends up in a unified UI solution. Of course there might be differences on different platforms, but these differences should be handled with different layouts and not with different UI approaches.
If you ask me, I would simply assimilate Xamarin into WPF, write a renderer for Windows, for .NET Core, and for iOS/Android, and let Xamarin (or at least its name) die. I’m currently reading Adam Nathan’s book WPF Unleashed, and reading things like the WrapPanel is named WrapPanel in WPF but not in Windows Store Apps, or that the ScrollViewer is able to zoom in Windows Store Apps but not in WPF, really really annoys me.
So my final question is, is WPF still of interest to you? Or wouldn’t it be better to simply leave the path of WPF and switch everything to HTML? Don’t get me wrong, I really like WPF, more than HTML, but having different techniques for the same result on the same platform or on other platforms, when talking about unifying things, is not very believable to me and my colleagues, as well as our customers.
The problem with HTML/JS is that it is completely incompatible with .NET, so you end up with two disparate, incompatible code bases, which is incredibly expensive both in cost and in time. Developers are having to develop, manage, and maintain 2 different code bases, and unfortunately this has been the “guidance” of the .NET team since Silverlight was put to death over five years ago.
There are 4500 votes in the uservoice above, which are basically asking for the same thing as what Silverlight was: .NET that works in the client-side browser process. It is nothing short of remarkable that so many .NET developers have been asking for .NET in a browser yet “leadership” continues to ignore their wishes, and would rather .NET developers continue to work in two different, incompatible code bases and languages, each with their own set of defects and (EXPENSIVE) cost concerns.
Please, take UWP (or rather, WPF), make it cross platform so that it works with iOS, Droid, Windows, AND THE WEB, and do away with Xamarin.Forms. I am also 100% in agreement to reinstate the WPF API, as UWP’s API and especially Xaml system is a terrible, terrible joke. What’s great about UWP is its Windows Store integration. Everything else feels like it was developed by an army of interns.
Please Microsoft: unite the .NET ecosystem, do away with “web” and “native” developers, and purge the scourge of JavaScript that is infecting our code bases! A .NET developer should simply be that, with their skillset transferable to any platform, web or otherwise.
And in case anyone is wondering, the ask here is not for another “installable plugin” like Silverlight, but a .NET development paradigm that operates in 100% HTML5-compliant resources and artifacts. Check out http://jsil.org/ for a little taste. That is a project run by some lone, genius developer, when really it should have been the efforts of an entire Microsoft division, rather than pointing us to the hellscape that is HTML5 and JavaScript!
Or, get some ideas from here: http://www.cshtml5.com Awesome project!
Indeed, CSHTML5 is based on JSIL. I am in agreement here. Truly incredible that this whiz-kid developer did the work of an entire MS group. Imagine if they had focused their resources on doing the right thing with .NET back in 2011 and made it work in a browser in an HTML5-compliant manner, rather than forcing us to use completely different programming languages in JavaScript and .NET that are unable to leverage and share code between the two sides.
Now for every .NET solution that needs to work on the web, we have .NET written in C#\F# for server-side code and the web written in JavaScript code, with each project being unable to share code between the other. We are forced to spend time fixing bugs in each set of code, while also learning all the particulars of each development scenario, practically doubling development costs.
The lack of innovation and complacency towards inefficient practices here is appalling.
Something like Fable? https://fable-compiler.github.io/
System.Xaml is the most requested feature on .NET Core’s repo right now and it continues to be ignored.
https://github.com/dotnet/corefx/issues/5766
They don’t care.
It has my upvote. Even in this thread they seem more willing to talk about JSON, a non-Microsoft technology, rather than one that was created by Microsoft and is loved by more Microsoft developers than practically any other technology in Microsoft’s history. In fact Xaml is used in every group in Microsoft with the exception of the asp.net group and somehow this factor has managed to be an inexplicable reason to shutter or suppress the growth and adoption of Xaml, despite the outcry and demand of its own developers.
Why ask for feedback or say you listen to feedback when you do not listen to the most obvious and popular requests, Microsoft?
> Why ask for feedback or say you listen to feedback when you do not listen to the most obvious and popular requests, Microsoft?
We do listen to feedback but we prioritize based on the overall customer base and needs. It’s not that we don’t care about System.Xaml, but its current absence from .NET Core isn’t the biggest problem our customers are hitting. It’s the large number of small differences in the BCL that prevent most people’s code from working. FYI, System.Xaml isn’t used very highly on NuGet. That’s not to say that System.Xaml isn’t used a lot — it’s just an indicator that it isn’t used very much outside of WPF and WF, neither of which are a cross-platform scenario right now.
You can’t use NuGet as the only metric for use. XAML is not cross-platform because you’ve chosen not to make it cross-platform. You’re making a circular argument here: it’s not a priority to make it cross-platform because it’s not used a lot in cross-platform development. If XAML was more widely available it would be used more, as indicated by the numerous votes this issue has gotten.
Being a front end technology, it’s not going to be very visible on nuget, but it is still used a lot by, if not the majority of, windows lob and app developers.
+1 to aL here. I hate to be the squeaky wheel here, Immo. 🙂 You’re still my hero, haha. But it definitely feels like you won’t give Xaml a chance, or are tying one hand behind its back and saying “it can’t fight.” NuGet is a terrible metric for the reasons stated. Further (and probably more damaging), it really continues to be pigeon-holed into a UI/front-end technology when in fact it can be used for so much more. UWP’s take on it — as many others have said — have only further damaged its brand, value, and perception.
Finally, you mention below that JSON is used across the industry, while Xaml is used across all of MICROSOFT, with the exception of the web groups (and I am in total agreement that they should be using it, too!). Where is the love for your own popular (and powerful) technology? It seems you would protect the MSFT investments FIRST, then outside considerations. The developers that have spent a decade or more with your constantly evolving technologies are the ones that are suffering and being punished for it here, and that truly seems backwards.
I will close here by mentioning that the System.Xaml vote is just a few votes shy of breaking 100, and there isn’t a single GitHub metric that you can point to that matches that, not by a long shot (closest is 17 as of this morning). Please consider listening to and rewarding the developers and customers who have been with you the longest, and who also understand and appreciate your own technology as you do… maybe even a little more. 🙂
> … WPF and WF, neither of which are a cross-platform scenario right now.
And whose fault is this again, exactly? This is certainly not due to a lack of requests, votes, submitted GitHub issues, and even demands from your faithful flock; it seems more like a concerted effort to simply ignore their persistent, obvious, and repeated wishes for a relevant .NET, one that features the foundational frameworks that have made .NET great thus far. Your GitHub and UserVoice are becoming wastelands where good ideas go to die; you simply do not listen to them.
Correction, the System.Xaml vote is now *over* 100 votes. 🙂 Thank you to all who have shown their support!!!
To be clear, I’m not saying that System.Xaml cannot or shouldn’t be cross-platform. What I am saying is that System.Xaml isn’t very useful outside of WPF and WF, and NuGet is a sensible proxy metric for that statement. Of course, System.Xaml could be made meaningful outside of the UI realm, but that’s a secondary step. I’d rather use our resources RIGHT NOW to make the foundational pieces work well and complete. Besides, by having a converged base, porting higher-level components, such as System.Xaml becomes much easier as well.
So it’s not an either-or statement; it’s a statement of order of execution.
>NuGet is a sensible proxy metric for that statement.
In my opinion it is not. For me this is like saying there is not a lot of HTML in npm, therefore HTML is not a commonly used technology. Yes, XAML is mostly used by ui, but there are a lot of ui out there 🙂
> Yes, XAML is mostly used by ui, but there are a lot of ui out there
+1 to aL here. Not just that, but lots of **applications** out here. Which makes this statement by Immo especially alarming:
> What I am saying is that System.Xaml isn’t very useful outside of WPF and WF.
XAML stands for eXtensible **Application** Markup Language. However, what seems to be baked into the minds of anyone who has the power to pull the trigger on this issue over there is that XAML is simply for UI.
With Xaml, you can define your **applications** in an expressive and powerful manner. Not just UI. There is a whole serialization engine behind Xaml that seems to be willfully overlooked by MSFT management. This same powerful serialization engine is why you are seeing many ask for it in different offerings and solutions — MSBuild being one of them (which, too, is being ignored despite a great deal of interest and upvotes).
As for the rest of your statements, I have been around enough project manager parlance to know when we are being given the runaround. I will hold you to your word that you will give this issue a fair look once .NET Standard 2.0 has been given the opportunity to surface — something a .NET Core System.Xaml does not seem to enjoy ATM.
That’s great. I’d like to see JSON serialization/deserialization in this, as it is more or less a standard data format for modern cross-platform development. I can see that the serializer is being considered an extension, but I believe JSON handling should be part of the core…
Today, the best library for dealing with JSON is JSON.NET. But by adding it to the .NET Standard we’d do the community a disservice. What matters is that the JSON support is widely available. And James, the author of JSON.NET, does a great job making sure that JSON.NET is available everywhere. His ability to do this successfully is a function of how hard it is for him to make changes. The best way to do this is by creating a library that targets .NET Standard because it can be updated independently from the standard itself and everyone immediately benefits.
Of course, this doesn’t mean we can’t or shouldn’t provide some built-in JSON support. We’ve talked with James about this in the past and I believe there is a lot of opportunity for us to collaborate with him on an even more performant way to provide JSON support in .NET. However, we’re very interested in doing this with him rather than just building “another” JSON.NET. We want a strong ecosystem for .NET, but this can only happen if we embrace libraries based on merit, rather than by who wrote it. That’s what open source is all about.
Great response Immo. I like the consideration behind it and it makes perfect sense from that point of view.
Seems like developers are asking for built-in Xaml support as noted above, as well. Xaml is far superior to JSON as a serialization mechanism, and is a Microsoft technology. Will you not support your own inventions?
.NET has, from the get-go, settled on open formats for exchange. Back in the early 2000s that was XML. In today’s world of RESTful services the de facto format is JSON. To me, it’s not an either-or. Just because we have JSON doesn’t mean we can’t have Xaml. But from a prioritization standpoint, JSON is much more urgent because it’s used across the industry and thus relevant to a much larger .NET customer base.
I’d love to see standard interfaces for json serialization that json.net and others implement. there are other json serializers out there, like Jil and service stack, that don’t get the attention they deserve.
Interesting. What would these interfaces provide that a generic data pipeline (such as Streams) wouldn’t achieve?
Besides having a common interface for doing the actual serialization/deserialization (which people usually end up defining themselves in every project), they could include stuff for setting names of JSON properties or otherwise customizing the serialization process. Similar to how data annotations provide an abstract way to annotate classes for UI that can then be used by different implementations.
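Something like this, just as a strawman for the kind of abstraction I mean (the names are made up):

```csharp
// Strawman only: a shared abstraction that JSON.NET, Jil, ServiceStack.Text, etc.
// could each implement, so application code doesn't hard-wire a particular serializer.
public interface IJsonSerializer
{
    string Serialize<T>(T value);
    T Deserialize<T>(string json);
}
```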
While the JSON library is great, it is not the full story of serialization. It can certainly be more standard: the fact that he insists on putting $type as the first element of a serialized object, and such (as an aside, the whole type handling of JSON needs more thought). But his JSON Schema is a commercial product. And while I don’t mind paying, putting my projects in open source if they use it is a no-go from the start. I think Microsoft can certainly make it less painful to rely on a third party for what became a most-used feature in web API development.
No silly $type, $schema or other arbitrary, polluting artifacts in Xaml. The POCO class definition is the schema, the way God intended. Why developers prefer more arcane serialization formats — and Microsoft continues to enable them despite having created a better one — is apparently way beyond my pay grade.
The problem, as a library author, with targeting .NET Standard is that it simply isn’t available for a lot of consumers yet. So we can’t just move to .NET Standard. There are also lots of issues with the way targeting only .NET Standard works, in that you have to specify all dependencies, which then end up being referenced in scenarios where they really shouldn’t be referenced since they are part of the framework. There really needs to be better docs around NuGet package authoring and targeting .NET Standard, PCLs, frameworks, etc. at the same time, since .NET Standard alone simply isn’t enough.
I have yet to author a simple package with only System.Runtime dependencies without having to list all PCL targets, frameworks, and .NET Standard; otherwise some users can’t consume it, since .NET Standard targeting simply isn’t working e.g. 1.67.
Agreed on all fronts. That’s what .NET Standard 2.0 is about: more compatibility and wider platform support. From a tooling experience side, it should feel very similar to .NET Framework, except that the framework itself is referenced as a NuGet package. It might not be a single package, but it will certainly not be the 30+ packages that it is today. .NET Standard is much larger than just System.Runtime.
It clarified the view of .Net for me.
Holy moly! This is getting all so confusing for a gent like me. There are too many Cores and Foundations and Standards and Frameworks. I am losing sight of the big picture because there is just so much to look at now! I yearn for the days of simple .NET, when little old me could understand so much of it without being buried up to my ears.
– Karl
Your criticism of our naming is duly noted. There is certainly room for improvement. However, to a certain extent I also think some part of the complexity is inherent to being a cross-platform technology. We’ll try our best to make this as easy as possible, but we can’t fully hide the fact that libraries may run on different operating systems. We believe that .NET Standard is a large step forward, but I also think we’ll need to invest in more tooling to help developers reason about the differences.
If you have specific questions or areas of confusion, please let me know. I’m more than happy to explain or simplify.
And what is DNX again? Is it dead already?
DNX was many things at once. I assume you’re referring to the concept of having a shared framework that your app runs on, rather than having your app deploy the runtime and framework itself. Our tooling simply handles this by building a .NET Core application and not using the self-contained option.
I was mostly confused about naming changes. You guys go to great lengths to avoid porting time-tested tech from .NET Framework!
That’s actually the number one thing that .NET Standard 2.0 achieves: it brings in more of the existing tech from .NET Framework, which, via our compat shim, also extends to 3rd party libraries.
– As you said, .NET Core changed the way reflection worked. Does that mean that you now need to make breaking changes to .NET Core to get reflection back in line with how it works in .NET Standard?
– Why is Phone Silverlight supported here but not browser-Silverlight?
– I don’t understand why .NET Framework and Xamarin continue to exist in this new model. It makes it super-confusing. I thought the whole point of .NET Core was a cross-platform .NET that will eventually subsume Mono and .NET Framework once its fully developed.
You guys need to take a step back and take a serious look at this solution. This is a 14-page document and after reading it cover to cover I still barely understand what the changes are or why. Is this really the story you want to be presenting to devs about your platform? I think it makes it a lot more likely that they’ll leave for less muddy waters elsewhere. Why is it that .NET Core cannot become the only .NET and be an upgrade path for .NET Framework / Xamarin? Having a gaggle of different .NET-something or others is a total mess.
> Does that mean that you now need to make breaking changes to .NET Core to get reflection back in line with how it works in .NET Standard?
No. The reason being that when we created the API shape, we had to be able to run on .NET Framework. Thus, we created the different shape partially by subsetting and partially by adding some new APIs (such as TypeInfo). .NET Standard 2.0 reflection will simply look the way .NET Framework’s does today, which is a strict superset of what .NET Core has today.
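A minimal sketch of the difference in shape (just an illustration):

```csharp
using System.Linq;
using System.Reflection; // brings in the GetTypeInfo() extension method

class ReflectionShapes
{
    static void Show()
    {
        // .NET Core 1.x / .NET Standard 1.x shape: most members hang off TypeInfo.
        var declaredMethodCount = typeof(string).GetTypeInfo().DeclaredMethods.Count();

        // .NET Framework shape, which .NET Standard 2.0 makes available everywhere:
        // the familiar members are back directly on System.Type.
        var publicMethods = typeof(string).GetMethods();
    }
}
```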
> Why is Phone Silverlight supported here but not browser-Silverlight?
Because Phone Silverlight is newer and thus has more APIs than the regular Silverlight.
> I don’t understand why .NET Framework and Xamarin continue to exist in this new model. It makes it super-confusing. I thought the whole point of .NET Core was a cross-platform .NET that will eventually subsume Mono and .NET Framework once its fully developed.
.NET Core is currently not large enough in order to subsume all the scenarios that you can realize with .NET Framework and Xamarin today. My personal hope is that we can grow .NET Core enough to be able to do that because that would mean having a unified implementation.
However, even in that world you need some concept of a platform-specific “distribution” of .NET Core. For example, both UWP and Xamarin include APIs that are outside of .NET and are specific to the operating system. In UWP that’s WinRT, and in Xamarin’s case these are the iOS and Android bindings. If you want to produce a component like Xamarin Forms (which is an abstraction over the platform-specific UI stacks), you still need a way to target an operating system. I’d say the easiest way to model this is by having specific versions of .NET. In order to make targeting them all approachable, we’ve added .NET Standard.
Let me try to say it differently: .NET is about giving you the most productive environment for a given user experience. And .NET Standard provides the vocabulary (i.e. types) that allows you to provide components and abstractions that can work across many .NET user experiences.
I have faith that we’ll get there, but right now, even though I really love Core, it’s a MAJOR productivity killer.
We have a customer that’s a Mac & Linux shop using Node, and would jump at the chance to use core, but at this point, we just can’t recommend it for anything other than proof of concept play things.
The advantages of C# over Javascript are massive, and IMO, that’s an understatement!
It’s imperative that MS gets this right as soon as possible.
We wholeheartedly agree! Thanks for your patience. The good news is that most of our time is currently spent on .NET Core and making it implement .NET Standard 2.0, which we believe captures the essence of .NET productivity. This includes having the APIs you know & love and the compatibility with existing .NET Framework libraries.
Thanks for all your work. Those of us deep in the Java world are also starting to take notice.
Throwing PlatformNotSupportedException when you want to compile a regex is nonsense: you have a perfectly valid interpreted implementation already, just use that. I don’t want to check if adding the RegexOptions.Compiled flag will make my app explode or not every time, just ignore the flag entirely if it’s not supported – it’s a pure optimization without change in behavior after all.
We generally prefer emulation over throwing exceptions. Let me look into this, but AFAIK there was a reason why we settled on throwing.
And this is how we end up with unexpected and awkward-to-solve perf problems like these:
https://github.com/dotnet/corefx/issues/10470
https://github.com/JamesNK/Newtonsoft.Json/issues/968
I know I’ll lose this argument, but I would personally *hugely* prefer that members that don’t ‘work’ on all platforms are not available by default *at design time*. There should be extra hoops to jumps through to call those members so you know at the time you write the code that it will only work on platform x/y rather than being lulled into the pit of failure. This is one reason I still prefer PCLs. Yes it is inconvenient, but far less so than runtime errors in environments you didn’t/couldn’t test on.
> I know I’ll lose this argument, but I would personally *hugely* prefer that members that don’t ‘work’ on all platforms are not available by default *at design time*.
That’s our general approach. However, if you think about it, you can’t make type members (e.g. methods, properties, etc.) additive. The only thing you can make additive are types, as two different types can live in separate assemblies, but we don’t have a mechanism to split a single type across two different assemblies. That’s why larger technologies, such as the registry, behave exactly as you describe: they aren’t available by default, but you can add them.
Moving forward, we try to avoid creating types where only parts of it work everywhere. But as always, there will be cases where we couldn’t predict the future and are forced to throw.
My hope is that we can provide design-time tooling to help with this.
Simply use extension methods and put those methods that are not available on all platforms into separate assemblies. There are so many possibilities to inject or attach platform specifics to a common functionality. It’s all about architecture and the features that C# gives us already.
PS: Even if I respectfully disagree with some decisions, I appreciate all the hard work and the thinking that come out of the MS and .Net teams.
+1!!!
Not exactly an easy problem to solve, any step of the way.
That said, I would also like to echo Lucas’ concern. Throwing on unsupported functionality seems like a nightmare. Simply not having the API available to code with (as we have had with PCLs to date) seems much more natural. It’s a pain, but much less than having to code and then blow up once you run the code.
If there is some awesome magic sauce you can apply to it during development (as mentioned) then that might help. Guess it would have to come down to developer experience, as always. 🙂 It just seems that throwing an exception during runtime rather than catching something during development time is a bad way to go. As long as that doesn’t happen, we good.
Please tell me that this is different from this: https://xkcd.com/927/
I’m very much aware of that challenge and I’ve talked about it here:
https://blogs.msdn.microsoft.com/dotnet/2014/12/04/introducing-net-core/#relationship
This has to be the comment of the year. 😛
Any plan to standardize the C-API for embedding .NET?
Which C-API are you referring to?
Edit: I assume you’re referring to the CLR hosting APIs?
Exactly those CLR hosting APIs; right now there are 3 different APIs for .NET Framework, .NET Core, and Mono. This is very hard to maintain for open-source projects.
The .NET Framework CLR host is COM-based (and thus does not work cross-plat) and it assumes the CLR is installed in a well-known central location, which does not work for the .NET Core CoreCLR runtime. The CoreCLR hosting API is 4 plain C methods, the same for both Windows and Unix: https://github.com/dotnet/coreclr/blob/master/src/coreclr/hosts/inc/coreclrhost.h
.NET Native currently doesn’t have hosting APIs but we expect that we’ll eventually add them and align them with the ones from CoreCLR.
Mono currently has a very different set of hosting APIs and provides many knobs for influencing runtime behavior.
So currently we don’t have a converged hosting API. If we were to add them, which scenarios would you like to see handled the most? Are the CoreCLR ones enough?
I’m definitely not qualified to answer if CoreCLR C-API is enough, but Luke Stratman may be:
https://github.com/pythonnet/pythonnet/issues/96#issuecomment-234818085
I can’t speak for everyone’s usage scenarios, obviously, but the projects that I am working on focus on initializing the CLR in other runtime processes to enable cross-language interaction with .NET code. Specifically, Node.js (https://github.com/tjanczuk/edge) and Python (https://github.com/pythonnet/pythonnet). For these usages, the hosting APIs from CoreCLR are sufficient to do what we need.
One thing that would be useful for CoreCLR specifically is to add a shared API method to libhostpolicy dedicated to simply spinning up the CLR. libhostpolicy’s corehost_main method assumes that you are executing an application, but for scenarios like mine (and I imagine many others) where we are trying to embed the CLR in other runtimes or applications, it would be useful to have one method in libhostpolicy to call to spin up the CLR and get a host handle and a domain ID. As it stands, we are not using libhostpolicy at all and have to duplicate about 95% of what’s in https://github.com/dotnet/core-setup/blob/master/src/corehost/cli/hostpolicy.cpp#L18 in order to accomplish the same thing. It’s certainly not a world-ending problem, but having that extra method would be convenient for CoreCLR embedding projects. I plan on submitting an enhancement to the core-setup project, but figured I’d leave this here to facilitate potential discussion.
“Separating portability from .NET core”
Doesn’t this create more fragmentation, defeating the whole purpose you are trying to solve?
I really feel this decision is going to cause more problems than it solves.
Not necessarily. I think we’ll converge our implementations over time more and more. However, we still need a specification that allows forking to occur without immediately resulting in massive amounts of incompatibilities. That’s what .NET Standard brings to the table.
Forking is vital for experimenting and innovation. I think we’ll spearhead innovation in .NET Core as we planned and later add them to .NET Standard. However, the key is later. This allows us to push the envelope, see whether we like the result and then standardize. That’s similar to how other standards with different parties work (e.g. C++ or POSIX).
When is Dev 15 slated to ship?
There is still not much information regarding support for VB.NET in .NET Core and ASP.NET Core and for creating .NET Standard libraries. Any information regarding the plans for VB.NET support would be welcome so I can make an informed decision moving forward.
I didn’t mention any language (C#, VB, or F#) because all of the above is transparent to the languages. From the compiler’s viewpoint it’s merely the set of APIs you can use. All languages will be able to target any version of .NET Standard.
Thanks. I understand I can target .NET Standard from all languages. What I wanted to know was: what are the plans for supporting the creation of .NET Standard libraries and ASP.NET Core applications using VB.NET?
Gotcha. That’s not my area but I’ll forward the question.
Thank you.
Will Unity implement .NET Standard?
Great question. We’re working closely with Unity. They are currently in the middle of moving to the latest version of Mono. Once that is completed, they will also implement .NET Standard 2.0 (because Mono does).
How does this work on Windows 10 v1607? There is just .NET 4.6.2. Does that mean we should wait for the next version of the .NET Framework that supports .NET Standard 1.6 and 2.0 or above?
.NET Framework 4.6.1 and all later versions will support .NET Standard 2.0. You can either target .NET Standard 1.4 today or you can wait for .NET Standard 2.0. I recommend not using .NET Standard 1.5 or 1.6 because they added APIs that .NET Standard 2.0 doesn’t support.
Very pleased to see AppDomain included!
Where is System.Data? Is EF 7 considered part of the app model?
EF will be a library that targets .NET Standard. So it will be available everywhere but EF itself isn’t part of the standard.
Does it mean that .NET Standard 2.0 is already in the previously released .NET Framework 4.6.1 install package, or do I have to install an update for .NET Framework 4.6.1?
I guess you could take it as a yes. To be more accurate, netstandard2.0 is not “in” .NET Framework 4.6.1 but “supported by” it.
One great thing about .NET Standard is its backwards compatibility, which means a package targeting a higher standard can always depend on a package targeting a lower (or the same) standard. But by introducing breaking changes, this is no longer guaranteed. I think this is serious, and we have two options:
1. Remove .NET Standard 1.5/1.6, or mark them outdated/deprecated immediately and tell all library authors not to target them any more, and do everything possible to keep .NET Standard backwards compatible now and in the future.
2. Face and accept “breaking changes” as a valid concept by establishing a package dependency management mechanism for them. For example, identify the breaking change by the first number in the package version, so everyone and the tooling knows that LibA-2.x is not compatible with LibA-1.x. This might bring a lot of updating/multi-targeting work for library authors, because they might have to treat LibA-2.x and LibA-1.x as two different packages. At least we would have a standard way to deal with breaking changes.
.NET Standard is evolving fast, and it’s quite possible that breaking changes in .NET Standard’s evolution are inevitable. We got lucky this time, but what do we do next time if we do nothing now?
> One great thing about .NET Standard is its backwards compatibility, which means a package targeting a higher standard can always depend on a package targeting a lower (or the same) standard. But by introducing breaking changes, this is no longer guaranteed.
This break was a one-off thing to bootstrap adoption of .NET Standard. We want folks to be able to use .NET Standard 2.0 because it has much better compatibility with the past, but in order to be viable it has to be implemented by the platforms you care about at the time we release. I don’t expect breaking changes like this to occur moving forward.
Our product is built against .NET Framework 4.6.2 and requires .NET Framework 4.6.2 to be installed.
If we want our ‘next’ version to use .NET Core 2.0, does that mean we will have to ask all our customers to revert to .NET 4.6.2?
Or would the release of .NET Core 2.0 coincide with .NET Framework vNext, meaning we can urge them forward onto that one?
I think we would need .Net Core 2.0 because of our reliance on AppDomains – see another comment of mine.
Matt
By “ask all our customers to revert to .Net 4.6.2?” I of course meant ask all our customers to “revert to .Net 4.6.1?”
I still need to know whether this will break applications. If we ship .NET Standard 2.0 apps on servers with .NET Framework 4.6.2, will this screw our customers?
Is there a way to tailor my dependencies down to just the ones I need instead of referencing the full packages of a certain version of .NET Standard?
I guess it would be best if there were a tailor option when we release, maybe “dotnet pack -tailor”?
I was just about to ask a similar question so rather than doing it separately, I’d like to expand on this question if I may.
One of the supposed benefits of the way the original .NET Core was structured was that it “broke it into lots of smaller NuGet packages” and you had to explicitly opt in to what you wanted. This meant that if you didn’t need, say, XML support then you wouldn’t include that. The benefit of that was a lower memory footprint, which should vastly increase hosting density, given that most small web sites and even microservices aren’t that heavily utilised.
I read somewhere that the minimum memory footprint of a website on the full .NET Framework is ~200 MB, whereas with (the original) .NET Core you should be able to get that down to ~11 MB. This is a big deal in a world where we are breaking applications apart into microservices.
If I read this article correctly, .Net Standard is a single thing that encompasses a whole bunch of stuff, therefore forcing you to suffer that memory footprint? Since it will only get bigger over time, does that mean that the goal of minimising memory footprint by componentizing the framework is gone?
I am not wholly opposed to that, I can see the simplification benefit probably outweighs the memory optimisation for 99% of people, but it would be good to understand this better.
Or, am I completely missing how this actually works? Or is there something in @jarenduan’s point where maybe some build-time analysis could be done?
It works a different way. .NET Standard is deployed as a thin package, called a reference package; it is small in size and has only API definitions.
If your code needs some package (e.g. System.Linq), it will be referenced in project.json separately.
So .NET Standard is not a bunch of packages bundled into a single package; it is just a contract-definition package.
This is my understanding of how it will work
I was hoping that would be the answer. If that is correct then it *sounds like* jarenduan’s “tailor” option is in effect the reality… ?
The answer Alexander.Sidorenko provided above is pretty close. Think of .NET Standard as a toolable specification, like a header file in C++, that is delivered as a single NuGet package.
Each platform in turn decides how that specification is implemented and delivered. For .NET Core, we have many packages today, but as I hinted at in the post we’re likely to condense the number of packages. Rather than forcing you guys to hand-craft the list of dependencies in a JSON file, we’ll provide tooling to tailor the dependencies your application has to deploy.
Virtually all platforms will implement .NET Standard as a set of components, not as a monolith, but we treat that as an implementation-level concern, which gives us more flexibility, including static linking and submodule trimming (as we do in .NET Native and on iOS). The benefit for you is that you have fewer things to manage.
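To make that concrete, here is a minimal sketch of what a .NET Standard class library’s project.json looks like with today’s tooling (the package version is illustrative): you reference the single NETStandard.Library reference package instead of hand-picking individual System.* packages.

{
  "dependencies": {
    "NETStandard.Library": "1.6.0"
  },
  "frameworks": {
    "netstandard1.4": {}
  }
}

Each platform then supplies its own implementation of those APIs at run time, and the tooling decides what actually gets deployed.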
Forgive me if this is a stupid question, but can you point to an explanation of the difference between .NET Core and .NET Standard? Or, rather, “what is .NET Core”?
I’ve been developing on “.Net Core” since October 2015 and have some apps in production, which use the .Net Core paradigms and ASP.Net Core but target the full .Net Framework. I’ve also got some smaller services and a couple of smaller sites running on the .Net Core tooling/paradigm/whatever and targeting netcoreapp and netstandardxx as well as some internal libraries multi-targeting Net Standard and Net4xx etc (which is a great developer experience btw).
But, despite that, I don’t think I could actually explain to someone what, exactly, “.Net Core” is as opposed to what it’s not. Is there a concise explanation somewhere?
PS That compatibility matrix you show in the blog post; It’s a really important matrix and I have seen and used it before to build libraries. But it only seems to appear in blog posts and is really hard to google and even harder to bing. It may be worth giving it an official home and doing some SEO work on it 🙂
“.NET Core is a blazing fast, lightweight and modular platform for creating web applications and services that run on Windows, Linux and Mac. ”
-https://www.microsoft.com/net/core/platform.
I understand your confusion, and sometimes people use the two terms without distinction.
When I start to get confused, I tell myself that .NET Core is a platform that implements .NET Standard, which seems to help.
You can find the matrix on GitHub:
https://github.com/dotnet/standard/tree/master/docs/netstandard-20
.NET Core and .NET Standard are open projects on GitHub, so a lot of the progress happens there. The docs are not as well organized as the MSDN Library and are hard to google.
.NET Framework, .NET Core, Xamarin, Mono, and Unity are concrete implementations of .NET.
.NET Standard is a specification that covers which APIs a valid implementation has to offer.
Think of .NET Standard as POSIX for .NET.
> we’ve defined .NET Standard as a single NuGet package.
Can you expand on this a bit more (perhaps that’s a different blog post)?
The goal for .NET Core, and also a stated goal in general going forward, has been that modularity would be a key feature of the platform: you’d only use the things you needed and not pull in a bunch of other stuff. How does this relate to that? Will I end up with the entire .NET Standard in my bin folder all the time if I make a standalone deploy? Will you have to rev the entire .NET Standard if a change is made to the API surface of one library? What about changes that don’t change the API surface, like performance updates or patches?
It’d just be sad if we end up in the same situation as we did with the BCL: a global thing that is difficult to change because it’s so big and shared across everything on the machine.
The modularity and ability to make standalone deployments is super valuable, please don’t lose that 🙂
See my comment above.
Well to be honest, as with any complete overhaul, or brand new product, I’m almost eager to try out version 1.1, never version 1.0.
Looks like there are a lot of moving pieces, but kudos to getting this information/thought process out in the open.
For now, I’m still sticking to the .NET framework – but maybe in a year or so, I’ll start porting over my stuff to Core.
I’ve lost the plot after the 30th page. Hey, are you sure you’re EXPLAINING something? I feel I’m becoming more confused. MS was never good at naming: you push the bedraggled word “Windows” everywhere whether it fits or not (hello, WinRT!). Now “.NET”: tens of names hardly having anything in common except “made by MS”. When will you stop that circus and start naming things DIFFERENTLY? Obviously “.NET Core” IS NOT .NET; we know “.NET” as a Windows-only, huge framework. “Core” is a different story. The .NET Compact Framework is absolutely another world. But all that MESS is named “.NET”. Is MS out of imagination???
And now you come with a “standard”, because MS itself got tired of its own “similar names”. But your standard is useless until you say straight out what the GOAL of .NET is. You have no idea what you want, mixing the desktop and mobile worlds. Obviously even if you create a thousand standards it won’t help: your mess is not in the PC, your mess is in your head.
You might want to read this blog post more carefully 🙂 It is not a complex thing: with .NET Standard, they just try to unify the API across all .NET platforms.
Finally I see someone telling the truth. They have no idea what they want but pretend they do. Trying so hard to drive every follower crazy, and they just don’t care. Whatever, I’m tired.
I am confused by the name and what exactly is supported by each version.
Oh, BTW, there is also the .NET Micro Framework…
My only understanding is that .NET Standard is the new .NET Core? And .NET Core becomes one of the non-Core flavours?
.NET Core is the runtime and .NET Standard is the base class library.
With .NET Standard, all the runtimes (.NET Framework, .NET Core, and Mono) will use the same base class library. .NET Core will still be its own thing (and hopefully, as it develops, it will replace the other two).
Thanks for the great summary. I feel like it clarified some things related to the direction of .NET.
What on earth is a “tooling story”?
Otherwise it’s great to see an attempt at unification and clarity.
I meant the approach people use for building reusable class libraries with our tooling. For example, PCLs were a tooling story for building cross-platform .NET class libraries. The .NET Standard-based approach is a different story.
Will .NET Standard be fully supported in current Visual Studio versions or only in the upcoming one?
The fully-featured support will only be available in the upcoming release. Since it’s a NuGet package you can probably use it with previous versions, but some things might not work as expected (for instance, the tooling will probably not let you reference a .NET Framework library from a library targeting .NET Standard).
This article makes a lot more sense if you take the version numbers for .NET Standard and Core and divide them all by 2. The 2.0 version of .NET Standard will be the first reasonable target for a production codebase.
As a former customer I can’t blame you. Now that I work for Microsoft: how dare you? 😉
Seriously though, we think .NET Core 1.0 is quite usable today. Its major weakness is compatibility with existing code, and that’s what .NET Standard 2.0 and the upcoming version of .NET Core will address.
Great post, it really clears up the PCL and .NET Standard discussion. I still have some concerns though. Is it possible to expand the NuGet tooling to not show packages that are not compatible with the selected projects? Sometimes I’m trying to install a package but then it says: sorry, can’t do, your profile doesn’t match the selected NuGet package. If that’s not possible, then please display which profiles the NuGet package is compatible with. Well, you get the point: the NuGet package tooling doesn’t play well with the profiles.
Another thing which worries me more is ahead-of-time compilation for iOS. I was able to use code with reflection in a PCL shared by Xamarin Android and Xamarin iOS. Everything seemed fine after developing and testing my code on the iOS simulator. However, when I tried to run my code on an actual device, all hell broke loose. The ahead-of-time compilation meant that certain reflection code was not executable, and this caused runtime issues. How are these reflection APIs available for Xamarin iOS? This shouldn’t be possible, right? I noticed that in the .NET Standard library reflection is still available. I do not understand this. Reflection is simply not available on Xamarin iOS since it requires JIT compilation. I know that AOT compilation is not .NET’s default way of compiling, but since you claim that .NET Standard supports Xamarin, you implicitly claim that all APIs in the .NET Standard library are supported on Xamarin iOS, which is not true. Can you please clarify this?
Yes, the plan is to improve the NuGet tooling as well. Whether we’ll hide packages that aren’t compatible is a different thing. Imagine a package isn’t compatible due to the way your project is configured, i.e. your project might be targeting a lower version of a .NET platform from what the package requires. Hiding the package doesn’t help you discover that. It might be better to let the package install, and fail the build with an actionable error message, potentially providing a right-click-fix experience.
I’m not an expert on AOT for iOS, but AOT for Windows (AKA .NET Native) certainly supports reflection. Of course, in order to allow for tree shaking the linker has to know which APIs to maintain. In .NET Native you can configure which metadata to keep. Generally speaking, reflection, including reflection invoke, does not require a JIT. What does require a JIT is runtime code generation using reflection emit. That’s why reflection is part of .NET Standard while reflection emit is not.
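To make the distinction concrete, here is a rough sketch using plain BCL APIs (nothing platform-specific is assumed): the first half only inspects and invokes code that already exists, while the second half generates new IL at run time, which is the part that needs a JIT.

using System;
using System.Reflection;
using System.Reflection.Emit;

class ReflectionDemo
{
    static void Main()
    {
        // Reflection invoke: looks up and calls existing, already-compiled code.
        // It only needs metadata at run time, so AOT platforms can support it.
        MethodInfo parse = typeof(int).GetMethod("Parse", new[] { typeof(string) });
        object value = parse.Invoke(null, new object[] { "42" });
        Console.WriteLine(value); // 42

        // Reflection emit: generates brand-new IL at run time. That IL has to be
        // compiled on the fly, which is why it needs a JIT and is not in .NET Standard.
        var method = new DynamicMethod("AddOne", typeof(int), new[] { typeof(int) });
        ILGenerator il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldc_I4_1);
        il.Emit(OpCodes.Add);
        il.Emit(OpCodes.Ret);
        var addOne = (Func<int, int>)method.CreateDelegate(typeof(Func<int, int>));
        Console.WriteLine(addOne(41)); // 42
    }
}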
Well, there is more than hiding and showing, of course. You could display that the package isn’t applicable for the current profile, for instance. I’m sure you guys will figure it out in an intuitive manner. I’m looking forward to the improvements.
Great to hear that .NET Standard is fully compliant with AOT on iOS. Does this also mean that when I have, for instance, System.Net.Http referenced in a PCL, I don’t have to reference it again in my iOS project? Since the .net45 variant in my PCL will be replaced with a .NET Standard one which actually works with AOT on iOS.
Look at this stack overflow question: http://stackoverflow.com/questions/18669339/httpclient-execution-runtime-error-executionengineexception-attempting-to-jit/38992635#38992635
All I ask is that you separate out .NET Core, .NET Standard, and full framework in the project templates. They aren’t interchangeable, and people keep selecting the wrong one because class library (.NET Core) shows up higher than class library in the project templates for some reason. When you add in class library (.NET Standard) it will be really confusing, especially since Core and Standard mean the same thing to a lot of people. Please consider moving all .NET Core templates to their own category and the same for .NET Standard.
Yes, we will have separate templates for the reasons you’re mentioning.
It has become a full-time job just keeping track of Microsoft’s changes. Unfortunately, I don’t get paid for this overhead. I am over trying to keep up with Microsoft’s “oh, this is what we REALLY meant” planning style. Microsoft is at war with itself, trying to both trap and free developers at the same time. As talented as Microsoft is, .NET Standard should not be the new standard; it should have been the plan all along. So I can’t really congratulate Microsoft for racing to work on time only to arrive without pants.
There are too many changes happening too quickly, even for this business. The whole thing is incredibly confusing.
Please build a universal XAML system for ALL clients, including the web browser. XAML is a very powerful UI presentation technology, and C#+XAML is much better than JS+HTML5/CSS3 for application development.
Currently, our team (and many others at different companies, as far as I know) develops applications for Windows Server (backend services), Windows desktop, the Windows 10 Store, Android, iOS, and the web, so we have to maintain skills in C#/Java/Objective-C/HTML/JavaScript/CSS (plus many component libraries such as jQuery, Angular, Bootstrap…). We have to implement the same business logic three or four times (along with the bugs and maintenance…), and the team keeps getting bigger and bigger. If MS could bring .NET+XAML to all the client environments, I believe our team could be a third to a quarter of its current size.
And that is not the end of the story: we develop Windows desktop applications (WPF) and Windows Store apps (UWP), and we even try to use Xamarin for iOS/Android, but the XAML dialects differ and we cannot share XAML code between them, even though they all come from Microsoft. .NET UI technology is fragmented and developers are frustrated.
Suggestion:
(1) Build a universal XAML system for desktop/Windows 10 Store/Android/iOS. This step should not be difficult or take much time, as it is all under Microsoft’s control; merging the different XAML dialects into one would be good.
(2) Compile .NET + XAML to HTML/CSS + JavaScript (or HTML5 canvas + WebAssembly in the future). This would be a revolutionary technology and Microsoft would be written into the annals of history. Developers have used HTML + JS to develop web applications for more than 20 years; HTML 2/3/4 became HTML5, CSS became CSS3, JavaScript versions were also upgraded, and many JavaScript libraries have appeared and died, but the runtime model (HTML + a scripting language) has not changed. If .NET+XAML applications could run directly in the web browser, I believe this would eventually change web development.
Plus one: Silverlight is really a good technology, but Microsoft gave it up, and that was a stupid decision.
*English is not my native language; I spent a lot of time writing this down. I hope you will read it, thanks!*
Indeed, Aaron. Your English is great and the message is loud and clear, sounding VERY MUCH like the long list of comments found in this vote: https://visualstudio.uservoice.com/forums/121579-visual-studio-2015/suggestions/10027638-create-a-ubiquitous-net-client-application-develo
Have you voted yet?
For every non-.NET language such as JavaScript that is introduced into a .NET solution, the overall development costs are increased by said factor. As you point out, in addition to the amount of code that you now have to develop and maintain between these code bases, organizations are saddled with the inconvenient and expensive task of finding non-.NET resources to handle the incompatible .NET-language(s). This increases the team size by AT LEAST 25%-33% as you mention. So, the expensive nature of this problem does not just impact the amount of code, but also businesses’ bottom line in terms of added overhead that is required to deal with two incompatible and disparate code bases.
This also accounts for why organizations are slowly starting to adopt JavaScript-only solutions and dropping .NET altogether, as JavaScript is truly ubiquitous and can reach all the places that .NET can today as well as all the places it can’t (the client browser). It doesn’t help that Microsoft is assisting these efforts by creating and promoting the necessary tooling such as VSCode that enables these organizations to do exactly that.
Thanks, Mike.
I have already read that UserVoice suggestion, and I voted for it with strong support.
I have a strong Win32 programming background: I learned to develop Windows 3.1 programs with Visual C++ starting in 1997, and used Delphi 6 to develop Windows 95/98/2000 client/server applications in the year 2000. In Delphi there was IntraWeb for developing web applications, which was amazing compared to ASP/JSP at that time. I love strongly typed, compile-to-native programming languages for my work, though I can still write very long JavaScript.
What I mean is, I hope C#/XAML can run everywhere, just like native languages such as C++/Object Pascal: write once, compile to different target binaries/IL. A single codebase can greatly reduce time and cost for a project team.
I could not agree more, Aaron. Microsoft indeed has a diamond in its hands (C# + XAML), and “compile for the web” would be something that rocks developers’ world and brings developers back to MS tech in droves! As I said above, look at the work these guys are doing here: http://www.cshtml5.com Imagine if that project (or something similar) had the power of Microsoft behind it…
And as you said, the cost to maintain two code bases, two teams, etc. is enormous!
To Microsoft: by compiling C# + XAML to the browser, you have a chance to make history. Do not miss it…
*high five* Anthony. 🙂 Much respect and agreement!!! JSIL/CSHTML is indeed the path we should be on. Unfortunately, it doesn’t allow for PCL support, which is what enables shared code between client and server. So, you are ultimately still stuck with two different code bases, even though they are both technically .NET. Close, but no cigar. Otherwise I (and many others) would be all over it.
But like you said: imagine if MSFT was behind it. That problem (and any others) would be but a distant, bad memory!
Can I suggest not throwing when you can’t compile a Regex? I mean, compilation is only used to get a performance improvement.
Your suggestion would be absolutely the wrong one for our kind of products (not taking away from the fact that it may be the right choice for yours).
I guess we should get the ability to choose when we buy in to a package.
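In the meantime, one way to keep that choice in your own hands is a small fallback helper along these lines. This is only a sketch: whether (and with which exception) a given platform rejects RegexOptions.Compiled is an assumption here, so adjust the catch to whatever your target actually does.

using System;
using System.Text.RegularExpressions;

static class RegexFactory
{
    // Prefer the compiled engine where the platform supports it;
    // otherwise fall back to the interpreted engine.
    public static Regex Create(string pattern)
    {
        try
        {
            return new Regex(pattern, RegexOptions.Compiled);
        }
        catch (NotSupportedException) // PlatformNotSupportedException derives from this
        {
            return new Regex(pattern);
        }
    }
}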
Hi, I love .Net Standard but I don’t understand .Net Core.
Why would you develop .Net Core when you already have Mono, which is more stable, runs on more platforms and has a bigger API surface?
It seems that the only thing that .Net Core has going for it over Mono is it being modular. Why then not work on improving Mono and making it modular instead of doing all the effort on .Net Core? What am I missing here?
We’re working very closely with the folks on Mono, and code is flowing in both directions between Mono and .NET Framework/.NET Core. At this point, we can’t simply take one over the other because each side has unique features that make it more suitable for certain tasks.
Thanks for the post. I have two follow-up questions coming from the Apache REEF community, where we are trying to move to CoreCLR. .NET Standard 2.0 seems like a much more attractive target for that than CoreCLR 1.0.x. However:
(1) How does one begin to target .NET Standard 2.0 today with a .NET Framework code base? Given that both Mono and .NET 4.6.1 support it, it should technically be feasible even before Dev15, right?
(2) What does this mean for the (few) libraries that painstakingly ported to CoreCLR 1.0? Do they have to add a build for .NET Standard 2.0 to be consumable by a .NET Standard 2.0 project? Or is that addressed by the compatibility shim mentioned above? I assume that’s a way of convincing the compiler / runtime that a library is in fact .NET Standard compliant when it doesn’t advertise it?
At this point .NET Standard 2.0 isn’t available yet. We’re working on providing bits and we’ll announce on this blog when a preview is available. Generally speaking, libraries compiled against .NET Core today will work just fine against .NET Standard 2.0. The only caveat is the very small number of APIs that were part of .NET Standard 1.5 and 1.6 that we removed: https://github.com/dotnet/standard/blob/master/docs/netstandard-20/netstandard-20-removals.md.
I’d like to chat more with you about supporting .NET Standard 2.0. Do you mind contacting me via email? It’s immol at microsoft dot com. Thanks!
404 file not found on netstandard-20-removals.md linked above
I created a .NET Core class library that uses netstandard1.6. Then I started to add references like EF Core 1.0 (among other libraries) and it doesn’t work; however, when I change it to netcoreapp1.0 it works. Based on this article, will netstandard2.0 be able to support EF Core and the other libraries?
Not all libraries are supporting .NET Standard yet. It looks like EF is compiled specifically for .NET Core and .NET Framework which is why you’re able to use it from netcoreapp but not .NET Standard. We’re working with library authors to make sure they can target .NET Standard 2.0, which includes EF.
It would be nice to have System.Data classes in standard.
XML documents, schemas, (de)serializers, etc. are indeed useful in their own way. But so far there are no other classes, besides DataSet/DataTable, for easily organizing and manipulating in-memory data and objects, or for data save/restore and exchange tasks. In my opinion, System.Data is a fairly fundamental namespace in the .NET Framework. Otherwise we’ll have to use serializable collections (of collections (of generics (…))) to replace data rows, tables, and relations. Having at the very least the DataTable class intact would save a lot of time.
+1! – still much love for ADO.Net, DataTable out here =)
Hi.
I’m trying hard to understand this but am really struggling, in particular with the teasers about AppDomain support.
You say “If you want to look at the specific API set of .NET Standard 2.0, you can take a look at the .NET Standard GitHub repository.” But when I go there it’s just a warren of files and it’s impossible to find any specific information. Like all Git stuff, it is designed around those contributing, not those trying to use it as a source of information. I can’t tell you how much I hate it as a ‘source’ of documentation.
Can I use CreateInstance? I do not know. Will I be able to use CreateInstance? I don’t know. How can I possibly decide to use .NET Core/Standard in our enterprise app? I don’t know.
Matt
Right now, the only way is to check out this folder on GitHub: https://github.com/dotnet/standard/tree/master/netstandard/ref. I plan on making it possible to browse .NET Standard via https://apisof.net.
Thanks, I had indeed found that but I’ve got no idea what, for example,
‘public static object CreateInstance(System.Type type) { throw null; }’
means.
Is that just a placeholder implying it is/will be supported, or does it mean that if I call it it will blow?
thx
Matt
The ref folder contains what we call reference assemblies. These are assemblies that are only used to compile against (because .NET Standard itself isn’t runnable, as it’s just a specification). However, C# doesn’t have such a concept, so in order to compile these assemblies we simply use fake bodies.
> Is that just a placeholder implying it is/will be supported, or does it mean that if I call it it will blow?
The body is meaningless in .NET Standard. It just means the API is part of the .NET Standard specification. A given platform can implement the API or throw PlatformNotSupportedException. AFAIK all platforms will support CreateInstance(Type) and not throw. Of course, for platforms that don’t use a JIT (such as UWP/iOS) you need to have the metadata at runtime in order to create the type, but it will generally work as expected.
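So, concretely, a minimal sketch of the consuming side looks like this: you compile against the reference assembly’s API surface, and at run time the platform’s real implementation runs, never the placeholder body.

using System;

public class Widget
{
    public override string ToString() => "Widget created via reflection";
}

public static class Factory
{
    // Compiles against the .NET Standard reference assembly; at run time the
    // platform's own implementation of Activator.CreateInstance executes, not
    // the "throw null" placeholder you saw in the ref folder.
    public static object Create(Type type) => Activator.CreateInstance(type);
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(Factory.Create(typeof(Widget))); // Widget created via reflection
    }
}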
Excellent, thanks
Matt
@ImmoLandwerth
> Because Phone Silverlight is newer and thus has more APIs than the regular Silverlight.
This appears to be incorrect. The latest Phone Silverlight was based on Silverlight 4 and came out in 2010. Silverlight 5 came out in 2011. Not to mention, it is MUCH more likely that enterprises have production apps based on desktop Silverlight rather than Phone Silverlight.
Microsoft started supporting other platforms too late, and is now in difficulty.
.NET should have had a common base (a standard) across all platforms for many years already.
Today there is a need for a single language for the user interface.
There are two possibilities:
1) XAML must become usable on mobile and the web as well;
2) a new version of HTML (“HTML 6”) that incorporates XAML’s interface features, including animation, 3D, etc.
The fact that you have to use a different technology for each environment has meant that Microsoft has an insignificant market share in mobile and web servers.
If resources are scarce and development priorities have to be set, it is only because Microsoft is running very late relative to how the technology is evolving instead of having anticipated it.
Microsoft should be able to invest more resources and hire many more programmers.
Competition (Apple and Google) will not wait.
Since this replaces Portable Class Libraries, will tools like PEX be updated? PEX was potentially a really good tool, but the restrictions of PCLs limited it. Will .NET Standard lift some of those restrictions?
There’s one (at least ;)) thing I don’t understand here. Since UWP is one of the application models of .NET Core, why is it shown separately on the Platform to Standard mapping table?
Good catch. Isn’t .NET Core cross platform? Another idea rotting to death over at the UWP board:
https://wpdev.uservoice.com/forums/110705-dev-platform/suggestions/7989744-make-universal-windows-platform-open-source-and-cr
As bad as the .NET group is at getting around to Xaml, they are nowhere near as bad as the UWP group, which is simply killing Microsoft with their backwards, dated thinking and operations. That entire organization needs a clean sweep!!!
If you’re currently using VB.NET, you’re screwed, and they broke your ability to support previous builds. VB.NET can no longer target .NET Core and .NET 4.6.1.
Also, VB.NET will not let you target .NET Standard at all.
This is complete BULLSHIT!
I pay good money for my MSDN subscription, looks like I have no reason to renew now.
Hi! Thank you very much for informing and for answering the questions!
There is a little mistake:
In the “What’s new in .NET Standard 2.0?” table, 4.6.1 is mentioned twice:
.NET Framework → 4.5 4.5.1 4.6 4.6.1 4.6.2 vNext 4.6.1
That is intentional, and you get it now, right? It is the breaking change we were discussing above.
Release 4.6.2 as a recommended/important update for Win7/8.1 systems and use 4.6.2 for Standard 2.0, so that older systems and Win10 1607 use the same .NET versions.
If .NET 4.6.1, listed under netstandard1.4, did not support the APIs that netcoreapp1.0 supports, how will it, under netstandard2.0, support APIs that the newer netcoreapp1.1 supports?
Also, when you said “As you can see, it’s quite clear that the vast majority of class libraries on NuGet are targeting .NET Framework. However, we know that a large number of these libraries are only >>>using APIs we’ll expose in .NET Standard 2.0.<<<”, doesn’t that mean that .NET Framework already has a higher netstandard than what it was assigned before? Either you move netcoreapp1.0 back to a lower netstandard or you move .NET Framework to a higher netstandard; but having .NET Framework 4.6.1 and netcoreapp1.1 be compatible while netcoreapp1.0 is higher than .NET Framework 4.6.1 is extremely confusing!!!
Does anybody know if WF is going to end up in the Standard or not?
You probably have a better chance at winning every state lottery on the same day.
Read above. They are actively trying to shutter/censor the more mature, valuable, and superior Microsoft technologies in favor of the weaker, error-prone web technologies that possess a fraction of functionality and usefulness. Truly a sad day in Microsoft history. It used to mean something to be a Microsoft developer. They once took pride in their technology. No longer is this the case.
Have a look at CoreWF ( https://github.com/dmetzgar/corewf ). WF is mostly ported. The team still needs to evaluate, with .NET Standard 2.0, whether DynamicActivity can be ported. Please add your voice in the GitHub issues.
Jesus, man, are you Sauron, the Lord of the Rings? Is it possible to have just one framework, with libraries on each device for that framework version? But now we have dozens of “frameworks” for the same version. Computers were made to do the job instead of humans, so why do we have to deal with all those libraries? You don’t have that problem on Java, even though C# is much more powerful than Java, I realize that.
All these names are confusing… I thought “.NET Core” was the “Standard” (the minimal set (i.e. core) of APIs that all platforms should support).
I had never heard of “.NET Standard” until today and it is already at version 2.0…
I think it’s time to stop all .NET development and go back to C++ only.
Microsoft, this means you.
Microsoft is correcting its “billion dollar mistake”.
When .NET 1.0 was released, I thought it was cross-platform, because it was another JRE-like framework. I was so disappointed: how can a VM not be cross-platform? That’s what Mono came out for. But Microsoft kept piling Windows-specific APIs into the .NET Framework, and the Mono team couldn’t catch up at all.
I hope books about this will be released soon, at least in electronic form. Otherwise it’s not possible to keep up.
I am porting a number of legacy libraries that make extensive use of the ICloneable interface. Will ICloneable also be part of .NET Standard 2.0? If so, in which NuGet package can I expect to find it?
Excellent news, finally, a good vision; encapsulate what varies.
I wish Windows developers would stop the whole agile deployment of bug-ware and Microsoft would reclaim its past glory. I am working with Java on Linux these days and it’s not much fun. I know it could be JavaScript, which is MUCH worse, but I miss the happy C# days on Windows 7.
This smells of Bill… came back for a week to fix things again?
Are there any .NET Standard 2.0 pre-release NuGet packages available anywhere for those who want to test the applicability right now?
Does the article contradict itself, or am I missing something?
“Another key aspect is that the API availability in .NET Standard is very predictable: higher version equals more APIs. ”
“In order to allow .NET Framework 4.6.1 to support .NET Standard 2.0, we had to remove all the APIs from .NET Standard that were introduced in .NET Standard 1.5 and 1.6.”
Such a mess 🙁
I still don’t get it: so all this effort is invested to make .NET cross-platform, and all we can actually run cross-platform is the back-end part of a web service and console applications. It is clear MSFT needs to do some groundwork first, but we need a reliable commitment that the client will follow some not-too-distant day. Why would someone use a language/framework exclusively for the back-end?
All the development gains on the server side are more than eaten up by the complete lack of any way to share code and developer knowledge between front end and back end. The clock is ticking, and MSFT is losing projects to other technologies (read: Node.js on the server) every day. And all those projects won’t come back. So please, Mike-EEE, continue to nag the hell out of them on all channels 😉 until they give a firm roadmap and, for this time, stick to it.
By the way: MSFT just finished ditching LightSwitch HTML, two years after selling it to Silverlight devs as a replacement at Connect(); 2014. Now they are selling PowerApps as a replacement for LightSwitch HTML. Oops, they did it again…
LOL @Joerg! I was about to post a link to the vote as a reply until you mentioned my name, so you clearly have seen it! Now we just need MSFT to actually acknowledge this and address it at a conference, and of course we have Connect(); 2016 in about a month. I really (REALLY) hope Connect() 2015 was the last time we see anyone from MSFT leadership stand on stage and seriously suggest pairing a .NET backend with a JavaScript front end. Talk about a very embarrassing moment. 😛 Right up there with Sinofsky up on stage at Mix 2011 admitting that he worked on Clippy and then proceeding to try to impress a room full of developers with moving sprites in a web page (hint: the room was dead silent, as you would expect).
You know what? I’ll go ahead and post the link anyways for those who scroll all the way to the bottom, skipping all the good parts. 😉 Getting very close to 5,000 votes after 1 year of existence:
https://visualstudio.uservoice.com/forums/121579-visual-studio-2015/suggestions/10027638-create-a-ubiquitous-net-client-application-develo
BTW, I am in agreement with you on PowerApps. What we need is a mix between UWP and Xamarin, with WPF’s Xaml model. <3
I cannot understand why Microsoft doesn’t provide cross-platform desktop solutions for developers. I think the cross-platform solutions given out by MS are not complete. I am very disappointed by that.
So I think that Java is the general trend of software development and the software developers’ future.
From now on, I begin to like Java more and more. I have decided to use Java as the major development language to replace .NET!
Good luck, .NET!