
Wednesday, March 23, 2016

Performance without Compromise

Last week we hosted our latest Netflix JavaScript Talks event at our headquarters in Los Gatos, CA. We gave two talks about our unflinching stance on performance. In our first talk, Steve McGuire shared how we achieved a completely declarative, React-based architecture that’s fast on the devices in your living room. He talked about our architecture principles (no refs, no observation, no mixins or inheritance, immutable state, and top-down rendering) and the techniques we used to hit our tough performance targets. In our second talk, Ben Lesh explained what RxJS is, and why we use and love it. He shared the motivations behind a new version of RxJS and how we built it from the ground up with an eye on performance and debugging.

React.js for TV UIs



RxJS Version 5


Videos from our past talks can always be found on our Netflix UI Engineering channel on YouTube. If you’re interested in being notified of future events, just sign up on our notification list.

By Kim Trott

Thursday, December 3, 2015

Debugging Node.js in Production

By Kim Trott, Yunong Xiao

We recently hosted our latest JavaScript Talks event on our new campus at Netflix headquarters in Los Gatos, California. Yunong Xiao, senior software engineer on our Node.js platform, presented on debugging Node.js in production. Yunong showed hands-on techniques that use the scientific method to root-cause and solve runtime performance issues, crashes, errors, and memory leaks.





We’ve shared some useful links from our talk.
Videos from our past talks can always be found on our Netflix UI Engineering channel on YouTube. If you’re interested in being notified of future events, just sign up on our notification list.

Monday, August 17, 2015

Netflix Releases Falcor Developer Preview

by Jafar Husain, Paul Taylor and Michael Paulson

Developers strive to create the illusion that all of their application’s data is sitting right there on the user’s device just waiting to be displayed. To make that experience a reality, data must be efficiently retrieved from the network and intelligently cached on the client.

That’s why Netflix created Falcor, a JavaScript library for efficient data fetching. Falcor powers Netflix’s mobile, desktop and TV applications.

Falcor lets you represent all your remote data sources as a single domain model via JSON Graph. Falcor makes it easy to access as much or as little of your model as you want, when you want it. You retrieve your data using familiar JavaScript operations like get, set, and call. If you know your data, you know your API.

You code the same way no matter where the data is, whether in memory on the client or over the network on the server. Falcor keeps your data in a single, coherent cache and manages stale data and cache pruning for you. Falcor automatically traverses references in your graph and makes requests as needed. It transparently handles all network communications, opportunistically batching and de-duping requests.
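
To make that concrete, here’s a minimal sketch of the kind of client code Falcor enables (the model shape and paths here are illustrative, not the actual Netflix API):

var falcor = require('falcor');

// Build a Model over an in-memory JSON Graph cache for illustration;
// in a real application the model would typically be backed by a
// remote data source so cache misses go over the network.
var model = new falcor.Model({
  cache: {
    titlesById: {
      523: { name: 'House of Cards', rating: 4.5 }
    }
  }
});

// Retrieve data with familiar path syntax; Falcor resolves the path
// from its cache or the network as needed.
model.get('titlesById[523].name').then(function (response) {
  console.log(response.json.titlesById[523].name);
});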

Today, Netflix is unveiling a developer preview of Falcor.

Falcor is still under active development and we’ll be unveiling a roadmap soon. This developer preview includes a Node version of our Falcor Router not yet in production use.

We’re excited to start developing in the open and share this library with the community, and eager for your feedback and contributions.

For ongoing updates, follow Falcor on Twitter!

Wednesday, August 5, 2015

Making Netflix.com Faster

by Kristofer Baxter

Simply put, performance matters. We know members want to immediately start browsing or watching their favorite content and have found that faster startup leads to more satisfying usage. So, when building the long-awaited update to netflix.com, the Website UI Engineering team made startup performance a first-tier priority.

This effort netted a 70% reduction in startup time and focused on three key areas:

  1. Server and Client Rendering
  2. Universal JavaScript
  3. JavaScript Payload Reductions

Server and Client Rendering

The netflix.com legacy website stack had a hard separation between server markup and client enhancement. This was primarily due to the different programming languages used in each part of our application. On the server, there was Java with Tomcat, Struts and Tiles. On the browser client, we enhanced server-generated markup with JavaScript, primarily via jQuery.

This separation led to undesirable results in our startup time. Every time a visitor came to any page on netflix.com, our Java tier would generate the majority of the response needed for the entire page's lifetime and deliver it as HTML markup. Often, users would be waiting for the generation of markup for large parts of the page they would never visit.

Our new architecture renders only a small amount of the page's markup, bootstrapping the client view. We can easily change the amount of the total view the server generates, making it easy to see the positive or negative impact. The server requires less data to deliver a response and spends less time converting data into DOM elements. Once the client JavaScript has taken over, it can retrieve all additional data for the remainder of the current and future views of a session on demand. The large wins here were the reduction of processing time in the server, and the consolidation of the rendering into one language.

We find the flexibility afforded by server and client rendering allows us to make intelligent choices of what to request and render in the server and the client, leading to a faster startup and a smoother transition between views.

Universal JavaScript

In order to support identical rendering on the client and server, we needed to rethink our rendering pipeline. Our previous architecture's separation between the generation of markup on the server and the enhancement of it on the client had to be dropped.

Three large pain points shaped our new Node.js architecture:

  1. Context switching between languages was not ideal.
  2. Enhancement of markup required too much direct coupling between server-only code generating markup and the client-only code enhancing it.
  3. We’d rather generate all our markup using the same API.

There are many solutions to this problem that don't require Universal JavaScript, but we found this lesson was most appropriate: when there are two copies of the same thing, it's fairly easy for one to drift slightly from the other. Using Universal JavaScript means the rendering logic is simply passed down to the client.

Node.js and React.js are natural fits for this style of application. With Node.js and React.js, we can render from the server and subsequently render changes entirely on the client after the initial markup and React.js components have been transmitted to the browser. This flexibility allows the application to render the exact same output independent of the location of the rendering. The hard separation is no longer present, and it's far less likely for the server and client to diverge from one another.
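
As a simplified sketch of the idea (module and component names are hypothetical, not our production code), a single component module can be required on both sides:

// MovieRow.js -- one shared module, used by both the Node.js server
// and the browser client (React 0.12-era API).
var React = require('react');

var MovieRow = React.createClass({
  render: function () {
    return React.DOM.div({ className: 'movie-row' }, this.props.title);
  }
});

module.exports = MovieRow;

// Server: generate the bootstrap markup.
//   var html = React.renderToString(React.createElement(MovieRow, { title: 'Trending Now' }));
// Client: render into the same container; React attaches to the
// server-generated markup instead of rebuilding it.
//   React.render(React.createElement(MovieRow, { title: 'Trending Now' }), mountNode);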

Without shared rendering logic we couldn't have realized the potential of rendering only what was necessary on startup and everything else as data became available.

Reduce JavaScript Payload Impact

Building rich interactive experiences on the web often translates into a large JavaScript payload for users. In our new architecture, we placed significant emphasis on pruning large dependencies we could knowingly replace with smaller modules and on delivering only the JavaScript applicable to the current visitor.

Many of the large dependencies we relied on in the legacy architecture didn't apply in the new one. We've replaced these dependencies in favor of newer, more efficient libraries. Replacing these libraries resulted in a much smaller JavaScript payload, meaning members need less JavaScript to start browsing. We know there is significant work remaining here, and we're actively working to trim our JavaScript payload down further.

Time To Interactive

In order to test and understand the impact of our choices, we monitor a metric we call time to interactive (tti).

The amount of time between the first known startup of the application platform and when the UI is interactive, regardless of view. Note that this does not require that the UI is done loading; it is the first point at which the customer can interact with the UI using an input device.

For applications running inside a web browser, this data is easily retrievable from the Navigation Timing API (where supported).
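
As a rough sketch of how such a measurement can be taken in the browser (what counts as "interactive" is up to the application, and the logging endpoint here is hypothetical):

// Record tti relative to the Navigation Timing API's navigationStart,
// at the moment the application has attached its input handlers.
function reportTimeToInteractive() {
  var timing = window.performance && window.performance.timing;
  if (!timing) { return; } // Navigation Timing API not supported

  var tti = Date.now() - timing.navigationStart;
  new Image().src = '/log?tti=' + tti; // hypothetical beacon endpoint
}

// Invoke once the UI can respond to an input device.
reportTimeToInteractive();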

Work is Ongoing

We firmly believe high performance is not an optional engineering goal – it's a requirement for creating great user experiences. We have made significant strides in startup performance and are committed to challenging our industry's best practices in the pursuit of a better experience for our members.

Over the coming months we'll be investigating Service Workers, asm.js, WebAssembly, and other emerging web standards to see if we can leverage them for a more performant website experience. If you're interested in helping create and shape the next generation of performant web user experiences, apply here.

Wednesday, January 28, 2015

Netflix Likes React


We are making big changes in the way we build the Netflix experience with Facebook’s React library. Today, we will share our thoughts on what makes React so compelling and how it is evolving our approach to UI development.

At the beginning of last year, Netflix UI engineers embarked on several ambitious projects to dramatically transform the user experience on our desktop and mobile platforms. Given a UI redesign of a scale similar to that undergone by TVs and game consoles, it was essential for us to re-evaluate our existing UI technology stack and to determine whether to explore new solutions. Do we have the right building blocks to create best-in-class single-page web applications? And what specific problems are we looking to solve?
Much of our existing front-end infrastructure consists of hand-rolled components optimized for the current website and iOS application. Our decision to adopt React was influenced by a number of factors, most notably: 1) startup speed, 2) runtime performance, and 3) modularity.

Startup Speed

We want to reduce the initial load time needed to provide Netflix members with a much more seamless, dynamic way to browse and watch individualized content. However, we find that the cost to deliver and render the UI past login can be significant, especially on our mobile platforms where there is a lot more variability in network conditions.

In addition to the time required to bootstrap our single-page application (i.e., downloading and processing the initial markup, scripts, and stylesheets), we need to fetch data, including movie and show recommendations, to create a personalized experience. While network latency tends to be our biggest bottleneck, another major factor affecting startup performance is the creation of DOM elements based on the parsed JSON payload containing the recommendations. Is there a way to minimize the network requests and processing time needed to render the home screen? We are looking for a hybrid solution that delivers above-the-fold static markup on first load via server-side rendering, thereby reducing the tax incurred by the aforementioned startup operations, while still enabling dynamic elements in the UI through client-side scripting.

Runtime Performance

To build our most visually-rich cinematic Netflix experience to date for the website and iOS platforms, efficient UI rendering is critical. While there are fewer hardware constraints on desktops (compared to TVs and set-top boxes), expensive operations can still compromise UI responsiveness. In particular, DOM manipulations that result in reflows and repaints are especially detrimental to user experience.

Modularity

Our front-end infrastructure must support the numerous A/B tests we run: we need to rapidly build out new features and designs whose code must co-exist with the control experience (against which the new experiences are tested). For example, an A/B test that compares 9 different design variations in the UI could mean maintaining code for 10 views for the duration of the test. Upon completion of the test, it should be easy for us to productize the experience that performed best for our members and clean up the code for the 9 other views that did not.

Advantages of React

React stood out because its defining features not only satisfied the criteria set forth above, but offered other advantages, such as being relatively easy to grasp and the ability to opt out – for example, to handle custom user interactions and rendering code. We were able to leverage the following features to improve our application's initial load times, runtime performance, and overall scalability: 1) isomorphic JavaScript, 2) virtual DOM rendering, and 3) support for compositional design patterns.

Isomorphic JavaScript

React enabled us to build JavaScript UI code that can be executed in both server (e.g. Node.js) and client contexts. To improve our startup times, we built a hybrid application where the initial markup is rendered server-side and the resulting UI elements are subsequently manipulated as in a single-page application. This was possible with React because it can render without a live DOM, e.g. via React.renderToString or React.renderToStaticMarkup. Furthermore, the UI code written using the React library that is responsible for generating the markup could be shared with the client to handle cases where re-rendering was necessary.
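
For instance, a minimal sketch using the React 0.12-era APIs named above (the component itself is hypothetical):

var React = require('react');

var Billboard = React.createClass({
  render: function () {
    return React.DOM.h1(null, 'Welcome back, ' + this.props.name);
  }
});

var element = React.createElement(Billboard, { name: 'Jordanna' });

// Markup with React's bookkeeping attributes, so the client can later
// attach to it and re-render efficiently.
var html = React.renderToString(element);

// Plain markup with no React metadata, for content that will never be
// re-rendered on the client.
var staticHtml = React.renderToStaticMarkup(element);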

Virtual DOM

To reduce the penalties incurred by live DOM manipulation, React applies updates to a virtual DOM in pure JavaScript and then determines the minimal set of DOM operations necessary via a diff algorithm. The diffing of virtual DOM trees is fast relative to actual DOM modifications, especially using today’s increasingly efficient JavaScript engines such as WebKit’s Nitro with JIT compilation. Furthermore, we can eliminate the need for traditional data binding, which has its own performance implications and scalability challenges.

React Components and Mixins

React provides powerful Component and Mixin APIs that we relied on heavily to create reusable views, share common functionality, and establish patterns that facilitate feature extension. When A/B testing different designs, we can implement the views as separate React subcomponents that get rendered by a parent component depending on the user's allocation in the test. Similarly, differences in behavioral logic can be abstracted into React mixins. Although it is possible to achieve modularity with a classical inheritance pattern, frequent changes in superclass interfaces to support new features affect existing subclasses and increase code fragility. React's compositional pattern is ideal for the overall maintenance and scalability of our front-end codebase, as it isolates much of the A/B test code.
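
As a simplified sketch of that pattern (the test cell names and allocation plumbing are hypothetical):

var React = require('react');

// Hypothetical control and variant views for an A/B test.
var ControlBillboard = React.createClass({
  render: function () { return React.DOM.div(null, 'control design'); }
});
var VariantBillboard = React.createClass({
  render: function () { return React.DOM.div(null, 'variant design'); }
});

// The parent selects the subcomponent from the member's allocation,
// keeping test-specific code isolated and easy to remove after the test.
var Billboard = React.createClass({
  render: function () {
    var View = this.props.testCell === 'variant'
      ? VariantBillboard
      : ControlBillboard;
    return React.createElement(View, this.props);
  }
});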

React has exceeded our requirements and enabled us to build a tremendous foundation on which to innovate the Netflix experience. Stay tuned in the coming months, as we will dive more deeply into how we are using React to transform traditional UI development!

By Jordanna Kwok


Monday, December 8, 2014

Version 7: The Evolution of JavaScript

By Jafar Husain


We held our last Netflix JavaScript Talks event of 2014 on Dec. 2nd, where we discussed some of the interesting features already available for use in JavaScript 6, and where things are headed with JavaScript 7. As a member of TC39, the ECMAScript standards committee, Netflix has been working with other committee members to explore powerful new features like Object.observe, async functions, and async generators. Many of these features have already started appearing in modern browsers, and it's not too early to start playing with them.
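
As a small taste, an async function lets asynchronous code read sequentially. Here's a sketch (fetchJSON is a hypothetical Promise-returning helper):

async function getRecommendations(userId) {
  // Each await suspends the function until the Promise settles,
  // without blocking the event loop.
  var user = await fetchJSON('/users/' + userId);
  var recs = await fetchJSON('/recommendations/' + user.tasteProfileId);
  return recs;
}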

The video from the event is now available to watch below or on our Netflix UI Engineering channel on YouTube along with other talks from past events.


It’s been a lot of fun hosting our Netflix JavaScript Talks series this past year and we have several exciting talks planned for 2015. If you’re interested in being notified of our future events, just sign up on our notification list. We’ll also be sharing some interesting work we’re doing with React at the React.js Conf at the end of January.



Wednesday, November 19, 2014

Node.js in Flames

We’ve been busy building our next-generation Netflix.com web application using Node.js. You can learn more about our approach from the presentation we delivered at NodeConf.eu a few months ago. Today, I want to share some recent learnings from performance tuning this new application stack.

We were first clued in to a possible issue when we noticed that request latencies to our Node.js application would increase progressively with time. The app was also burning more CPU than expected, and this correlated closely with the higher latencies. While using rolling reboots as a temporary workaround, we raced to find the root cause using new performance analysis tools and techniques in our Linux EC2 environment.

Flames Rising

We noticed that request latencies to our Node.js application would increase progressively with time. Specifically, some of our endpoints’ latencies would start at 1 ms and increase by 10 ms every hour. We also saw a correlated increase in CPU usage.


This graph plots request latency in ms for each region against time. Each color corresponds to a different AWS AZ. You can see latencies steadily increase by 10 ms an hour and peak at around 60 ms before the instances are rebooted.

Dousing the Fire

Initially we hypothesized that something faulty, such as a memory leak in our own request handlers, was causing the rising latencies. We tested this assertion by load-testing the app in isolation, adding metrics that measured both the latency of our request handlers alone and the total latency of a request, as well as increasing the Node.js heap size to 32 GB.

We saw that our request handlers’ latencies stayed constant across the lifetime of the process at 1 ms. We also saw that the process’s heap size stayed fairly constant at around 1.2 GB. However, overall request latencies and CPU usage continued to rise. This absolved our own handlers of blame and pointed to problems deeper in the stack.

Something was taking an additional 60 ms to service the request. What we needed was a way to profile the application’s CPU usage and visualize where we were spending most of our time on CPU. Enter CPU flame graphs and Linux perf_events.

For those unfamiliar with flame graphs, it’s best to read Brendan Gregg’s excellent article explaining what they are -- but here’s a quick summary (straight from the article).
  • Each box represents a function in the stack (a "stack frame").
  • The y-axis shows stack depth (number of frames on the stack). The top box shows the function that was on-CPU. Everything beneath that is ancestry. The function beneath a function is its parent, just like the stack traces shown earlier.
  • The x-axis spans the sample population. It does not show the passing of time from left to right, as most graphs do. The left to right ordering has no meaning (it's sorted alphabetically).
  • The width of the box shows the total time it was on-CPU or part of an ancestry that was on-CPU (based on sample count). Wider box functions may be slower than narrow box functions, or, they may simply be called more often. The call count is not shown (or known via sampling).
  • The sample count can exceed elapsed time if multiple threads were running and sampled concurrently.
  • The colors aren't significant, and are picked at random to be warm colors. It's called "flame graph" as it's showing what is hot on-CPU. And, it's interactive: mouse over the SVGs to reveal details.
Previously, Node.js flame graphs had only been used on systems with DTrace, using Dave Pacheco’s Node.js jstack() support. However, the Google V8 team has more recently added perf_events support to V8, which allows similar stack profiling of JavaScript symbols on Linux. Brendan has written instructions for using this new support, which arrived in Node.js version 0.11.13, to create Node.js flame graphs on Linux.
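
The workflow from those instructions looks roughly like this (the flag requires Node.js v0.11.13+; stackcollapse-perf.pl and flamegraph.pl come from Brendan's FlameGraph repository, and paths will vary):

# Start node so V8 emits a map of JavaScript symbols for perf.
node --perf-basic-prof app.js &

# Sample on-CPU stacks at 99 Hz for 30 seconds.
perf record -F 99 -p `pgrep -n node` -g -- sleep 30

# Fold the stacks and render the interactive SVG flame graph.
perf script > out.nodestacks
stackcollapse-perf.pl < out.nodestacks | flamegraph.pl > node-flame.svg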



Here’s the original SVG of the flame graph. Immediately, we see incredibly tall stacks in the application (y-axis). We also see we’re spending quite a lot of time in those stacks (x-axis). On closer inspection, it seems the stack frames are full of references to Express.js’s router.handle and router.handle.next functions. The Express.js source code reveals a couple of interesting tidbits [1].
  • Route handlers for all endpoints are stored in one global array.
  • Express.js recursively iterates through and invokes all handlers until it finds the right route handler.
A global array is not the ideal data structure for this use case. It’s unclear why Express.js chose not to use a constant-time data structure like a map to store its handlers. Each request requires an expensive O(n) lookup in the route array to find its route handler. Compounding matters, the array is traversed recursively. This explains why we saw such tall stacks in the flame graphs. Interestingly, Express.js even allows you to set many identical route handlers for a route. You can unwittingly set up a request chain like so.
[a, b, c, c, c, c, d, e, f, g, h]
Requests for route c would terminate at the first occurrence of the c handler (position 2 in the array). However, requests for d would only terminate at position 6 in the array, having needlessly spent time spinning through a, b and multiple instances of c. We verified this by running the following vanilla Express.js app.
var express = require('express');
var app = express();
app.get('/foo', function (req, res) {
   res.send('hi');
}); 
// add a second foo route handler
app.get('/foo', function (req, res) {
   res.send('hi2');
});
console.log('stack', app._router.stack);
app.listen(3000);
Running this Express.js app returns these route handlers.
stack [ { keys: [], regexp: /^\/?(?=\/|$)/i, handle: [Function: query] },
 { keys: [],
   regexp: /^\/?(?=\/|$)/i,
   handle: [Function: expressInit] },
 { keys: [],
   regexp: /^\/foo\/?$/i,
   handle: [Function],
   route: { path: '/foo', stack: [Object], methods: [Object] } },
 { keys: [],
   regexp: /^\/foo\/?$/i,
   handle: [Function],
   route: { path: '/foo', stack: [Object], methods: [Object] } } ]
Notice there are two identical route handlers for /foo. It would have been nice for Express.js to throw an error whenever there’s more than one route handler chain for a route.

At this point the leading hypothesis was that the handler array was increasing in size with time, thus leading to the increase in latencies as each handler was invoked. Most likely we were leaking handlers somewhere in our code, possibly due to the duplicate handler issue. We added additional logging that periodically dumped out the route handler array, and noticed the array was growing by 10 elements every hour. These handlers happened to be identical to each other, mirroring the example from above.
[...
{ handle: [Function: serveStatic],
   name: 'serveStatic',
   params: undefined,
   path: undefined,
   keys: [],
   regexp: { /^\/?(?=\/|$)/i fast_slash: true },
   route: undefined },
 { handle: [Function: serveStatic],
   name: 'serveStatic',
   params: undefined,
   path: undefined,
   keys: [],
   regexp: { /^\/?(?=\/|$)/i fast_slash: true },
   route: undefined },
 { handle: [Function: serveStatic],
   name: 'serveStatic',
   params: undefined,
   path: undefined,
   keys: [],
   regexp: { /^\/?(?=\/|$)/i fast_slash: true },
   route: undefined },
...
]
Something was adding the same Express.js-provided static route handler 10 times an hour. Further benchmarking revealed that merely iterating through each of these handler instances cost about 1 ms of CPU time. This correlates with the latency problems we’d seen, where our response latencies increased by 10 ms every hour.

This turned out to be caused by a periodic (10/hour) function in our code whose main purpose was to refresh our route handlers from an external source. This was implemented by deleting old handlers and adding new ones to the array. Unfortunately, it was also inadvertently adding a static route handler with the same path each time it ran. Since Express.js allows multiple route handlers for identical paths, these duplicate handlers were all added to the array. Making matters worse, they were added before the rest of the API handlers, which meant they all had to be invoked before we could service any requests to our service.
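
A simplified reconstruction of the bug and the fix (the refresh function and paths are hypothetical, not our actual code):

var express = require('express');
var app = express();

// Ran 10 times an hour to refresh route handlers from an external source.
function refreshRoutes() {
  // ...delete old API handlers and add the new ones...

  // The bug: re-registering static middleware on every refresh. Express.js
  // appends it to the global handler array each time, ahead of the API
  // routes, so every request walks the growing pile of duplicates.
  app.use(express.static(__dirname + '/public'));
}

// The fix: register static middleware exactly once, at startup, and have
// refreshRoutes() touch only the handlers that actually change.
app.use(express.static(__dirname + '/public'));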

This fully explains why our request latencies were increasing by 10 ms every hour. Indeed, when we fixed our code so that it stopped adding duplicate route handlers, our latency and CPU usage increases went away.

Here we see our latencies drop down to 1 ms and remain there after we deployed our fix.

When the Smoke Cleared

What did we learn from this harrowing experience? First, we need to fully understand our dependencies before putting them into production. We made incorrect assumptions about the Express.js API without digging further into its code base. As a result, our misuse of the Express.js API was the ultimate root cause of our performance issue.

Second, given a performance problem, observability is of the utmost importance. Flame graphs gave us tremendous insight into where our app was spending most of its time on CPU. I can’t imagine how we would have solved this problem without being able to sample Node.js stacks and visualize them with flame graphs.

In our bid to improve observability even further, we are migrating to Restify, which will give us much better insights, visibility, and control of our applications [2]. This is beyond the scope of this article, so look out for future articles on how we’re leveraging Node.js at Netflix.

Interested in helping us solve problems like this? The Website UI team is hiring engineers to work on our Node.js stack.

Author: Yunong Xiao @yunongx

Footnotes
[1] Specifically, this snippet of code. Notice next() is invoked recursively to iterate through the global route handler array named stack.
[2] Restify provides many mechanisms to get visibility into your application, from DTrace support to integration with the node-bunyan logging framework.

Friday, October 31, 2014

Message Security Layer: A Modern Take on Securing Communication

Netflix serves audio and video to millions of devices and subscribers across the globe. Each device has its own unique hardware and software, and differing security properties and capabilities. The communication between these devices and our servers must be secured to protect both our subscribers and our service.
When we first launched the Netflix streaming service, we used a combination of HTTPS and a homegrown security mechanism called NTBA to provide that security. Over time, however, this combination started exhibiting growing pains. With the advent of HTML5 and the Media Source Extensions and Encrypted Media Extensions, we needed something new that would be compatible with that platform. We took this as an opportunity to address many of the shortcomings of the earlier technology. The Message Security Layer (MSL) was born from these dual concerns.

Problems with HTTPS

One of the largest problems with HTTPS is its public key infrastructure (PKI). There were a number of short-lived incidents where a renewed server certificate caused outages. We had no good way of handling revocation: our attempts to leverage CRL and OCSP technologies resulted in a complex set of workarounds to deal with infrastructure downtimes and configuration mistakes, which ultimately led to a worse user experience and a brittle security mechanism with little insight into errors. Recent security breaches at certificate authorities and the issuance of intermediate certificate authorities mean that placing trust in one actor requires placing trust in a whole chain of actors not necessarily deserving of trust.
Another significant issue with HTTPS is the requirement for accurate time. The X.509 certificates used by HTTPS contain two timestamps, and if the validating software thinks the current time is outside that window, the connection is rejected. The vast majority of devices do not know the correct time and have no way of securely learning it.
Being tied to SSL and TLS, HTTPS also suffers from fundamental security issues unknown at the time of their design. Examples include padding attacks and the use of MAC-then-Encrypt, which is less secure than Encrypt-then-MAC.
There are other, less obvious issues with HTTPS. Establishing a connection requires extra network round trips and, depending on the implementation, may result in multiple requests to supporting infrastructure such as CRL distribution points and OCSP responders in order to validate a certificate chain. As we continually improved application responsiveness and playback startup time, this overhead became significant, particularly in situations with less reliable network connectivity such as Wi-Fi or mobile networks.
Even ignoring these issues, integrating new features and behaviors into HTTPS would have been extremely difficult. The specification is fixed and mandates certain behaviors. Leveraging specific device security features would require hacking the SSL/TLS stack in unintended ways: imagine generating some form of client certificate that used a dynamically generated set of device credentials.

High-level Goals

Before starting to design MSL we had to identify its high-level goals. Beyond general best practices of protocol design, the following objectives are particularly important given the scale of deployment, the fact that MSL must run on multiple platforms, and the knowledge that it will be used for future, as-yet-unknown use cases.
  • Cross-language. Particularly subject to JavaScript constraints such as its maximum integer value and native functions found in web browsers.
  • Automatic error recovery. With millions of devices and subscribers we need devices that enter a bad state to be able to automatically recover without compromising security.
  • Performance. We do not want our application performance and responsiveness to be limited any more than it has to be. The network is by far the most expensive performance cost.
    Figure 1. HTTP vs. HTTPS Performance
  • Flexible and extensible. Whenever possible we want to take advantage of security features provided by devices and their software. Likewise if something no longer provides the security we need then there needs to be a migration path forward.
  • Standards compatible. Although related to being flexible and extensible, we paid particular attention to being standards compatible. Specifically we want to be able to leverage the Web Crypto API now available in the major web browsers.
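
As an example of the kind of standards compatibility we mean, generating a key and encrypting with the Web Crypto API looks like this (the algorithm choice here is illustrative):

// Generate a non-extractable AES-GCM key, then encrypt a payload.
var subtle = window.crypto.subtle;

subtle.generateKey({ name: 'AES-GCM', length: 128 }, false, ['encrypt', 'decrypt'])
  .then(function (key) {
    var iv = window.crypto.getRandomValues(new Uint8Array(12));
    var data = new TextEncoder().encode('application payload');
    return subtle.encrypt({ name: 'AES-GCM', iv: iv }, key, data);
  })
  .then(function (ciphertext) {
    // ciphertext is an ArrayBuffer ready for transmission.
  });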

Security Properties

MSL is a modern cryptographic protocol that takes into account the latest cryptography technologies and knowledge. It supports the following basic security properties.
  • Integrity protection. Messages in transit are protected from tampering.
  • Encryption. Message data is protected from inspection.
  • Authentication. Messages can be trusted to come from a specific device and user.
  • Non-replayable. Messages containing non-idempotent data can be non-replayable.
MSL supports two different deployment models, which we refer to as MSL network types. A single device may participate in multiple MSL networks simultaneously.
  • Trusted services network. This deployment consists of a single client device and multiple servers. The client authenticates against the servers. The servers have shared access to the same cryptographic secrets and therefore each server must trust all other servers.
  • Peer-to-peer. This is a typical p2p arrangement where each side of the communication is mutually authenticated.
Figure 2. MSL Networks


Protocol Overview

A typical MSL message consists of a header and one or more application payload chunks. Each chunk is individually protected which allows the sender and recipient to process application data as it is transmitted. A message stream may remain open indefinitely, allowing large time gaps between chunks if desired.
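Heavily simplified, a protected message looks something like the sketch below (field names and contents are illustrative; see the MSL documentation for the exact wire format).
// Illustrative only -- not the exact MSL wire format.
// Header: identifies and authenticates the sender, carries key request data.
{ "mastertoken" : { ... },   // or entity authentication data on first contact
  "headerdata"  : "<protected header: message ID, user auth, key request data>",
  "signature"   : "<integrity protection over the header>" }
// One or more payload chunks, each individually protected.
{ "payload" : "<encrypted application data chunk 1>", "signature" : "..." }
{ "payload" : "<encrypted application data chunk 2>", "signature" : "..." }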
MSL has pluggable authentication and may leverage any number of device and user authentication types for the initial message. The initial message will provide authentication, integrity protection, and encryption if the device authentication type supports it. Future messages will make use of session keys established as a result of the initial communication.
If the recipient encounters an error when receiving a message it will respond with an error message. Error messages consist of a header that indicates the type of error that occurred. Upon receipt of the error message the original sender can attempt to recover and retransmit the original application data. For example, if the message recipient believes one side or the other is using incorrect session keys the error will indicate that new session keys should be negotiated from scratch. Or if the message recipient believes the device or user credentials are incorrect the error will request the sender re-authenticate using new credentials.
To minimize network round trips, MSL attempts to perform authentication, key negotiation, and renewal operations while it is also transmitting application data (Figure 3). As a result, MSL does not impose any additional network round trips and adds only minimal data overhead.
Figure 3. MSL Communication w/Application Data
This may not always be possible, in which case a MSL handshake must first occur, after which sensitive data such as user credentials and application data may be transmitted (Figure 4).
Figure 4. MSL Handshake followed by Application Data
Once session keys have been established they may be reused for future communication. Session keys may also be persisted to allow reuse between application executions. In a trusted services network the session keys resulting from a key negotiation with one server can be used with all other servers.

Platform Integration

Whenever possible we would like to take advantage of the security features provided by a specific platform. Doing so often provides stronger security than is possible without leveraging those features.
Some devices may already contain cryptographic keys that can be used to authenticate and secure initial communication. Likewise some devices may have already authenticated the user and it is a better user experience if the user is not required to enter their email and password again.
MSL is a plug-in architecture which allows for the easy integration of different device and user authentication schemes, session key negotiation schemes, and cryptographic algorithms. This also means that the security of any MSL deployment heavily depends on the mechanisms and algorithms it is configured with.
The plug-in architecture also means new schemes and algorithms can be incorporated without requiring a protocol redesign.

Other Features

  • Time independence. MSL does not require time to be synchronized between communicating devices. It is possible certain authentication or key negotiation schemes may impose their own time requirements.
  • Service tokens. Service tokens are very similar to HTTP cookies: they allow applications to attach arbitrary data to messages. However service tokens can be cryptographically bound to a specific device and/or user, which prevents data from being migrated without authorization.

The Release

To learn more about MSL and find out how you can use it for your own applications visit the Message Security Layer repository on GitHub.
The protocol is fully documented and guides are provided to help you use MSL securely for your own applications. Java and JavaScript implementations of a MSL stack are available as well as some example applications. Both languages fully support trusted services and peer-to-peer operation as both client and server.

MSL Today and Tomorrow

With MSL we have eliminated many of the problems we faced with HTTPS and platform integration. Its flexible and extensible design means it will be able to adapt as Netflix expands and as the cryptographic landscape changes.
We are already using MSL on many different platforms including our HTML5 player, game consoles, and upcoming CE devices. MSL can be used just as effectively to secure internal communications. In the future we envision using MSL over Web Sockets to create long-lived secure communication channels between our clients and servers.
We take security seriously at Netflix and are always looking for the best to join our team. If you are also interested in attacking the challenges of the fastest-growing online streaming service in the world, check out our job listings.

Wesley Miaw & Mitch Zollinger
Security Engineering

Monday, August 18, 2014

Scaling A/B Testing on Netflix.com with Node.js

By Chris Saint-Amant

Last Wednesday night we held our third JavaScript Talks event at our Netflix headquarters in Los Gatos, Calif. Alex Liu and Micah Ransdell discussed some of the unique UI engineering challenges that go along with running hundreds of A/B tests each year.

We are currently migrating our website UI layer to Node.js, and have taken the opportunity to step back and evaluate the most effective way to build A/B tests. The talk explored some of the patterns we’ve built in Node.js to handle the complexity of running so many multivariate UI tests at scale. These solutions ultimately enable quick feature development and rapid test iteration.

Slides from the talk are now online. No video this time around, but you can come see us talk about UI approaches to A/B testing and our adoption of Node.js at several upcoming events. We hope to see you there!
Lastly, don’t forget to check out our Netflix UI Engineering channel on YouTube to watch videos from past JavaScript Talks.
Sunday, July 6, 2014

Scale and Performance of a Large JavaScript Application



We recently held our second JavaScript Talks event at our Netflix headquarters in Los Gatos, Calif. Matt Seeley discussed the development approaches we use at Netflix to build the JavaScript applications that run on TV-connected devices, phones, and tablets. These large, rich applications run across a wide range of devices and require carefully managing network resources, memory, and rendering. The talk explored the approaches the team uses to build well-performing UIs, monitor application performance, write consistent code, and scale development across the team.

The video from the talk can be found below, and the slides are available at https://speakerdeck.com/mseeley/life-on-the-grid.

And don't forget to check out videos from our past JavaScript Talks events on the Netflix UI Engineering YouTube Channel.