AWS Blog

Opening Soon – AWS Office in Dubai to Support Cloud Growth in UAE

by Jeff Barr | in Announcements

The AWS office in Dubai, UAE will open on January 1, 2017.

We’ve been working with the Dubai Investment Development Agency (Dubai FDI) to launch the office, and plan to support startups, government institutions, and some of the Middle East’s oldest and most established enterprises as they make the transition to the AWS Cloud.

Resources in Dubai
The office will be staffed with account managers, solutions architects, partner managers, professional services consultants, and support staff to allow customers to interact with AWS in their local setting and language.

In addition to access to the AWS team, customers in the Middle East can take advantage of several important AWS programs:

  • AWS Activate is designed to provide startups with resources that will help them to get started on AWS, including up to $100,000 (USD) in AWS promotional credits.
  • AWS Educate is a global initiative designed to provide students and educators with the resources needed to accelerate cloud-based learning endeavors.
  • AWS Training and Certification helps technologists to develop the skills to design, deploy, and operate infrastructure in the AWS Cloud.

We are also planning to host AWSome Days and other large-scale training events in the region.

Customers in the Middle East
Middle Eastern organizations were among the earliest adopters of cloud services when AWS launched in 2006. Customers based in the region are using AWS to run everything from development and test environments to Big Data analytics, from mobile, web and social applications to enterprise business applications and mission critical workloads.

AWS counts some of the UAE’s most well-known and fastest growing businesses as customers, including PayFort and Careem, as well as government institutions and some of the largest companies in the Middle East, such as flydubai and Middle East Broadcasting Center.

Careem is the leading ride booking app in the Middle East and North Africa. Launched in 2012, Careem runs entirely on AWS and has grown 10x in size every year over the past three years, growth that would not have been possible without AWS. After starting in one city, Dubai, Careem now serves millions of commuters in 43 cities across the Middle East, North Africa, and Asia. Careem uses over 500 EC2 instances along with a number of other services, including Amazon S3, Amazon DynamoDB, and Elastic Beanstalk.

PayFort is a startup based in the United Arab Emirates that provides payment solutions to customers across the Middle East through its payments gateway, FORT. The platform enables organizations to accept online payments via debit and credit cards. PayFort counts Etihad Airways, Ferrari World, and Souq.com among its customers. PayFort chose to run FORT entirely on AWS technologies and, as a result, is saving 32% over its on-premises costs. Although cost was key for PayFort, it turns out that they chose AWS due to the high level of security that they could achieve with the platform. Compliance with the Payment Card Industry Data Security Standard (PCI DSS) and International Organization for Standardization (ISO) 27001 is central to PayFort’s payment services, and both are available with AWS (we were actually the first cloud provider to reach compliance with version 3.1 of PCI DSS).

flydubai is the leading low-cost airline in the Middle East, with over 90 destinations, and was launched by the government of Dubai in 2009. flydubai chose to build its online check-in platform on AWS and went from design to production in four months; the platform is now used by thousands of passengers a day, a timeline that would not have been possible without the cloud. Given the seasonal fluctuations in demand for flights, flydubai also needs an IT provider that allows it to cope with spikes in demand. Using AWS allows it to do this, and lead times for new infrastructure services have been reduced from up to 10 weeks to a matter of hours.

Partners
The AWS Partner Network of consulting and technology partners in the region helps our customers to get the most from the cloud. The network includes global members like CSC as well as prominent regional members such as Redington.

Redington is an AWS Consulting Partner and is the Master Value Added Distributor for AWS in the Middle East and North Africa. They are also an Authorized Commercial Reseller of AWS cloud technologies. Redington is helping organizations in the MEA region with cloud assessment, cloud readiness, design, implementation, migration, deployment, and optimization of cloud resources. They also have an ecosystem of partners, including ISVs, with experienced, certified AWS engineers who have cross-domain experience.

Join Us
This announcement is part of our continued expansion across Europe, the Middle East, and Asia. As part of our investment in these areas, we created over 10,000 new jobs in 2015. If you are interested in joining our team in Dubai or in any other location around the world, check out the Amazon Jobs site.

Jeff;

 

AWS Managed Services – Infrastructure Operations Management for the Enterprise

by Jeff Barr | in AWS Managed Services

Large-scale, enterprise data centers are generally run “by the book.” Policies, best practices, and operational procedures are developed, refined, captured, and codified, as part of responsible IT management, often with an eye toward the ITIL model. Ideally, all infrastructure improvements, configuration changes, and provisioning requests are handled in a process-oriented fashion that serves to impose some discipline on the operation of the data center without becoming overly complex or bureaucratic. With IT staff responsible for provisioning hardware, installing software, applying patches, monitoring operations, taking and restoring backups, and dealing with unpredictable operational and security incidents, there’s plenty of work to go around.

These organizations have been looking at the AWS Cloud and want to take advantage of the scale and innovation that it offers, while also looking to become more agile and to save money in the process. As they plan their migration to the cloud, they want to build on their existing systems and practices, while also getting all of the benefits that the cloud has to offer. They want to add additional automation, make use of standard components that can be used more than once, and to relieve their staff of as many routine operational duties as possible.

Introducing AWS Managed Services
Today we are launching AWS Managed Services. Designed for the Fortune 1000 and the Global 2000, this service accelerates cloud adoption. It simplifies deployment, migration, and management using automation and machine learning, backed by a dedicated team of Amazon employees. AWS MS builds on AWS and provides a set of integration points (APIs and a set of CLI tools) for connection to your existing service management system. We’ve been working with a representative set of AWS enterprise customers and partners for the last couple of years in order to make sure that this service meets a very wide range of enterprise requirements.

AWS MS is built around the concept of a Virtual Data Center that is linked to one or more AWS accounts. The VDC consists of a Virtual Private Cloud (VPC) which contains multiple Deployment Groups which consist of Multi-AZ subnets for a DMZ, shared services, and for customer applications. Each application or application component is packaged up into a Managed Stack.

Here’s a brief overview of the feature set:

Incident Monitoring & Resolution – AWS MS manages incidents that are detected by our monitoring systems or reported by our customers. It correlates multiple Amazon CloudWatch alarms and looks for failed updates and security events that could impact the health of running applications. Incidents are created within AWS MS for investigation and are then resolved either automatically or manually by AWS engineers. False positives are used to improve our systems and processes, allowing AWS MS to improve over time by drawing on data collected at scale.

Change Control – AWS MS coordinates all actions on resources. Changes must originate with a change request (an RFC, or Request for Change), and can be manual or scripted. AWS MS makes sure that changes are applied to individual stacks on an orderly, non-overlapping basis. It also holds all incoming manual requests until they have been approved.

Provisioning – AWS MS includes a set of predefined stacks (application templates), each built to conform to long-established AWS best practices. The stacks contain sensible defaults, any of which can be overridden when the stack is provisioned.

Patch Management – AWS MS takes care of the above-the-hypervisor patching. This includes operating system (Linux and Windows) and infrastructure application (SSH, RDP, IIS, Apache, and so forth) security updates and patches. AWS MS employs multiple strategies, patching and building new AMIs for cloud-aware applications that can be easily restarted, and resorting to in-place patches for the rest.

Security & Access Management – AWS MS uses third-party applications from AWS Marketplace, starting with Trend Micro Deep Security to look for viruses and malware and to detect intrusions on managed instances. It makes extensive use of EC2 Security Groups and manages controlled, time-limited access to production systems.

Backup & Restore – Each stack is backed up at a specified frequency. A percentage of the backup snapshots are tested for integrity and a run book is used to bring failed infrastructure back to life.

Reporting – AWS MS provides a set of financial and capacity management reports, delivered by a dedicated Cloud Service Advisor using AWS Trusted Advisor and other tools. The underlying AWS CloudTrail and Amazon CloudWatch logs are also accessible.

Accessing AWS Managed Services
You can connect AWS Managed Services to your existing service management tools using the AWS MS API and command-line tools. You can also access it through the AWS Management Console, but we expect API and CLI usage to be far more popular. However you choose to access AWS MS, the basic objects and operations are the same. You can create, view, approve, and manage RFCs, service requests, and incident reports. Here’s what this looks like from the Console:

Here’s how a Request for Change (RFC) is created:

And here’s how technical users can customize the RFC:

After a change request has been entered, approved, and scheduled, AWS MS supervises the actual change. Automated changes take place with no further human interaction. Manual changes are performed within a scheduled change window using temporary credentials specific to the change. AWS engineers use the same mechanisms and follow the same discipline. Either way, the entire process is tracked and logged.

Partners & Customers
AWS Managed Services was designed with partners in mind. We have set up a pair of new training programs (AWS MS Business Essentials and AWS MS Technical Essentials) that will provide partners with the background information needed to start building a practice around AWS MS. I expect partners to help their customers connect their existing IT Service Management (ITSM) systems, processes, and tools to AWS MS, assist with the on-boarding process, and manage the migration of applications. There are also opportunities for partners to use AWS MS to provide even better levels of support and service to customers.

As I mentioned earlier, we’ve been working with enterprise customers and partners to make sure that AWS MS meets their needs. Here are a few observations that they shared with us.

Tom Ray of Cloudreach (“Intelligent Cloud Adoption”), an AWS Premier Partner:

We see AWS Managed Services as a key solution in the AWS portfolio, designed to meet the need for a cost effective, highly controlled AWS environment, where the heavy lifting of management and control can be outsourced to AWS. This will extend our relationship even further, as Cloudreach will help customers design, migrate to AWS Managed Services, plus provide application level support alongside AWS.

Paul Hannan of SGN (a regulated oil & gas utility):

SGN’s migration to cloud is based upon improving the security and durability of its IT, while becoming more responsive to its business and customer service needs – all at a lower cost. We decided the best way for us to manage the migration into AWS, at the lowest risk to ourselves, was to partner with AWS. Its managed service team has the expertise to optimise the AWS platform, allowing us to accelerate our understanding of how to best manage the infrastructure within AWS. It’s been a real benefit working with a partner which recognises our desire to always put our customer first and which will pull out all the stops to achieve what’s needed.

Available Now
AWS Managed Services is available today. It is able to manage AWS resources in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Sydney) Regions, with others coming online as soon as possible.

Pricing is based on your AWS usage. To learn more about AWS MS or to initiate the on-boarding process, contact your AWS sales representative.

Jeff;

Now Open – AWS Canada (Central) Region

by Jeff Barr | in Announcements

We are growing the AWS footprint once again. Our new Canada (Central) Region is now available and you can start using it today. AWS customers in Canada and the northern parts of the United States have fast, low-latency access to the suite of AWS infrastructure services.

The Details
The new Canada (Central) Region supports Amazon Elastic Compute Cloud (EC2) and related services including Amazon Elastic Block Store (EBS), Amazon Virtual Private Cloud, Auto Scaling, Elastic Load Balancing, NAT Gateway, Spot Instances, and Dedicated Hosts.

It also supports Amazon Aurora, AWS Certificate Manager (ACM), AWS CloudFormation, Amazon CloudFront, AWS CloudHSM, AWS CloudTrail, Amazon CloudWatch, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, Amazon ECS, EC2 Container Registry, AWS Elastic Beanstalk, Amazon EMR, Amazon ElastiCache, Amazon Glacier, AWS Identity and Access Management (IAM), AWS Snowball, AWS Key Management Service (KMS), Amazon Kinesis, AWS Marketplace, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, AWS Shield Standard, Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Workflow Service (SWF), AWS Storage Gateway, AWS Trusted Advisor, VM Import/Export, and AWS WAF.

The Region supports all sizes of C4, D2, M4, T2, and X1 instances.

As part of our on-going focus on making cloud computing available to you in an environmentally friendly fashion, AWS data centers in Canada draw power from a grid that generates 99% of its electricity using hydropower (read about AWS Sustainability to learn more).

Well Connected
After receiving a lot of positive feedback on the network latency metrics that I shared when we launched the AWS Region in Ohio, I am happy to have a new set to share as part of today’s launch (these times represent a lower bound on latency and may change over time).

The first set of metrics are to other Canadian cities:

  • 9 ms to Toronto.
  • 14 ms to Ottawa.
  • 47 ms to Calgary.
  • 49 ms to Edmonton.
  • 60 ms to Vancouver.

The second set are to locations in the US:

  • 9 ms to New York.
  • 19 ms to Chicago.
  • 16 ms to US East (Northern Virginia).
  • 27 ms to US East (Ohio).
  • 75 ms to US West (Oregon).

Canada is also home to CloudFront edge locations in Toronto, Ontario, and Montréal, Quebec.

And Canada Makes 15
Today’s launch brings our global footprint to 15 Regions and 40 Availability Zones, with seven more Availability Zones and three more Regions coming online over the next year. As a reminder, each Region is a physical location where we have two or more Availability Zones or AZs. Each Availability Zone, in turn, consists of one or more data centers, each with redundant power, networking, and connectivity, all housed in separate facilities. Having two or more AZs in each Region gives you the ability to run applications that are more highly available, fault tolerant, and durable than would be the case if you were limited to a single AZ.

For more information about current and future AWS Regions, take a look at the AWS Global Infrastructure page.

Jeff;


Amazon AppStream 2.0 – Stream Desktop Apps from AWS

by Jeff Barr | in Amazon AppStream, AWS re:Invent, Guest Post

My colleague Gene Farrell wrote the guest post below to tell you how the original vision for Amazon AppStream evolved in the face of customer feedback.

Jeff;


At AWS, helping our customers solve problems and serve their customers with technology is our mission. It drives our thinking, and it’s at the center of how we innovate. Our customers use services from AWS to build next-generation mobile apps, create delightful web experiences, and even run their core IT workloads, all at global scale.

While we have seen tremendous innovation and transformation in mobile, web, and core IT, relatively little has changed with desktops and desktop applications. End users don’t yet enjoy freedom in where and how they work; IT is stuck with rigid and expensive systems to manage desktops, applications, and a myriad of devices; and securing company information is harder than ever. In many ways, the cloud seems to have bypassed this aspect of IT.

Our customers want to change that. They want the same benefits of flexibility, scale, security, performance, and cost for desktops and applications as they’re seeing with mobile, web, and core IT. A little over two years ago, we introduced Amazon WorkSpaces, a fully managed, secure cloud desktop service that provides a persistent desktop running on AWS. Today, I am excited to introduce you to Amazon AppStream 2.0, a fully managed, secure application streaming service for delivering your desktop apps to web browsers.

Customers have told us that they have many traditional desktop applications that need to work on multiple platforms. Maintaining these applications is complicated and expensive, and customers are looking for a better solution. With AppStream 2.0, you can provide instant access to desktop applications using a web browser on any device, by streaming them from AWS. You don’t need to rewrite your applications for the cloud, and you only need to maintain a single version. Your applications and data remain secure on AWS, and the application stream is encrypted end to end.

Looking back at the original AppStream
Before I get into more details about AppStream 2.0, it’s worth looking at the history of the original Amazon AppStream service. We launched AppStream in 2013 as an SDK-based service that customers could use to build streaming experiences for their desktop apps, and move these apps to the cloud. We believed that the SDK approach would enable customers to integrate application streaming into their products. We thought game developers and graphics ISVs would embrace this development model, but it turns out it was more work than we anticipated, and required significant engineering investment to get started. Those who did try it found that the feature set did not meet their needs. For example, AppStream only offered a single instance type based on the g2.2xlarge EC2 instance. This limited the service to high-end applications where performance would justify the cost. However, the economics didn’t make sense for a large number of applications.

With AppStream, we set out to solve a significant customer problem, but failed to get the solution right. This is a risk that we are willing to take at Amazon. We want to move quickly, explore areas where we can help customers, but be prepared for failure. When we fail, we learn and iterate fast. In this case, we continued to hear from customers that they needed a better solution for desktop applications, so we went back to the drawing board. The result is AppStream 2.0.

Benefits of AppStream 2.0
AppStream 2.0 addresses many of the concerns we heard from customers who tried the original AppStream service. Here are a few of the benefits:

  • Run desktop applications securely, in an HTML5 web browser, on any device, including Windows and Linux PCs, Macs, and Chromebooks.
  • Instant-on access to desktop applications from wherever users are. There are no delays, no large files to download, and no time-consuming installations. Users get a responsive, fluid experience that is just like running natively installed apps.
  • Simple end user interface so users can run in full screen mode, open multiple applications within a browser tab, and easily switch between them. You can upload files to a session, access and edit them, and download them when you’re done. You can also print, listen to audio, and adjust bandwidth to optimize for your network conditions.
  • Secure applications and data that remain on AWS – only encrypted pixels are streamed to end users. Application streams and user input flow through a secure streaming gateway on AWS over HTTPS, making them firewall friendly. Applications can run inside your own virtual private cloud (VPC), and you can use Amazon VPC security features to control access. AppStream 2.0 supports identity federation, which allows your users to access their applications using their corporate credentials.
  • Fully managed service, so you don’t need to plan, deploy, manage, or upgrade any application streaming infrastructure. AppStream 2.0 manages the AWS resources required to host and run your applications, scales automatically, and provides access to your end users on demand.
  • Consistent, scalable performance on AWS, with access to compute capabilities not typically available on local devices. You can instantly scale locally and globally, and ensure that your users always get a low-latency experience.
  • Multiple streaming instance types to run your applications. You can use instance types from the General Purpose, Compute Optimized, and Memory Optimized instance families to optimize application performance and reduce your overall costs.
  • NICE DCV high-performance streaming provides secure access to applications, delivers a fluid interactive experience, and automatically adjusts to network conditions.

Pricing & availability
With AppStream 2.0, you pay only for the streaming instances that you use, and a small monthly fee per authorized user. The charge for streaming instances depends on the instance type that you select, and the maximum number of concurrent users that will access their applications.

A user fee is charged per unique authorized user accessing applications in a region in any given month.  The user fee covers the Microsoft RDS SAL license, and may be waived if you bring your own RDS CAL licenses via Microsoft’s license mobility program. AppStream 2.0 offers a Free Tier, which provides an admin experience for getting started. The Free Tier includes 40 hours per month, for up to two months. For more information, see this page.

AppStream 2.0 is available today in the US East (N. Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions. You can try the AppStream 2.0 end user experience for free today, with no setup required, by accessing sample applications already installed on AppStream 2.0. To access the Try It Now experience, log in with your AWS account and choose an app to get started.

To learn more about AppStream 2.0, visit the AppStream page.

Gene Farrell, Vice President, AWS Enterprise Applications & EC2 Windows

New – IPv6 Support for EC2 Instances in Virtual Private Clouds

by Jeff Barr | in Amazon EC2, Amazon VPC, AWS re:Invent

The continued growth of the Internet, particularly in the areas of mobile applications, connected devices, and IoT, has spurred an industry-wide move to IPv6. In accord with a mandate that dates back to 2010, United States government agencies have been working to move their public-facing servers and services to IPv6 as quickly as possible. With 128 bits of address space, IPv6 has plenty of room for growth and also opens the door to new applications and new use cases.

IPv6 for EC2
Earlier this year we launched IPv6 support for S3 (including Transfer Acceleration), CloudFront, WAF, and Route 53. Today we are taking the next big step forward with the launch of IPv6 support for Virtual Private Cloud (VPC) and EC2 instances running in a VPC. This support is launching today in the US East (Ohio) Region and is in the works for the others.

IPv6 support works for new and existing VPCs; you can opt in on a VPC-by-VPC basis by simply checking a box on the Console (API and CLI support is also available):

Each VPC is given a unique /56 address prefix from within Amazon’s GUA (Global Unicast Address); you can assign a /64 address prefix to each subnet in your VPC:

As we did with S3, we make use of a dual-stack model that assigns each instance an IPv4 address and an IPv6 address, along with corresponding DNS entries. Support for both versions of the protocol ensures compatibility and flexibility to access resources and applications.
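
The console check box has API and CLI equivalents. Here’s a minimal boto3 sketch of the opt-in described above; the VPC ID, subnet ID, and the /64 carved out of the VPC’s /56 are placeholders rather than values from this post:

import boto3

# IPv6 support for VPC and EC2 launched in the US East (Ohio) Region.
ec2 = boto3.client("ec2", region_name="us-east-2")

# Ask for an Amazon-provided /56 IPv6 block on an existing VPC (placeholder ID).
resp = ec2.associate_vpc_cidr_block(
    VpcId="vpc-1a2b3c4d",
    AmazonProvidedIpv6CidrBlock=True,
)
print(resp["Ipv6CidrBlockAssociation"])

# Assign a /64 from that /56 to one of the VPC's subnets (placeholder values).
ec2.associate_subnet_cidr_block(
    SubnetId="subnet-5e6f7a8b",
    Ipv6CidrBlock="2600:1f16:aaaa:bb01::/64",
)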

Security Groups, Route Tables, Network ACLs, VPC Peering, Internet Gateway, Direct Connect, VPC Flow Logs, and DNS resolution within a VPC all operate in the same way as today. Application Load Balancer support for the dual-stack model is on the near-term roadmap and I’ll let you know as soon as it is available.

IPv6 Support for Direct Connect
The Direct Connect Console lets you create virtual interfaces (VIFs) with your choice of IPv4 or IPv6 addresses:

Each VIF supports one BGP peering session over IPv4 and one BGP peering session over IPv6.
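
If you script your Direct Connect setup, here’s a hedged boto3 sketch of creating a private VIF whose BGP session uses IPv6 addressing; the connection ID, VLAN, ASN, and virtual gateway ID are all placeholders:

import boto3

dx = boto3.client("directconnect")

# Create a private virtual interface and let Amazon assign the IPv6 peer
# addresses for the BGP session (all identifiers are placeholders).
dx.create_private_virtual_interface(
    connectionId="dxcon-fg31dyv6",
    newPrivateVirtualInterface={
        "virtualInterfaceName": "ipv6-vif",
        "vlan": 101,
        "asn": 65000,
        "addressFamily": "ipv6",
        "virtualGatewayId": "vgw-9a8b7c6d",
    },
)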

New Egress-Only Internet Gateway for IPv6
One of the interesting things about IPv6 is that every address is internet-routable and can talk to the Internet by default. In an IPv4-only VPC, assigning a public IP address to an EC2 instance sets up 1:1 NAT (Network Address Translation) to a private address that is associated with the instance. In a VPC where IPv6 is enabled, the address associated with the instance is public. This direct association removes a host of networking challenges, but it also means that you need another mechanism to create private subnets.

As part of today’s launch, we are introducing a new Egress-Only Internet Gateway (EGW) that you can use to implement private subnets for your VPCs. The EGW is easier to set up and to use than a fleet of NAT instances, and is available to you at no cost. It allows you to block incoming traffic while still allowing outbound traffic (think of it as an Internet Gateway mated to a Security Group). You can create an EGW in all of the usual ways, and use it to impose restrictions on inbound IPv6 traffic. You can continue to use NAT instances or NAT Gateways for IPv4 traffic.
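
Here’s a short boto3 sketch of putting the new gateway to work; the VPC and route table IDs are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# Create the egress-only internet gateway for an IPv6-enabled VPC (placeholder ID).
egw = ec2.create_egress_only_internet_gateway(VpcId="vpc-1a2b3c4d")
egw_id = egw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

# Send all outbound IPv6 traffic from the private subnet's route table through
# the gateway; connections initiated from the Internet are not allowed back in.
ec2.create_route(
    RouteTableId="rtb-11aa22bb",
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=egw_id,
)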

Available Now
IPv6 support for EC2 is now available in the US East (Ohio) Region and you can start using it today at no extra charge. It works with all current-generation EC2 instance types with the exception of M3 and G2, and will be supported on upcoming instance types as well.

IPv6 support for other AWS Regions is in the works and I’ll let you know (most likely via a tweet) just as soon as it is ready!

Jeff;

 

New – AWS Step Functions – Build Distributed Applications Using Visual Workflows

by Jeff Barr | in AWS Lambda, AWS re:Invent, AWS Step Functions

We want to make it even easier for you to build complex, distributed applications by connecting multiple web and microservices. Whether you are implementing a complex business process or setting up a processing pipeline for photo uploads, we want you to focus on the code instead of on the coordination. We want you to be able to build reliable applications that are robust, scalable, and cost-effective, while you use the tools and libraries that you are already familiar with.

How does that sound?

Introducing AWS Step Functions
Today we are launching AWS Step Functions to allow you to do exactly what I described above. You can coordinate the components of your application as a series of steps in a visual workflow. You create state machines in the Step Functions Console to specify and execute the steps of your application at scale.

Each state machine defines a set of states and the transitions between them. States can be activated sequentially or in parallel; Step Functions will make sure that all parallel states run to completion before moving forward. States perform work, make decisions, and control progress through the state machine.

Here’s a state machine that includes a little bit of everything:

Multiple copies of each state machine can be running independently at the same time; each copy is called an execution. Step Functions will let you run thousands of executions concurrently so you can scale to any desired level.

There are two different ways to specify what you want to happen when a state is run. First, you can supply a Lambda function that will be synchronously invoked when the state runs. Second, you can supply the name of an Activity. This is a reference to a long-running worker function that polls (via the API) for work to be done. Either way, the code is supplied with a JSON statement as input, and is expected to return another JSON statement as output.
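
To make the Activity model concrete, here’s a hedged boto3 sketch of a long-running worker; the activity ARN is a placeholder and the work itself is a trivial stand-in for your own logic:

import json
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")
ACTIVITY_ARN = "arn:aws:states:us-east-1:123456789012:activity:ProcessItem"  # placeholder

while True:
    # Long-poll for a task assigned to this Activity by a running execution.
    task = sfn.get_activity_task(activityArn=ACTIVITY_ARN, workerName="worker-1")
    token = task.get("taskToken")
    if not token:
        continue  # the poll timed out with no work; poll again

    try:
        state_input = json.loads(task["input"])          # JSON handed to the state
        result = {"processed": state_input.get("item")}  # stand-in for real work
        sfn.send_task_success(taskToken=token, output=json.dumps(result))
    except Exception as exc:
        sfn.send_task_failure(taskToken=token, error="WorkerError", cause=str(exc))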

As part of your state machine, you can specify error handling behavior and retry logic. This allows you to build robust multi-step apps that will run smoothly even if transient issues in one part of your code cause a momentary failure.

Quick Tour
Let’s set up a state machine through the AWS Management Console. Keep in mind that production applications will most likely use the AWS Step Functions API (described below) to create and run state machines.

I start by creating and saving a simple Lambda function:

While I am there I also capture the function’s ARN:

Then I go over to the AWS Step Functions Console and click on Create a State Machine. I enter a name (MyStateMachine), and I can click on one of the blueprints to get a running start:

I start with the Hello World blueprint and use elements of the Parallel blueprint to create this JSON model of my state machine (read the Amazon States Language spec to learn more):

{
  "Comment": "A simple example of the Steps language using an AWS Lambda Function",
  "StartAt": "Hello",

  "States": {
    "Hello": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-west-1:99999999999:function:HelloWord_Step",
      "Next": "Parallel"
    },

    "Parallel": {
      "Type": "Parallel",
      "Next": "Goodbye",
      "Branches": [
        {
          "StartAt": "p1",
          "States": {
            "p1": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:eu-west-1:9999999999:function:HelloWord_Step",
              "End": true
            }
          }
        },

        {
          "StartAt": "p2",
          "States": {
            "p2": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:eu-west-1:99999999999:function:HelloWord_Step",
              "End": true
            }
          }
        }
      ]
    },

    "Goodbye": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-west-1:99999999999:function:HelloWord_Step",
      "End": true
    }
  }
}

I click on Preview to see it graphically:

Then I select the IAM role that Step Functions thoughtfully created for me:

And I am all set! Now I can execute my state machine from the console; I can start it off with a block of JSON that is passed to the first function:

The state machine starts to execute as soon as I click on Start Execution. I can follow along and watch as execution flows from state to state:

I can visit the Lambda Console and see that my function ran four times as expected (I was pressed for time and didn’t bother to create four separate functions):

AWS Step Functions records complete information about each step and I can access it from the Step Functions Console:

AWS Step Functions API
As I mentioned earlier, most of your interaction with AWS Step Functions will happen through the APIs. Here’s a quick overview of the principal functions:

  • CreateStateMachine – Create a new state machine, given a JSON description.
  • ListStateMachines – Get a list of state machines.
  • StartExecution – Run (asynchronously) a state machine.
  • DescribeExecution – Get information about an execution.
  • GetActivityTask – Poll for new tasks to run (used by long-running workers).

You could arrange to run a Lambda function every time a new object is uploaded to an S3 bucket. This function can then kick off a state machine execution by calling StartExecution. The state machine could (as an example) validate the image, generate multiple sizes and formats in parallel, check for particular types of content, and update a database entry.
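
Here’s a minimal sketch of that pattern; the state machine ARN is a placeholder and the handler assumes the standard S3 event notification payload:

import json
import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:ProcessUpload"  # placeholder

def lambda_handler(event, context):
    # A single S3 ObjectCreated notification can carry several records.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Start an asynchronous execution; the object location becomes the
        # input JSON passed to the state machine's first state.
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"bucket": bucket, "key": key}),
        )
    return {"executionsStarted": len(event["Records"])}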

The same functionality is also available from the AWS Command Line Interface (CLI).

Development Tools
You can use our new statelint gem to check your hand- or machine-generated JSON for common errors, including unreachable states and the omission of a terminal state.

Download it from the AWS Labs GitHub repo (it will also be available on RubyGems) and install it like this:

$ sudo gem install j2119-0.1.0.gem statelint-0.1.0.gem

Here’s what happens if you have a problem:

$ statelint my_state.json
2 errors:
 State Machine.States.Goodbye does not have required field "Next"
 No terminal state found in machine at State Machine.States

And if things look good:

$ statelint my_state.json
$

Available Now
AWS Step Functions is available now and you can start using it today in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) Regions.

As part of the AWS Free Tier, you can perform up to 4,000 state transitions per month at no charge. After that, you pay $0.025 for every 1,000 state transitions.

You can learn more during our webinar on December 16th. Register here.

Jeff;

Lambda@Edge – Preview

by Jeff Barr | in AWS Lambda, AWS re:Invent

Just last week, a comment that I made on Hacker News resulted in an interesting email from an AWS customer!

He told me that he runs a single page app that is hosted on S3 (read about this in Host Your Static Website on Amazon S3) and served up at low latency through Amazon CloudFront. The page includes some dynamic elements that are customized for each user via an API hosted on AWS Elastic Beanstalk.

Here’s how he explained his problem to me:

In order to properly get indexed by search engines and in order for previews of our content to show up correctly within Facebook and Twitter, we need to serve a prerendered version of each of our pages. In order to do this, every time a normal user hits our site we need for them to be served our normal front end from CloudFront. But if the user agent matches Google / Facebook / Twitter etc., we need to instead redirect them to the prerendered version of the site.

Without spilling any beans I let him know that we were very aware of this use case and that we had some interesting solutions in the works. Other customers have also let us know that they want to customize their end user experience by making quick decisions out at the edge.

It turns out that there are many compelling use cases for “intelligent” processing of HTTP requests at a location that is close (latency-wise) to the customer. These include inspection and alteration of HTTP headers, access control (requiring certain cookies to be present), device detection, A/B testing, expedited or special handling for crawlers or ‘bots, and rewriting user-friendly URLs to accommodate legacy systems. Many of these use cases require more processing and decision-making than can be expressed by simple pattern matching and rules.

Lambda@Edge
In order to provide support for these use cases (and others that you will dream up), we are launching a preview of Lambda@Edge. This new Lambda-based processing model allows you to write JavaScript code that runs within the ever-growing network of AWS edge locations.

You can now write lightweight request processing logic that springs to life quickly and handles requests and responses that flow through a CloudFront distribution. You can run code in response to four distinct events:

Viewer Request – Your code will run on every request, whether the content is cached or not. Here’s some simple header processing code:

exports.viewer_request_handler = function(event, context) {
  var headers = event.Records[0].cf.request.headers;
  for (var header in headers) {
    headers["X-".concat(header)] = headers[header];
  }
  context.succeed(event.Records[0].cf.request);
}

Origin Request – Your code will run when the requested content is not cached at the edge, before the request is passed along to the origin. You can add more headers, modify existing ones, or modify the URL.

Viewer Response – Your code will run on every response, cached or not. You could use this to clean up some headers that need not be passed back to the viewer.

Origin Response – Your code will run after a cache miss causes an origin fetch and returns a response to the edge.

Your code has access to many aspects of the requests and responses including the URL, method, HTTP version, client IP address, and headers. Initially, you will be able to add, delete, and modify the headers. Soon, you will have complete read/write access to all of the values including the body.

Because your JavaScript code will be part of the request/response path, it must be lean, mean, and self-contained. It cannot make calls to other web services and it cannot access other AWS resources. It must run within 128 MB of memory, and complete within 50 ms.

To get started, you will simply create a new Lambda function, set your distribution as the trigger, and choose the new Edge runtime:

Then you write your code as usual; Lambda will take care of the behind-the-scenes work of getting it to the edge locations.

Interested?
I believe that this new processing model will lead to the creation of some very cool new applications and development tools. I can’t wait to see what you come up with!

We are launching a limited preview of Lambda@Edge today and are taking applications now. If you have a relevant use case and are ready to try this out, please apply here.

You can find out more on December 16th by joining our webinar. Register here.

Jeff;

 

Blox – New Open Source Scheduler for Amazon EC2 Container Service

by Jeff Barr | in EC2 Container Service

Back in 2014 I talked about Amazon ECS and showed you how it helps you to build, run, and scale Docker-based applications. I talked about the three scheduling options (automated, manual, and custom)  and described how a scheduler works to assign tasks to instances.

At the time  that I wrote that post, your custom scheduler had to call the ListContainerInstances and DescribeContainerInstances functions on a frequent basis in order to discover the current state of the cluster. A few weeks ago we simplified the process of tracking the state of each cluster by adding support for Amazon CloudWatch Events (read Monitor Cluster State with Amazon ECS Event Stream to learn more about how this works).
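
As a sketch of what subscribing to that event stream looks like, here’s a hedged boto3 example that routes ECS state-change events to a consumer; the target Lambda function ARN is a placeholder:

import json
import boto3

events = boto3.client("events")

# Match the ECS event stream (task and container instance state changes).
events.put_rule(
    Name="ecs-cluster-state-events",
    EventPattern=json.dumps({
        "source": ["aws.ecs"],
        "detail-type": ["ECS Task State Change", "ECS Container Instance State Change"],
    }),
    State="ENABLED",
)

# Deliver matching events to the function (or other target) that tracks cluster state.
events.put_targets(
    Rule="ecs-cluster-state-events",
    Targets=[{"Id": "cluster-state-tracker",
              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:TrackClusterState"}],
)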

With this new event stream in place, we want to make it even easier for you to create custom schedulers.

Today we are launching Blox. This new open source project includes a service that consumes the event stream, uses it to track the state of the cluster, and makes the state accessible via a set of REST APIs. The package also includes a daemon scheduler that runs one copy of a task on each container instance in a cluster. This one-task-per-instance model supports workloads that process logs and collect metrics.

Here’s a block diagram (no pun intended):

This is an open source project; we are looking forward to your pull requests and feature proposals.

To learn more, read Introducing Blox From Amazon EC2 Container Service or register for our webinar on December 14th.

Jeff;

 

New – AWS Personal Health Dashboard – Status You Can Relate To

by Jeff Barr | in AWS re:Invent

We launched the AWS Service Health Dashboard way back in 2008! Back then, the AWS Cloud was relatively new, and the Service Health Dashboard was a good way for our customers to check on the status of each service (compare the simple screen shot in that blog post to today’s Service Health Dashboard to see how much AWS has grown in just 8 years).

While the current dashboard is good at displaying the overall status of each AWS service, it is inherently impersonal. When you pay it a visit, you are probably more concerned about the status of the AWS services and resources that you are using than you are about the overall status of AWS.

New Personal Health Dashboard
In order to provide you with additional information that is of direct interest to you, we are launching the AWS Personal Health Dashboard today.

As the name indicates, this dashboard gives you a personalized view into the performance and availability of the AWS services that you are using, along with alerts that are automatically triggered by changes in the health of the services. It is designed to be the single source of truth with respect to your cloud resources, and should give you more visibility into any issues that might affect you.

You will see a notification icon in the Console menu when your dashboard contains an item of interest to you. Click on it to see a summary:

Clicking on Open issues displays issues that might affect your AWS infrastructure (this is all test data, by the way):

Clicking on an item will give you more information, including guidance on how to remediate the issue:

The dashboard also gives you a heads-up in advance of scheduled activities:

As well as other things that should be of interest to you:

But Wait, There’s More
You can also use CloudWatch Events to automate your response to alerts and notification of scheduled activities. For example, you could respond to a notification of an impending maintenance event on a critical EC2 instance by proactively moving to a fresh instance.

If your organization subscribes to AWS Business Support or AWS Enterprise Support, you also have access to the new AWS Health API. You can use this API to integrate your existing in-house or third-party IT Management tools with the information in the Personal Health Dashboard.
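
Here’s a brief boto3 sketch of the Health API; the endpoint is global (it lives in us-east-1), the call requires a Business or Enterprise Support plan, and the filter values below are just examples:

import boto3

health = boto3.client("health", region_name="us-east-1")

# Pull open or upcoming scheduled changes (such as EC2 maintenance) for this account.
resp = health.describe_events(
    filter={
        "services": ["EC2"],
        "eventTypeCategories": ["scheduledChange"],
        "eventStatusCodes": ["open", "upcoming"],
    }
)

for event in resp["events"]:
    # List the specific resources that each event affects.
    entities = health.describe_affected_entities(filter={"eventArns": [event["arn"]]})
    print(event["eventTypeCode"], [e["entityValue"] for e in entities["entities"]])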

Jeff;

AWS Batch – Run Batch Computing Jobs on AWS

by Jeff Barr | in AWS Batch, AWS re:Invent

I entered college in the fall of 1978. The Computer Science department at Montgomery College was built around a powerful (for its time) IBM 370/168 mainframe. I quickly learned how to use the keypunch machine to prepare my card decks, prefacing the actual code with some cryptic Job Control Language (JCL) statements that set the job’s name & priority, and then invoked the FORTRAN, COBOL, or PL/I compiler. I would take the deck to the submission window, hand it to the operator in exchange for a job identifier, and then come back several hours later to collect the printed output and the card deck. I studied that printed output with care, and was always shocked to find that after my job spent several hours waiting for its turn to run, the actual run time was just a few seconds. As my fellow students and I quickly learned, jobs launched by the school’s IT department ran at priority 4 while ours ran at 8; their jobs took precedence over ours. The goal of the entire priority mechanism was to keep the expensive hardware fully occupied whenever possible. Student productivity was assuredly secondary to efficient use of resources.

Batch Computing Today
Today, batch computing remains important! Easier access to compute power has made movie studios, scientists, researchers, numerical analysts, and others with an insatiable appetite for compute cycles hungrier than ever. Many organizations have attempted to feed these needs by building in-house compute clusters powered by open source or commercial job schedulers. Once again, priorities come into play and there never seems to be enough compute power to go around. Clusters are expensive to build and to maintain, and are often composed of a large array of identical, undifferentiated processors, all of the same vintage and built to the same specifications.

We believe that cloud computing has the potential to change the batch computing model for the better, with fast access to many different types of EC2 instances, the ability to scale up and down in response to changing needs, and a pricing model that allows you to bid for capacity and to obtain it as economically as possible. In the past, many AWS customers have built their own batch processing systems using EC2 instances, containers, notifications, CloudWatch monitoring, and so forth. This turned out to be a very common AWS use case and we decided to make it even easier to achieve.

Introducing AWS Batch
Today I would like to tell you about a new set of fully-managed batch capabilities. AWS Batch allows batch administrators, developers, and users to have access to the power of the cloud without having to provision, manage, monitor, or maintain clusters. There’s nothing to buy and no software to install. AWS Batch takes care of the undifferentiated heavy lifting and allows you to run your container images and applications on a dynamically scaled set of EC2 instances. It is efficient, easy to use, and designed for the cloud, with the ability to run massively parallel jobs that take advantage of the elasticity and selection provided by Amazon EC2 and EC2 Spot, and to easily and securely interact with other AWS services such as Amazon S3, DynamoDB, and SNS.

Let’s start by taking a look at some important AWS Batch terms and concepts (if you are already doing batch computing, many of these terms will be familiar to you, and still apply). Here goes:

Job – A unit of work (a shell script, a Linux executable, or a container image) that you submit to AWS Batch. It has a name, and runs as a containerized app on EC2 using parameters that you specify in a Job Definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs.

Job Definition – Specifies how Jobs are to be run. Includes an AWS Identity and Access Management (IAM) role to provide access to AWS resources, and also specifies both memory and CPU requirements. The definition can also control container properties, environment variables, and mount points. Many of the specifications in a Job Definition can be overridden by specifying new values when submitting individual Jobs.

Job Queue – Where Jobs reside until scheduled onto a Compute Environment. A priority value is associated with each queue.

Scheduler – Attached to a Job Queue, a Scheduler decides when, where, and how to run Jobs that have been submitted to a Job Queue. The AWS Batch Scheduler is FIFO-based, and is aware of dependencies between jobs. It enforces priorities, and runs jobs from higher-priority queues in preference to lower-priority ones when the queues share a common Compute Environment. The Scheduler also ensures that the jobs are run in a Compute Environment of an appropriate size.

Compute Environment – A set of managed or unmanaged compute resources that are used to run jobs. Managed environments allow you to specify desired instance types at several levels of detail. You can set up Compute Environments that use a particular type of instance, a particular model such as c4.2xlarge or m4.10xlarge, or simply specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with a percentage value for bids on the Spot Market and a target set of VPC subnets. Given these parameters and constraints, AWS Batch will efficiently launch, manage, and terminate EC2 instances as needed. You can also launch your own Compute Environments. In this case you are responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch will create for you.
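
Here’s a rough boto3 sketch of the managed Compute Environment parameters described above; the role ARNs, subnet, and security group are placeholders:

import boto3

batch = boto3.client("batch")

# A managed, Spot-backed environment; AWS Batch launches and terminates the EC2
# instances itself within the vCPU bounds below (role ARNs and IDs are placeholders).
batch.create_compute_environment(
    computeEnvironmentName="MainCompute",
    type="MANAGED",
    state="ENABLED",
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
    computeResources={
        "type": "SPOT",
        "bidPercentage": 60,           # bid up to 60% of the On-Demand price
        "minvCpus": 0,
        "desiredvCpus": 4,
        "maxvCpus": 64,
        "instanceTypes": ["optimal"],  # let Batch choose suitable instance types
        "subnets": ["subnet-5e6f7a8b"],
        "securityGroupIds": ["sg-3c4d5e6f"],
        "instanceRole": "ecsInstanceRole",
        "spotIamFleetRole": "arn:aws:iam::123456789012:role/AmazonEC2SpotFleetRole",
    },
)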

A Quick Tour
You can access AWS Batch from the AWS Management Console, AWS Command Line Interface (CLI), or via the AWS Batch APIs. Let’s take a quick console tour!

The Status Dashboard displays my Jobs, Job Queues, and Compute Environments:

I need a place to run my Jobs, so I will start by selecting Compute environments and clicking on Create environment. I begin by choosing to create a Managed environment, giving it a name, and choosing the IAM roles (these were created automatically for me):

Then I set up the provisioning model (On-Demand or Spot), choose the desired instance families (or specific types), and set the size of my Compute Environment (measured in vCPUs):

I wrap up by choosing my VPC, the desired subnets for compute resources, and the security group that will be associated with those resources:

I click on Create and my first Compute Environment (MainCompute) is ready within seconds:

Next, I need a Job Queue to feed work to my Compute Environment. I select Queues and click on Create Queue to set this up. I accept all of the defaults, connect the Job Queue to my new Compute Environment, and click on Create queue:

Again, it is available within seconds:

Now I can set up a Job Definition. I select Job definitions and click on Create, then set up my definition (this is a very simple job; I am sure you can do better). My job runs the sleep command, needs 1 vCPU, and fits into 128 MB of memory:

I can also pass in environment variables, disable privileged access, specify the user name for the process, and arrange to make file systems available within the container:

I click on Save and my Job Definition is ready to go:

Now I am ready to run my first Job! I select Jobs and click on Submit job:

I can also override many aspects of the job, add additional tags, and so forth. I’ll leave everything as-is and click on Submit:

And there it is:

I can also submit jobs by specifying the Ruby, Python, Node, or Bash script that implements the job. For example:

The command line equivalents to the operations that I used in the console include create-compute-environment, describe-compute-environments, create-job-queue, describe-job-queues, register-job-definition, submit-job, list-jobs, and describe-jobs.

I expect to see the AWS Batch APIs used in some interesting ways. For example,  imagine a Lambda function that is invoked when a new object (a digital X-Ray, a batch of seismic observations, or a 3D scene description) is uploaded to an S3 bucket. The function can examine the object, extract some metadata, and then use the SubmitJob function to submit one or more Jobs to process the data, with updated data stored in Amazon DynamoDB and notifications sent to Amazon Simple Notification Service (SNS) along the way.
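
Here’s a minimal sketch of such a Lambda function; the job queue and job definition names are placeholders:

import re
import boto3

batch = boto3.client("batch")

def lambda_handler(event, context):
    # Invoked by an S3 ObjectCreated notification; queue one Batch job per object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        safe_name = re.sub(r"[^A-Za-z0-9_-]", "-", key)[:100]  # job names are restricted
        batch.submit_job(
            jobName="process-" + safe_name,
            jobQueue="HighPriority",           # placeholder queue name
            jobDefinition="process-object:1",  # placeholder definition:revision
            parameters={"bucket": bucket, "key": key},
        )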

Pricing & Availability
AWS Batch is in Preview today in the US East (Northern Virginia) Region. In addition to regional expansion, we have many other interesting features on the near-term AWS Batch roadmap. For example, you will be able to use an AWS Lambda function as a Job.

There’s no charge for the use of AWS Batch; you pay only for the underlying AWS resources that you consume.

If you’d like to learn more we have a webinar coming December 12th. Register here.

Jeff;