AWS Blog

Reduce DDoS Risks Using Amazon Route 53 and AWS Shield

by Jeff Barr | in AWS Shield, Route 53

In late October of 2016 a large-scale cyber attack consisting of multiple denial of service attacks targeted a well-known DNS provider. The attack, consisting of a flood of DNS lookups from tens of millions of IP addresses, made many Internet sites and services unavailable to users in North America and Europe. This Distributed Denial of Service (DDoS) attack was believed to have been executed using a botnet consisting of a multitude of Internet-connected devices such as printers, cameras, residential network gateways, and even baby monitors. These devices had been infected with the Mirai malware and generated several hundred gigabytes of traffic per second. Many corporate and educational networks simply do not have the capacity to absorb a volumetric attack of this size.

In the wake of this attack and others that have preceded it, our customers have been asking us for recommendations and best practices that will allow them to build systems that are more resilient to various types of DDoS attacks. The short-form answer involves a combination of scale, fault tolerance, and mitigation (the AWS Best Practices for DDoS Resiliency white paper goes into far more detail) and makes use of Amazon Route 53 and AWS Shield (read AWS Shield – Protect Your Applications from DDoS Attacks to learn more).

Scale – Route 53 is hosted at numerous AWS edge locations, creating a global surface area capable of absorbing large amounts of DNS traffic. Other edge-based services, including Amazon CloudFront and AWS WAF, also have a global surface area and are also able to handle large amounts of traffic.

Fault Tolerance – Each edge location has many connections to the Internet. This allows for diverse paths and helps to isolate and contain faults. Route 53 also uses shuffle sharding and anycast striping to increase availability. With shuffle sharding, each name server in your delegation set corresponds to a unique set of edge locations. This arrangement increases fault tolerance and minimizes overlap between AWS customers. If one name server in the delegation set is not available, the client system or application will simply retry and receive a response from a name server at a different edge location. Anycast striping is used to direct DNS requests to an optimal location. This has the effect of spreading load and reducing DNS latency.
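You can see the shuffle-sharded delegation set for one of your own hosted zones by querying its NS records. Here's a quick check with dig (the name servers shown are the sample delegation set from the Route 53 documentation, not a real zone):

$ dig +short NS example.com
ns-2048.awsdns-64.com.
ns-2049.awsdns-65.net.
ns-2050.awsdns-66.org.
ns-2051.awsdns-67.co.uk.

Note how the four name servers are spread across four different top-level domains; each one maps to a distinct set of edge locations.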

Mitigation – AWS Shield Standard protects you from 96% of today’s most common attacks. This includes SYN/ACK floods, reflection attacks, and HTTP slow reads. As I noted in my post above, this protection is applied automatically and transparently to your Elastic Load Balancers, CloudFront distributions, and Route 53 resources at no extra cost. Protection (including deterministic packet filtering and priority-based traffic shaping) is deployed to all AWS edge locations and inspects all traffic with just microseconds of overhead. AWS Shield Advanced includes additional DDoS mitigation capability, 24×7 access to our DDoS Response Team, real-time metrics and reports, and DDoS cost protection.

To learn more, read the DDoS Resiliency white paper and learn about Route 53 anycast.

Jeff;


New – AWS OpsWorks for Chef Automate

by Jeff Barr | in AWS OpsWorks, AWS re:Invent

AWS OpsWorks helps you to configure and run applications using Chef. You use a Domain Specific Language (DSL) to write cookbooks that define your application’s architecture and the configuration of each component. The Chef server is an essential part of the configuration process. It stores all of the cookbooks and tracks state information for each of the instances (nodes in Chef terminology).

Because the Chef server is in the critical path when newly launched instances are configured, it must be reliable. Many OpsWorks and Chef users install and maintain this important architectural component themselves. In production-scale environments, this leaves them to handle backups, restores, version upgrades, and so forth.

New AWS OpsWorks for Chef Automate
Early this month we launched AWS OpsWorks for Chef Automate from the AWS re:Invent stage. You can launch the Chef Automate server with just 3 clicks and start using it within minutes. You can use community cookbooks from Chef Supermarket and community tools such as Test Kitchen and Knife.
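You can also script the launch. Here's a rough CLI sketch (the server name matches my example below; the instance profile and service role ARNs are illustrative, so check the OpsWorks for Chef Automate documentation for the required IAM setup):

$ aws opsworks-cm create-server \
    --engine "Chef" \
    --server-name "BorkBorkBork" \
    --instance-type "t2.medium" \
    --key-pair "my-key-pair" \
    --instance-profile-arn "arn:aws:iam::123456789012:instance-profile/aws-opsworks-cm-ec2-role" \
    --service-role-arn "arn:aws:iam::123456789012:role/aws-opsworks-cm-service-role"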

You can use Chef Automate to manage your infrastructure throughout your application’s life-cycle. For example, newly launched EC2 instances can automatically connect to the Chef server and run a specified recipe by using an unattended association script (read Adding Nodes Automatically in AWS OpsWorks for Chef Automate to learn more). The registration script can be used to register EC2 instances created dynamically through an Auto Scaling Group and to register on-premises servers.
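As a sketch of what the unattended association boils down to, a new node can call the associate-node API as it boots (the node name and key path here are illustrative; a typical registration script generates the node's key pair first):

$ aws opsworks-cm associate-node \
    --server-name "BorkBorkBork" \
    --node-name "$(hostname)" \
    --engine-attributes "Name=CHEF_ORGANIZATION,Value=default" \
      "Name=CHEF_NODE_PUBLIC_KEY,Value=$(cat /etc/chef/node_key.pub)"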

Take a Look
Let’s launch a Chef Automate server from the OpsWorks Console. Click on Go to OpsWorks for Chef Automate to get started.

Click on Create Chef Automate server, give your server a name, choose a region, and select a suitable EC2 instance type:

Choose one of your SSH key pairs, or opt out of SSH:

Finally, configure your network (VPC), IAM, maintenance window, and backup settings:

Click on Next, review your settings, and then click on Launch! The launch process takes less than 20 minutes. During that time you can download the sign-in credentials for your Chef Automate dashboard along with a Starter Kit:

You can see all of your Chef Automate servers at a glance:

Click on the server name (BorkBorkBork here), and then on Open Chef Automate dashboard, then enter your credentials to log in:

And here’s the dashboard:

You can see and manage your nodes:

Manage your workflows:

And much more!

Behind the scenes, the launch process invokes an AWS CloudFormation template. The template creates an EC2 instance, an Elastic IP Address, and a Security Group.

Available Now
You can launch AWS OpsWorks for Chef Automate today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions. Pricing is based on the number of nodes and the number of hours that they are connected to the server; see the Chef Automate Pricing page for more info. As part of the AWS Free Tier, you can use up to 10 nodes at no charge for 12 months.

Jeff;

New – Daily Aggregated Price List Notifications

by Tara Walker

Last year, AWS customers and partners were asking for a systematic way to retrieve prices for AWS services.  We answered the need by launching the AWS Price List API last December with pricing for thirteen (13) AWS services. The API provides pricing data in JSON and CSV format for download and enables customers to query for the prices of AWS services.  The Price List API also allows customers to receive price change updates via Amazon SNS notifications.

Expanding the AWS Price List API

Over the last three months we have expanded the AWS Price List API to provide pricing data for all of the AWS services.  Now customers that are doing cost analysis around building cloud-based solutions or moving on-premises workloads to the cloud have easier access to the comprehensive price list of AWS services. This will enable customers and partners to have greater control over the budgeting, forecasting, and planning of their cloud solutions.

In addition to the expansion of the AWS Price List API, customers can sign up to receive notifications about price cuts, new services, and new instance types. Upon subscribing, you can choose to receive updates once a day or every time a price update occurs. If you choose to be notified once a day, the SNS notification will include all price changes applied during that day. Here’s a sample of an email notification that would be received upon subscribing to the price list API:
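Subscribing is a single SNS call (or a couple of console clicks). Here's a CLI sketch using the email protocol; the topic ARN below is the one documented for Price List API notifications, but double-check it against the documentation before subscribing:

$ aws sns subscribe \
    --topic-arn "arn:aws:sns:us-east-1:278350005181:price-list-api" \
    --protocol email \
    --notification-endpoint "you@example.com"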

Let’s do a quick review of how to use the Price List API. First, to access pricing information via the Price List API, you download two types of files: the Offer index file and the Offer file. The Offer index file is available as a JSON file and lists the supported AWS services along with the URL for each associated service Offer file. The Offer file lists the products and prices for a single AWS service and is available in either CSV or JSON format. Both can be accessed via simple download links for an easy method to obtain the pricing data files.
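For example, you can pull the Offer index file and a service's Offer file with nothing more than curl (the EC2 URL below follows the documented naming pattern):

$ curl -s "https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/index.json"
$ curl -sO "https://pricing.us-east-1.amazonaws.com/offers/v1.0/aws/AmazonEC2/current/index.csv"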

You can get started today by using the Price List API to obtain pricing data for any of the AWS services. With the expansion of the AWS Price List API to include all of the AWS services, subscribing to the Price List API notifications is a great way to keep up to date on the latest AWS price cuts and get introductions to new services.

To learn more about the AWS Price List API and for more information on how to subscribe to API notifications, check out the previous AWS Blog post introducing the service and the Using the AWS Price List API documentation.

– Tara

Look Before You Leap – December 31, 2016 Leap Second on AWS

by Jeff Barr | in Amazon EC2, Amazon RDS, Announcements

If you are counting down the seconds before 2016 is history, be sure to add one at the very end!

The next leap second (the 27th so far) will be inserted on December 31, 2016 at 23:59:60 UTC. This will keep Earth time (Coordinated Universal Time) close to mean solar time and means that the last minute of the year will have 61 seconds.

The information in our last post (Look Before You Leap – The Coming Leap Second and AWS) still applies, with a few nuances and new developments:

AWS Adjusted Time – We will spread the extra second over the 24 hours surrounding the leap second (11:59:59 on December 31, 2016 to 12:00:00 on January 1, 2017). AWS Adjusted Time and Coordinated Universal Time will be in sync at the end of this time period.
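If you are curious about the size of the adjustment, each AWS Adjusted Time second during that window is 86401/86400 SI seconds long. A quick bit of shell arithmetic (not an AWS tool, just a sanity check) shows this works out to roughly 11.6 extra microseconds per second:

$ awk 'BEGIN { printf "%.9f\n", 86401 / 86400 }'
1.000011574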

Microsoft Windows – Instances that are running Microsoft Windows AMIs supplied by Amazon will follow AWS Adjusted Time.

Amazon RDS – The majority of Amazon RDS database instances will show “23:59:59” twice. Oracle versions 11.2.0.2, 11.2.0.3, and 12.1.0.1 will follow AWS Adjusted Time. For Oracle versions 11.2.0.4 and 12.1.0.2, contact AWS Support for more information.

Need Help?
If you have any questions about this upcoming event, please contact AWS Support or post in the EC2 Forum.

Jeff;


EC2 Systems Manager – Configure & Manage EC2 and On-Premises Systems

by Jeff Barr | in Amazon EC2, EC2 Systems Manager, Windows

Last year I introduced you to the EC2 Run Command and showed you how to use it to do remote instance management at scale, first for EC2 instances and then in hybrid and cross-cloud environments. Along the way we added support for Linux instances, making EC2 Run Command a widely applicable and incredibly useful administration tool.

Welcome to the Family
Werner announced the EC2 Systems Manager at AWS re:Invent and I’m finally getting around to telling you about it!

This is a new management service that includes an enhanced version of EC2 Run Command along with eight other equally useful functions. Like EC2 Run Command, it supports hybrid and cross-cloud environments composed of instances and services running Windows and Linux. You simply open up the AWS Management Console, select the instances that you want to manage, and define the tasks that you want to perform (API and CLI access is also available).

Here’s an overview of the improvements and new features:

Run Command – Now allows you to control the rate of command executions, and to stop issuing commands if the error rate grows too high.

State Manager – Maintains a defined system configuration via policies that are applied at regular intervals.

Parameter Store – Provides centralized (and optionally encrypted) storage for license keys, passwords, user lists, and other values.

Maintenance Window – Lets you specify a time window for installation of updates and other system maintenance.

Software Inventory – Gathers a detailed software and configuration inventory (with user-defined additions) from each instance.

AWS Config Integration – In conjunction with the new software inventory feature, AWS Config can record software inventory changes to your instances.

Patch Management – Simplifies and automates the patching process for your instances.

Automation – Simplifies AMI building and other recurring AMI-related tasks.

Let’s take a look at each one…

Run Command Improvements
You can now control the number of concurrent command executions. This can be useful in situations where the command references a shared, limited resource such as an internal update or patch server and you want to avoid overloading it with too many requests.

This feature is currently accessible from the CLI and from the API. Here’s a CLI example that limits the number of concurrent executions to 2:

$ aws ssm send-command \
  --instance-ids "i-023c301591e6651ea" "i-03cf0fc05ec82a30b" "i-09e4ed09e540caca0" "i-0f6d1fe27dc064099" \
  --document-name "AWS-RunShellScript" \
  --comment "Run a shell script or specify the commands to run." \
  --parameters commands="date" \
  --timeout-seconds 600 --output-s3-bucket-name "jbarr-data" \
  --region us-east-1 --max-concurrency 2

Here’s a more interesting variant that is driven by tags and tag values by specifying --targets instead of --instance-ids:

$ aws ssm send-command \
  --targets "Key=tag:Mode,Values=Production" ... 

You can also stop issuing commands if they are returning errors, with the option to specify either a maximum number of errors or a failure rate:

$ aws ssm send-command --max-errors 5 ... 
$ aws ssm send-command --max-errors 5% ...

State Manager
State Manager helps to keep your instances in a state that you define in a document. You create the document, associate it with a set of target instances, and then create an association to specify when and how often the document should be applied. Here’s a document that updates the message of the day file:

And here’s the association (this one uses tags so that it applies to current instances and to others that are launched later and are tagged in the same way):

Specifying targets using tags makes the association future-proof, and allows it to work as expected in dynamic, auto-scaled environments. I can see all of my associations, and I can run the new one by selecting it and clicking on Apply Association Now:
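If you'd rather script this, here's a rough CLI sketch of the same flow (the document name, content file, and schedule are illustrative; the JSON content for a custom document follows the SSM document schema):

$ aws ssm create-document \
    --name "Update-MOTD" \
    --content file://update-motd.json
$ aws ssm create-association \
    --name "Update-MOTD" \
    --targets "Key=tag:Mode,Values=Production" \
    --schedule-expression "cron(0 2 ? * SUN *)"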

Parameter Store
This feature simplifies storage and management for license keys, passwords, and other data that you want to distribute to your instances. Each parameter has a type (string, string list, or secure string), and can be stored in encrypted form. Here’s how I create a parameter:
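From the CLI, the equivalent is a single put-parameter call (the name and value are illustrative; SecureString values are encrypted at rest):

$ aws ssm put-parameter \
    --name "prod-license-key" \
    --type "SecureString" \
    --value "ABCD-1234-EFGH-5678"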

And here’s how I reference the parameter in a command:
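In a Run Command invocation, parameters are referenced with the {{ssm:parameter-name}} syntax. A sketch, reusing the parameter created above (the target tag and file path are illustrative; see the docs for which parameter types can be resolved this way):

$ aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=tag:Mode,Values=Production" \
    --parameters commands='echo {{ssm:prod-license-key}} > /etc/myapp/license'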

Maintenance Window
This feature allows specification of a time window for installation of updates and other system maintenance. Here’s how I create a weekly time window that opens for four hours every Saturday:

After I create the window I need to assign a set of instances to it. I can do this by instance Id or by tag:

And then I need to register a task to perform during the maintenance window. For example, I can run a Linux shell script:
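The same setup can be scripted. Here's a sketch of the first two steps (the window ID in the second call comes from the output of the first, and is illustrative here; duration and cutoff are expressed in hours):

$ aws ssm create-maintenance-window \
    --name "Every-Saturday-4AM" \
    --schedule "cron(0 4 ? * SAT *)" \
    --duration 4 \
    --cutoff 1 \
    --allow-unassociated-targets
$ aws ssm register-target-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --resource-type "INSTANCE" \
    --targets "Key=tag:Mode,Values=Production"

The third step, registering the actual task, is done with register-task-with-maintenance-window.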

Software Inventory
This feature collects information about software and settings for a set of instances. To access it, I click on Managed Instances and Setup Inventory:

Setting up the inventory creates an association between an AWS-owned document and a set of instances. I simply choose the targets, set the schedule, and identify the types of items to be inventoried, then click on Setup Inventory:
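Behind the scenes this is an association with the AWS-GatherSoftwareInventory document, so you can also set it up from the CLI. A sketch (the instance ID is reused from the Run Command example; the parameters enable a few of the inventory types):

$ aws ssm create-association \
    --name "AWS-GatherSoftwareInventory" \
    --targets "Key=instanceids,Values=i-023c301591e6651ea" \
    --schedule-expression "rate(1 day)" \
    --parameters applications=Enabled,awsComponents=Enabled,networkConfig=Enabled,windowsUpdates=Enabled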

After the inventory runs, I can select an instance and then click on the Inventory tab in order to inspect the results:

The results can be filtered for further analysis. For example, I can narrow down the list of AWS Components to show only development tools and libraries:

I can also run inventory-powered queries across all of the managed instances. Here’s how I can find Windows Server 2012 R2 instances that are running a version of .NET older than 4.6:
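The same data is reachable from the CLI. For example, here's how you might pull the installed-application inventory for a single instance (instance ID reused from earlier):

$ aws ssm list-inventory-entries \
    --instance-id "i-023c301591e6651ea" \
    --type-name "AWS:Application"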

AWS Config Integration
The results of the inventory can be routed to AWS Config, allowing you to track changes to the applications, AWS components, instance information, network configuration, and Windows Updates over time. To access this information, I click on Managed instance information above the Config timeline for the instance:

The three lines at the bottom lead to the inventory information. Here’s the network configuration:

Patch Management
This feature helps you to keep the operating system on your Windows instances up to date. Patches are applied during maintenance windows that you define, and are done with respect to a baseline. The baseline specifies rules for automatic approval of patches based on classification and severity, along with an explicit list of patches to approve or reject.

Here’s my baseline:

Each baseline can apply to one or more patch groups. Instances within a patch group have a Patch Group tag. I named my group Win2016:

Then I associated the value with the baseline:
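In CLI terms, the tagging and the association look roughly like this (the instance and baseline IDs are illustrative):

$ aws ec2 create-tags \
    --resources "i-0f6d1fe27dc064099" \
    --tags "Key=Patch Group,Value=Win2016"
$ aws ssm register-patch-baseline-for-patch-group \
    --baseline-id "pb-0123456789abcdef0" \
    --patch-group "Win2016"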

The next step is to arrange to apply the patches during a maintenance window using the AWS-ApplyPatchBaseline document:

I can return to the list of Managed Instances and use a pair of filters to find out which instances are in need of patches:

Automation
Last but definitely not least, the Automation feature simplifies common AMI-building and updating tasks. For example, you can build a fresh Amazon Linux AMI each month using the AWS-UpdateLinuxAmi document:
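You can kick off such a run from the CLI as well. A sketch (the source AMI ID and role names are illustrative; see the AWS-UpdateLinuxAmi documentation for its full parameter list):

$ aws ssm start-automation-execution \
    --document-name "AWS-UpdateLinuxAmi" \
    --parameters "SourceAmiId=ami-0b33d91d,InstanceIamRole=ManagedInstanceRole,AutomationAssumeRole=arn:aws:iam::123456789012:role/AutomationServiceRole"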

Here’s what happens when this automation is run:

Available Now
All of the EC2 Systems Manager features and functions that I described above are available now and you can start using them today at no charge. You pay only for the resources that you manage.

Jeff;


AWS Cost Explorer Update – Reserved Instance Utilization Report

by Jeff Barr | in AWS Cost Explorer

Cost Explorer is a tool that helps you to manage your AWS spending with reporting and analytics (read The New Cost Explorer for AWS to learn more). You can sign up with a single click and then visualize your AWS costs, analyze trends, and look at spending patterns. You can look at your spending through a set of predefined views (by service, by linked account, daily, and so forth). You can drill in to specific areas of interest and you can also set up custom filters.

Enterprise-scale AWS customers invariably take advantage of the cost savings (up to 75% when compared to On-Demand) provided by Reserved Instances. These customers commonly have thousands of Reserved Instance (RI) subscriptions and want to make sure that they are making great use of them.

New Reserved Instance Utilization Report
Today we are adding a new Reserved Instance Utilization report to Cost Explorer.  It gives you the power to track and manage aggregate RI utilization across your entire organization, even as your usage grows to thousands of subscriptions spread across linked accounts. You can look at aggregate usage or individual RI usage going back up to one year, and you can define an RI utilization threshold to monitor your actual usage against it. If you find an RI subscription that is tracking below your predefined utilization target, you can drill down and find the account owner, instance type, and unused hours. You also have access to all of the existing filtering functions provided by Cost Explorer.

I don’t happen to own thousands of Reserved Instances, so I’ll use some sample screen shots and test data to show you what the report looks like and how you can use it! The new report is available in daily and monthly flavors from the Cost Explorer menu (this menu includes three reports that I defined myself):

Here’s the Daily RI Utilization report:

The RIs are displayed in descending order of RI Utilization; the least-used RIs are at the top of the list by default. I can see utilization over time, along with detailed information about each RI, with a click:

I can filter by instance type or other attributes in order to focus on particular RIs:

I can also set the desired time range. Here’s how I can verify that I made good use of my d2.8xlarge RIs last month by filtering on an instance type and setting the time range:

I can set the Utilization Target to any desired percentage and it will be shown on the graph as a reference line. Here, I can see that utilization dipped below 80% in June:

From there I can switch to the daily view and zoom in (click and drag) to learn more:

As I mentioned earlier, you can also filter on linked accounts, regions, and other aspects of each RI.

Now Available
This report is available now and you can start using it today!

Jeff;


Amazon ECS – Support for Windows Containers (Beta)

by Jeff Barr | in EC2 Container Service

I sincerely hope that you are familiar with container-based computing and that you have at least a basic understanding of the value of containerization. As I noted back in 2014, packaging your cloud-based application as a collection of containers, each specified declaratively, gives you a laundry list of benefits including consistency between your development and production environments, a distributed application platform as an architectural base, development efficiency, and operational efficiency.

We launched Amazon EC2 Container Service in late 2014 with support for Linux containers. So far this year we have added support for Application Load Balancing, IAM Roles for ECS tasks, Service Auto Scaling, the Amazon Linux Container Image, and the Blox Open Source Scheduler.

Support for Windows Containers
Today we are continuing our string of ECS launches by adding beta-level support for Windows containers. You can now start to containerize and test your Windows applications while we finalize this feature ahead of production use.

To get started, you simply specify the Windows Server 2016 Base with Containers AMI when you create your cluster (you cannot mix Linux and Windows in the same cluster).
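If you want to experiment from the CLI instead, a minimal sketch looks like this (the cluster name, AMI ID, key pair, and instance profile are illustrative; the PowerShell user data is how Windows container instances typically join a cluster, so look up the current AMI ID for your region first):

$ aws ecs create-cluster --cluster-name "windows-demo"
$ aws ec2 run-instances \
    --image-id "ami-XXXXXXXX" \
    --instance-type "m4.xlarge" \
    --key-name "my-key-pair" \
    --iam-instance-profile "Name=ecsInstanceRole" \
    --user-data "<powershell>Initialize-ECSAgent -Cluster 'windows-demo' -EnableTaskIAMRole</powershell>"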

You can also use the Windows Containers AWS CloudFormation Template as a starting point. The template runs in a VPC and creates a Windows-powered cluster with a configurable number of container instances. It also creates IAM Roles, an Application Load Balancer, security groups, an Amazon ECS task definition, an Amazon ECS service, and an Auto Scaling policy. You can use the template as-is or you can modify it for your own use. I opened up the template in the CloudFormation Designer in order to see how all of the parts fit together. I rearranged the items in order to make the structure a bit more apparent; here’s what it looks like:

You can read Windows Containers (Beta) to learn more about support for Windows Containers.

Things to Know
The Windows Server Docker images are fairly large (approximately 9 GiB). Your Windows container instances will require more storage space than your Linux container instances, so plan accordingly. Due to the size of the images, downloading and extracting the contents on initial use can take up to 15 minutes. If you use IAM Roles for tasks (read Windows IAM Roles for Tasks to learn more), this time can double. For a full list of other issues and caveats, read Windows Container Caveats.

Jeff;


Amazon EFS Update – On-Premises Access via Direct Connect

by Jeff Barr | in Amazon EC2, Amazon Elastic File System, AWS Direct Connect, AWS re:Invent

I introduced you to Amazon Elastic File System last year (Amazon Elastic File System – Shared File Storage for Amazon EC2) and announced production readiness earlier this year (Amazon Elastic File System – Production-Ready in Three Regions). Since then, thousands of AWS customers have used it to set up, scale, and operate shared file storage in the cloud.

Today we are making EFS even more useful with the introduction of simple and reliable on-premises access via AWS Direct Connect. This has been a much-requested feature and I know that it will be useful for migration, cloudbursting, and backup. To use this feature for migration, you simply attach an EFS file system to your on-premises servers, copy your data to it, and then process it in the cloud as desired, leaving your data in AWS for the long term.  For cloudbursting, you would copy on-premises data to an EFS file system, analyze it at high speed using a fleet of Amazon Elastic Compute Cloud (EC2) instances, and then copy the results back on-premises or visualize them in Amazon QuickSight.

You’ll get the same file system access semantics including strong consistency and file locking, whether you access your EFS file systems from your on-premises servers or from your EC2 instances (of course, you can do both concurrently). You will also be able to enjoy the same multi-AZ availability and durability that is part-and-parcel of EFS.

In order to take advantage of this new feature, you will need to use Direct Connect to set up a dedicated network connection between your on-premises data center and an Amazon Virtual Private Cloud. Then you need to make sure that your filesystems have mount targets in subnets that are reachable via the Direct Connect connection:

You also need to add a rule to the mount target’s security group in order to allow inbound TCP traffic to port 2049 (NFS) from your on-premises servers:
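Here's what that rule looks like as a CLI call (the security group ID and on-premises CIDR are illustrative):

$ aws ec2 authorize-security-group-ingress \
    --group-id "sg-0123456789abcdef0" \
    --protocol tcp --port 2049 \
    --cidr "192.168.0.0/16"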

After you create the file system, you can reference the mount targets by their IP addresses, NFS-mount them on-premises, and start copying files. The IP addresses are available from within the AWS Management Console:
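The mount itself is a standard NFS v4.1 mount against the mount target's IP address. A sketch using the documented EFS mount options (the IP address is illustrative):

$ sudo mkdir -p /mnt/efs
$ sudo mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    10.0.1.32:/ /mnt/efs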

The Management Console also provides you with access to step-by-step directions! Simply click on the On-premises mount instructions:

And follow along:

This feature is available today at no extra charge in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and US East (Ohio) Regions.

Jeff;


New – Amazon Cognito Groups and Fine-Grained Role-Based Access Control

by Tara Walker | in Amazon Cognito

One of the challenges in building applications has been user authentication and management.  Let’s face it, not many developers want to build yet another user identification and authentication system for their application, nor would they want to cause a user to create yet another account unless needed.  Amazon Cognito makes it simpler for developers to manage user identities, authentication, and permissions in order to access their applications’ data and back end systems.  Now, if only there were a service feature to make it even easier for developers to assign different permissions to different users of their applications.

Today we are excited to announce Cognito User Pools support for groups and Cognito Federated Identities support for fine-grained Role-Based Access Control (RBAC).  With Groups support in Cognito, developers can easily customize users’ app experience by creating groups which represent different user types and app usage permissions.  Developers have the ability to add users and remove users from groups and manage group permissions for sets of users.

Speaking of permissions, support for fine-grained Role-Based Access Control (RBAC) in Cognito Federated Identities allows developers to now assign different IAM roles to different authenticated users.  Previously, Amazon Cognito only supported one IAM role for all authenticated users.  With fine-grained RBAC, a developer can map federated users to different IAM roles; this functionality is available for both user authentication using existing identity providers like Facebook or Active Directory and using Cognito User Pools.

Groups in Cognito User Pools

The best way to examine the new Cognito group feature is to take a walkthrough of creating a new group in the Amazon Cognito console and adding users to the different group types.


After selecting my user pool, TestAppPool, I see the updated menu item, Users and groups.  Upon selecting that menu option, a panel is presented with tabs for both Users and Groups.  To create my new group, I select the Create group button.

A dialog box will open to allow for the creation of my group.  Here I will create a group for admin users named AdminGroup.  I will fill in the name for the group, provide a description, and set the order of precedence, and the group is ready to be created.  Note that a group’s numerical precedence determines which group’s permissions are prioritized, and therefore utilized, for users that have been assigned to multiple groups.  The lower the numerical precedence, the higher the prioritization of the group for the user.  Since this is my AdminGroup, I will give this group a precedence of zero (0).  After I click the Create group button, I have successfully created my user pool group.


Now all that is left to do is add my user(s) to the group. In my test app pool, I have two users, TestAdminUser and TestUnregisteredUser, as shown below. I will add my TestAdminUser to my newly created group.


To add my user to the AdminGroup user pool group, I simply go into the Groups tab and select my AdminGroup.  Once the AdminGroup details screen is shown, a click of the Add users button will bring up a dialog box displaying users within my user pool.  Adding a user to this group is a straightforward process, which only requires me to select the plus symbol next to the desired username.  Once I receive the confirmation that the user has been added to the group, the process is complete.

As you can see from the walkthrough, it is easy for a developer to create groups in user pools. Groups can be created and managed in a user pool from the AWS Management Console, the APIs, and the CLI. As a developer you can create, read, update, delete, and list the groups for a user pool using AWS credentials. Each user pool can contain up to 25 groups.  Additionally, you can add users and remove users from groups within a user pool, and you can use groups to control permissions to access your resources in AWS by assigning AWS IAM roles to the groups.  You can also use Amazon Cognito combined with Amazon API Gateway to control permissions to your own back end resources.
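For example, the group creation and user assignment from the walkthrough above look roughly like this from the CLI (the user pool ID is illustrative):

$ aws cognito-idp create-group \
    --user-pool-id "us-east-1_EXAMPLE" \
    --group-name "AdminGroup" \
    --description "Administrative users" \
    --precedence 0
$ aws cognito-idp admin-add-user-to-group \
    --user-pool-id "us-east-1_EXAMPLE" \
    --username "TestAdminUser" \
    --group-name "AdminGroup"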


Fine-grained Role-Based Access Control in Cognito Federated Identities

Let’s now dig into the Cognito Federated Identities feature, fine-grained Role-Based Access Control, which we will refer to as RBAC going forward. Before we dive into RBAC, let’s do a quick review of the features of Cognito Federated Identities.  Cognito Identity assigns users a set of temporary, limited-privilege credentials to access the AWS resources from your application without having to use AWS account credentials. The permissions for each user are controlled through AWS IAM roles that you create.

At this point let’s journey into RBAC by doing another walkthrough in the management console.  Once in the console, with the Cognito service selected, we will select Federated Identities.  I think it would be best to show Cognito user pools and Federated Identities in action while examining RBAC, so I am going to create a new identity pool that utilizes Cognito user pools as its authentication provider.  To create a new pool, I will first enter a name for my identity pool and select the Enable access to unauthenticated identities checkbox.  Then, under Authentication Providers, I will select the Cognito tab so that I can enter my TestAppPool user pool ID and the app client ID.  Please note that you must have created an app (app client) within your Cognito user pool in order to obtain the app client ID and to allow the app leveraging the Cognito identity pool to access the associated user pool.


Now that we have created our identity pool, let’s assign role-based access for the Cognito user pool authentication method.  The simplest way to assign different roles is by defining rules in a Cognito identity pool.  Each rule specifies a user attribute or, as noted in the console, a claim: a value in the token for that attribute that will be matched by the rule and associated with a specific IAM role.

In order to truly show the benefit of RBAC, I will need a role for our Test App that gives users in the Engineering department access to put objects in S3 and to access DynamoDB.  To create this role, I first have to create a policy with PutObject access to S3 and GetItem, Query, Scan, and BatchGetItem access to DynamoDB. Let’s call this policy TestAppEngineerPolicy.  After constructing the aforementioned policy, I will create an IAM role named EngineersRole, which will leverage this policy.
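As a sketch, the policy creation might look like this from the CLI (the bucket, table, and account ID are illustrative placeholders, not values from this walkthrough); creating EngineersRole and attaching the policy is a similar create-role/attach-role-policy pair:

$ cat > TestAppEngineerPolicy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-test-app-bucket/*" },
    { "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:Query",
                 "dynamodb:Scan", "dynamodb:BatchGetItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/TestAppTable" }
  ]
}
EOF
$ aws iam create-policy \
    --policy-name "TestAppEngineerPolicy" \
    --policy-document file://TestAppEngineerPolicy.json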

At this point we have a role with fine-grained access to AWS resources, so let’s return to our Cognito identity pool.  Click Edit identity pool and expand the Authentication providers section.  Since the authentication provider for our identity pool is a Cognito user pool, we will select the Cognito tab.  Since we are establishing fine-grained RBAC for the federated identity, I will focus my attention on the Authenticated role selection section of the Authentication provider to define a rule.  In this section, click the drop down and select the option Choose role with rules.


We will now set up a rule with a claim (an attribute), a value to match, and the specific IAM role, EngineersRole. For our example, the rule I am creating will assign our specific IAM role, i.e. EngineersRole, to any users authenticated in our Cognito user pool with a department attribute set to ‘Engineering’. Please note: the department attribute that we are basing our rule on is a custom attribute that I created in our user pool, TestAppPool.


Now that we have that cleared up, let’s focus back on the creation of our rule.  For the claim, I will type the aforementioned custom attribute, department.  This rule will be applicable when the value of department is equal to the string “Engineering”; therefore, in the Match Type field I will select the Equals match type.  Finally, I will type the actual string value, “Engineering”, for the attribute value that should be matched in the rule.  If a user has a matching value for the department attribute, they can assume the EngineersRole IAM role when they get credentials. After completing this and clicking the Save Changes button, I have successfully created a rule that allows users authenticated with our Cognito user pool who are in the Engineering department to have different permissions than other authenticated users of the application.


Since we’ve completed our walkthrough of setting up a rule to assign different roles in a Cognito identity pool, let us discuss some key points to remember about fine-grained RBAC.  Firstly, rules are defined with an order, and the IAM role for the first matching rule will be applied. Secondly, to set up RBAC you can define rules or leverage the roles passed via the ID token that was assigned by the user pool. For each authentication provider configured in your identity pool, a maximum of 25 rules can be created. Additionally, user permissions are controlled via AWS IAM roles that you create.

Pricing and Availability

Developers can get started right away to take advantage of these exciting new features.  Learn more about these new features and the other benefits of leveraging the Amazon Cognito service by visiting our developer resources page.

The great news is that there is no additional cost for using groups within a user pool. You pay only for Monthly Active Users (MAUs) after the free tier. Also remember that using the Cognito Federated Identities feature for controlling user permissions and generating unique identifiers is always free with Amazon Cognito.  See the Amazon Cognito pricing page for more information.

– Tara

New – AWS Application Discovery Service Console

by Jeff Barr | in AWS Application Discovery Service, AWS Management Console

AWS Application Discovery Service helps you to plan your migration to the cloud. As a central component of the AWS Cloud Adoption Framework, it automates the process of discovering and collecting important information about your systems (read New – AWS Application Discovery Service – Plan Your Cloud Migration to learn more).

There are two different data collection options. You can install a lightweight agent on your physical servers or VMs, or you can run the Agentless Discovery Connector in your VMware environment. Either way, AWS Application Discovery Service collects the following information:

  • Installed applications and packages.
  • Running applications and processes.
  • TCP v4 and v6 connections.
  • Kernel brand and version.
  • Kernel configuration.
  • Kernel modules.
  • CPU and memory usage.
  • Process creation and termination events.
  • Disk and network events.
  • NIC information.
  • Use of DNS, DHCP, and Active Directory.

The lightweight agent also collects information about TCP listening ports and associated processes; we will add this feature to the Agentless Discovery Connector sometime soon.

The information is collected, stored locally for optional review, and then uploaded to the cloud across a secure connection on port 443. It is processed and correlated, and then stored in a repository in encrypted form. You can then use the information to help you to choose the applications that you would like to migrate.

New Application Discovery Service Console
When I first wrote about this service, the processed, correlated information was available in XML and CSV formats for use with analysis and migration tools. Today we are launching a new Application Discovery Service Console that is designed to simplify the entire cloud migration process. It helps you to install the agent, discover the applications, map application dependencies, and measure application performance.

Let’s take a tour! The landing page gives you an overview of the service, with a listing of the benefits and features:

Then you choose your data collection option (agent on the servers or VMs, or agentless in your VMware environment). You can click on Learn more for detailed setup instructions.

With the agents and connectors (you can use both together) set up and ready to go, you can start discovery from selected agents/connectors by clicking on Start data collection:
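Data collection can also be started from the CLI once your agents have registered; the agent ID below is an illustrative placeholder (describe-agents lists the real ones):

$ aws discovery describe-agents
$ aws discovery start-data-collection-by-agent-ids \
    --agent-ids "12345678-abcd-efgh-ijkl-example"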

You can see the servers as they are discovered:

You can select one or more servers and group them into a named application, again with a couple of clicks:

You can add one or more tags to each server:
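If your CLI version includes the application-grouping operations, the grouping and tagging can be scripted too. A sketch (all IDs and names are illustrative):

$ aws discovery create-application --name "Billing-System"
$ aws discovery associate-configuration-items-to-application \
    --application-configuration-id "d-application-EXAMPLE" \
    --configuration-ids "d-server-EXAMPLE"
$ aws discovery create-tags \
    --configuration-ids "d-server-EXAMPLE" \
    --tags "key=environment,value=production"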

You can see all of the detailed information for each server including network connections, processes, and processes that are producing or consuming network traffic:

And:

You can see a list of the applications (each one running on one or more servers):

You can also learn more about each application:

With this information at hand, you will be ready to plan and execute your migration to the AWS Cloud! To learn more, read the Application Discovery Service User Guide.

Jeff;

PS – Our Application Discovery Service Partners would love to help you with your cloud migration.