AWS Blog

Look Before You Leap – December 31, 2016 Leap Second on AWS

by Jeff Barr | in Amazon EC2, Amazon RDS, Announcements

If you are counting down the seconds before 2016 is history, be sure to add one at the very end!

The next leap second (the 27th so far) will be inserted on December 31, 2016 at 23:59:60 UTC. This keeps Coordinated Universal Time close to mean solar time, and it means that the last minute of the year will have 61 seconds.

The information in our last post (Look Before You Leap – The Coming Leap Second and AWS), still applies, with a few nuances and new developments:

AWS Adjusted Time – We will spread the extra second over the 24 hours surrounding the leap second (11:59:59 on December 31, 2016 to 12:00:00 on January 1, 2017). AWS Adjusted Time and Coordinated Universal Time will be in sync at the end of this time period.

Microsoft Windows – Instances that are running Microsoft Windows AMIs supplied by Amazon will follow AWS Adjusted Time.

Amazon RDS – The majority of Amazon RDS database instances will show “23:59:59” twice. Oracle versions 11.2.0.2, 11.2.0.3, and 12.1.0.1 will follow AWS Adjusted Time. For Oracle versions 11.2.0.4 and 12.1.0.2 contact AWS Support for more information.
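The AWS Adjusted Time behavior described above can be modeled as a simple linear smear. The sketch below is an illustration under the assumption of an even (linear) spread; AWS has not published the exact smoothing function:

```python
from datetime import datetime, timezone

# Assumption: the extra second is spread evenly across the 24-hour window,
# so AWS Adjusted Time gradually falls up to one second behind a leap-aware
# UTC clock and is back in sync when the window closes.
WINDOW_START = datetime(2016, 12, 31, 12, 0, 0, tzinfo=timezone.utc)
WINDOW_SECONDS = 24 * 60 * 60  # 24 hours

def smear_offset(now: datetime) -> float:
    """Seconds by which AWS Adjusted Time lags leap-aware UTC at `now`."""
    elapsed = (now - WINDOW_START).total_seconds()
    if elapsed <= 0:
        return 0.0                       # smear window not yet open
    if elapsed >= WINDOW_SECONDS:
        return 1.0                       # full leap second absorbed
    return elapsed / WINDOW_SECONDS      # partway through the smear
```

At midnight UTC (halfway through the window), this model puts AWS Adjusted Time half a second behind a leap-aware clock.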

Need Help?
If you have any questions about this upcoming event, please contact AWS Support or post in the EC2 Forum.

Jeff;

 

EC2 Systems Manager – Configure & Manage EC2 and On-Premises Systems

by Jeff Barr | in Amazon EC2, EC2 Systems Manager, Windows

Last year I introduced you to the EC2 Run Command and showed you how to use it to do remote instance management at scale, first for EC2 instances and then in hybrid and cross-cloud environments. Along the way we added support for Linux instances, making EC2 Run Command a widely applicable and incredibly useful administration tool.

Welcome to the Family
Werner announced the EC2 Systems Manager at AWS re:Invent and I’m finally getting around to telling you about it!

This is a new management service that includes an enhanced version of EC2 Run Command along with seven other equally useful functions. Like EC2 Run Command, it supports hybrid and cross-cloud environments composed of instances and services running Windows and Linux. You simply open the AWS Management Console, select the instances that you want to manage, and define the tasks that you want to perform (API and CLI access is also available).

Here’s an overview of the improvements and new features:

Run Command – Now allows you to control the velocity of command executions, and to stop issuing commands if the error rate grows too high.

State Manager – Maintains a defined system configuration via policies that are applied at regular intervals.

Parameter Store – Provides centralized (and optionally encrypted) storage for license keys, passwords, user lists, and other values.

Maintenance Window – Specifies a time window for the installation of updates and other system maintenance.

Software Inventory – Gathers a detailed software and configuration inventory (with user-defined additions) from each instance.

AWS Config Integration – In conjunction with the new software inventory feature, AWS Config can record software inventory changes to your instances.

Patch Management – Simplifies and automates the patching process for your instances.

Automation – Simplifies AMI building and other recurring AMI-related tasks.

Let’s take a look at each one…

Run Command Improvements
You can now control the number of concurrent command executions. This can be useful in situations where the command references a shared, limited resource such as an internal update or patch server and you want to avoid overloading it with too many requests.

This feature is currently accessible from the CLI and from the API. Here’s a CLI example that limits the number of concurrent executions to 2:

$ aws ssm send-command \
  --instance-ids "i-023c301591e6651ea" "i-03cf0fc05ec82a30b" "i-09e4ed09e540caca0" "i-0f6d1fe27dc064099" \
  --document-name "AWS-RunShellScript" \
  --comment "Run a shell script or specify the commands to run." \
  --parameters commands="date" \
  --timeout-seconds 600 --output-s3-bucket-name "jbarr-data" \
  --region us-east-1 --max-concurrency 2

Here’s a more interesting variant, driven by tags and tag values, that specifies --targets instead of --instance-ids:

$ aws ssm send-command \
  --targets "Key=tag:Mode,Values=Production" ... 

You can also stop issuing commands if they are returning errors, with the option to specify either a maximum number of errors or a failure rate:

$ aws ssm send-command --max-errors 5 ... 
$ aws ssm send-command --max-errors 5% ...
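Once a command has been issued, per-instance progress and output can be checked from the CLI as well (the command ID below is a placeholder for the ID returned by send-command):

```
$ aws ssm list-command-invocations \
  --command-id "b8eac879-0541-439d-94ec-47a80d554f44" \
  --details --region us-east-1
```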

State Manager
State Manager helps to keep your instances in a defined state, as defined by a document. You create the document, associate it with a set of target instances, and then create an association to specify when and how often the document should be applied. Here’s a document that updates the message of the day file:
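A minimal document of this kind, sketched here under the assumption of the 1.2 document schema and the aws:runShellScript plugin (the message text is illustrative), might look like this:

```json
{
  "schemaVersion": "1.2",
  "description": "Update the message of the day",
  "runtimeConfig": {
    "aws:runShellScript": {
      "properties": [
        {
          "id": "0.aws:runShellScript",
          "runCommand": [
            "echo 'Welcome! This instance is managed by State Manager.' > /etc/motd"
          ]
        }
      ]
    }
  }
}
```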

And here’s the association (this one uses tags so that it applies to current instances and to others that are launched later and are tagged in the same way):

Specifying targets using tags makes the association future-proof, and allows it to work as expected in dynamic, auto-scaled environments. I can see all of my associations, and I can run the new one by selecting it and clicking on Apply Association Now:

Parameter Store
This feature simplifies storage and management for license keys, passwords, and other data that you want to distribute to your instances. Each parameter has a type (string, string list, or secure string), and can be stored in encrypted form. Here’s how I create a parameter:

And here’s how I reference the parameter in a command:
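The same steps can be sketched from the CLI. The parameter name licenseKey and its value are hypothetical examples; the {{ssm:...}} syntax substitutes the stored value into the command at execution time (for sensitive values, the SecureString type stores the parameter in encrypted form):

```
$ aws ssm put-parameter --name "licenseKey" --type "String" \
    --value "ABC-1234-XYZ"
$ aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --targets "Key=tag:Mode,Values=Production" \
    --parameters commands='echo "{{ssm:licenseKey}}"'
```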

Maintenance Window
This feature allows specification of a time window for installation of updates and other system maintenance. Here’s how I create a weekly time window that opens for four hours every Saturday:

After I create the window I need to assign a set of instances to it. I can do this by instance Id or by tag:

And then I need to register a task to perform during the maintenance window. For example, I can run a Linux shell script:
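The window creation and target registration can also be sketched from the CLI (the window ID, schedule, and tag values are illustrative placeholders):

```
$ aws ssm create-maintenance-window --name "Every-Saturday" \
    --schedule "cron(0 10 ? * SAT *)" \
    --duration 4 --cutoff 1 --allow-unassociated-targets
$ aws ssm register-target-with-maintenance-window \
    --window-id "mw-0c50858d01EXAMPLE" \
    --resource-type "INSTANCE" \
    --targets "Key=tag:Mode,Values=Production"
```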

Software Inventory
This feature collects information about software and settings for a set of instances. To access it, I click on Managed Instances and Setup Inventory:

Setting up the inventory creates an association between an AWS-owned document and a set of instances. I simply choose the targets, set the schedule, and identify the types of items to be inventoried, then click on Setup Inventory:

After the inventory runs, I can select an instance and then click on the Inventory tab in order to inspect the results:

The results can be filtered for further analysis. For example, I can narrow down the list of AWS Components to show only development tools and libraries:

I can also run inventory-powered queries across all of the managed instances. Here’s how I can find Windows Server 2012 R2 instances that are running a version of .NET older than 4.6:

AWS Config Integration
The results of the inventory can be routed to AWS Config, allowing you to track changes to the applications, AWS components, instance information, network configuration, and Windows Updates over time. To access this information, I click on Managed instance information above the Config timeline for the instance:

The three lines at the bottom lead to the inventory information. Here’s the network configuration:

Patch Management
This feature helps you to keep the operating system on your Windows instances up to date. Patches are applied during maintenance windows that you define, and are done with respect to a baseline. The baseline specifies rules for automatic approval of patches based on classification and severity, along with an explicit list of patches to approve or reject.

Here’s my baseline:

Each baseline can apply to one or more patch groups. Instances within a patch group have a Patch Group tag. I named my group Win2016:
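Since patch group membership is expressed as a tag, an instance can also be placed into the group from the CLI (the instance ID below is a placeholder):

```
$ aws ec2 create-tags --resources "i-023c301591e6651ea" \
    --tags Key="Patch Group",Value=Win2016
```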

Then I associated the value with the baseline:

The next step is to arrange to apply the patches during a maintenance window using the AWS-ApplyPatchBaseline document:

I can return to the list of Managed Instances and use a pair of filters to find out which instances are in need of patches:

Automation
Last but definitely not least, the Automation feature simplifies common AMI-building and updating tasks. For example, you can build a fresh Amazon Linux AMI each month using the AWS-UpdateLinuxAmi document:

Here’s what happens when this automation is run:

Available Now
All of the EC2 Systems Manager features and functions that I described above are available now and you can start using them today at no charge. You pay only for the resources that you manage.

Jeff;

 

AWS Cost Explorer Update – Reserved Instance Utilization Report

by Jeff Barr | in AWS Cost Explorer

Cost Explorer is a tool that helps you to manage your AWS spending using reporting and analytics tools (read The New Cost Explorer for AWS to learn more). You can sign up with a single click and then visualize your AWS costs, analyze trends, and look at spending patterns. You can look at your spending through a set of predefined views (by service, by linked account, daily, and so forth). You can drill in to specific areas of interest and you can also set up custom filters.

Enterprise-scale AWS customers invariably take advantage of the cost savings (up to 75% when compared to On-Demand) provided by Reserved Instances. These customers commonly have thousands of Reserved Instance (RI) subscriptions and want to make sure that they are making great use of them.

New Reserved Instance Utilization Report
Today we are adding a new Reserved Instance Utilization report to Cost Explorer. It gives you the power to track and manage aggregate RI utilization across your entire organization, even as your usage grows to thousands of subscriptions spread across linked accounts. You can look at aggregate usage or individual RI usage going back up to one year, and you can define an RI utilization threshold and monitor your actual usage against it. If you find an RI subscription that is tracking below your predefined utilization target, you can drill down and find the account owner, instance type, and unused hours. You also have access to all of the existing filtering functions provided by Cost Explorer.
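As a rough illustration of the underlying arithmetic (not Cost Explorer's actual implementation), utilization for a subscription is the fraction of purchased RI hours actually used, and the threshold check simply compares that fraction to the target:

```python
def ri_utilization(used_hours: float, reserved_hours: float) -> float:
    """Fraction of purchased RI hours actually consumed."""
    if reserved_hours <= 0:
        raise ValueError("reserved_hours must be positive")
    return used_hours / reserved_hours

def below_target(used_hours: float, reserved_hours: float,
                 target: float = 0.80) -> bool:
    """True if the subscription is tracking under the utilization target."""
    return ri_utilization(used_hours, reserved_hours) < target
```

For example, an RI that was used for 500 of 744 hours in a 31-day month is at roughly 67% utilization and would fall below an 80% target.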

I don’t happen to own thousands of Reserved Instances, so I’ll use some sample screen shots and test data to show you what the report looks like and how you can use it! The new report is available in daily and monthly flavors from the Cost Explorer menu (this menu includes three reports that I defined myself):

Here’s the Daily RI Utilization report:

The RIs are displayed in order of RI utilization; the least-used RIs are at the top of the list by default. I can see utilization over time and detailed information about an RI with a click:

I can filter by instance type or other attributes in order to focus on particular RIs:

I can also set the desired time range. Here’s how I can verify that I made good use of my d2.8xlarge RIs last month by filtering on an instance type and setting the time range:

I can set the Utilization Target to any desired percentage and it will be shown on the graph as a reference line. Here, I can see that utilization dipped below 80% in June:

From there I can switch to the daily view and zoom in (click and drag) to learn more:

As I mentioned earlier, you can also filter on linked accounts, regions, and other aspects of each RI.

Now Available
This report is available now and you can start using it today!

Jeff;

 

Amazon ECS – Support for Windows Containers (Beta)

by Jeff Barr | in EC2 Container Service

I sincerely hope that you are familiar with container-based computing and that you have at least a basic understanding of the value of containerization. As I noted back in 2014, packaging your cloud-based application as a collection of containers, each specified declaratively, gives you a laundry list of benefits including consistency between your development and production environments, a distributed application platform as an architectural base, development efficiency, and operational efficiency.

We launched Amazon EC2 Container Service in late 2014 with support for Linux containers. So far this year we have added support for Application Load Balancing, IAM Roles for ECS tasks, Service Auto Scaling, the Amazon Linux Container Image, and the Blox Open Source Scheduler.

Support for Windows Containers
Today we are continuing our string of ECS launches by adding beta-level support for Windows containers. You can now start to containerize and test your Windows applications while we finalize this feature ahead of production use.

To get started, you simply specify the Windows Server 2016 Base with Containers AMI when you create your cluster (you cannot mix Linux and Windows in the same cluster).

You can also use the Windows Containers AWS CloudFormation Template as a starting point. The template runs in a VPC and creates a Windows-powered cluster with a configurable number of container instances. It also creates IAM Roles, an Application Load Balancer, security groups, an Amazon ECS task definition, an Amazon ECS service, and an Auto Scaling policy. You can use the template as-is or you can modify it for your own use. I opened up the template in the CloudFormation Designer in order to see how all of the parts fit together. I rearranged the items in order to make the structure a bit more apparent; here’s what it looks like:

You can read Windows Containers (Beta) to learn more about support for Windows Containers.

Things to Know
The Windows Server Docker images are fairly large (approximately 9 GiB), so your Windows container instances will require more storage space than your Linux container instances; plan accordingly. Due to the size of the images, downloading and extracting the contents on initial use can take up to 15 minutes. If you use IAM Roles for tasks (read Windows IAM Roles for Tasks to learn more), this time can double. For a full list of other issues and caveats, read Windows Container Caveats.

Jeff;

 

Amazon EFS Update – On-Premises Access via Direct Connect

by Jeff Barr | in Amazon EC2, Amazon Elastic File System, AWS Direct Connect, AWS re:Invent

I introduced you to Amazon Elastic File System last year (Amazon Elastic File System – Shared File Storage for Amazon EC2) and announced production readiness earlier this year (Amazon Elastic File System – Production-Ready in Three Regions). Since then, thousands of AWS customers have used it to set up, scale, and operate shared file storage in the cloud.

Today we are making EFS even more useful with the introduction of simple and reliable on-premises access via AWS Direct Connect. This has been a much-requested feature and I know that it will be useful for migration, cloudbursting, and backup. To use this feature for migration, you simply attach an EFS file system to your on-premises servers, copy your data to it, and then process it in the cloud as desired, leaving your data in AWS for the long term.  For cloudbursting, you would copy on-premises data to an EFS file system, analyze it at high speed using a fleet of Amazon Elastic Compute Cloud (EC2) instances, and then copy the results back on-premises or visualize them in Amazon QuickSight.

You’ll get the same file system access semantics including strong consistency and file locking, whether you access your EFS file systems from your on-premises servers or from your EC2 instances (of course, you can do both concurrently). You will also be able to enjoy the same multi-AZ availability and durability that is part-and-parcel of EFS.

In order to take advantage of this new feature, you will need to use Direct Connect to set up a dedicated network connection between your on-premises data center and an Amazon Virtual Private Cloud. Then you need to make sure that your file systems have mount targets in subnets that are reachable via the Direct Connect connection:

You also need to add a rule to the mount target’s security group in order to allow inbound TCP and UDP traffic to port 2049 (NFS) from your on-premises servers:

After you create the file system, you can reference the mount targets by their IP addresses, NFS-mount them on-premises, and start copying files. The IP addresses are available from within the AWS Management Console:
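With the port 2049 rule in place, the on-premises mount is a standard NFSv4.1 mount against a mount target's IP address. The address and local mount point below are placeholders, and the mount options follow the usual EFS recommendations:

```
$ sudo mkdir -p /mnt/efs
$ sudo mount -t nfs4 \
    -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    10.0.1.32:/ /mnt/efs
```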

The Management Console also provides you with access to step-by-step directions! Simply click on the On-premises mount instructions:

And follow along:

This feature is available today at no extra charge in the US East (Northern Virginia), US West (Oregon), EU (Ireland), and US East (Ohio) Regions.

Jeff;

 

New – Amazon Cognito Groups and Fine-Grained Role-Based Access Control

by Tara Walker | in Amazon Cognito

One of the ongoing challenges in building applications is user authentication and management. Let’s face it: not many developers want to build yet another user identification and authentication system for their application, nor do they want to force users to create yet another account unless necessary. Amazon Cognito makes it simpler for developers to manage user identities, authentication, and permissions in order to access their applications’ data and back-end systems. Now, if only there were a service feature to make it even easier for developers to assign different permissions to different users of their applications.

Today we are excited to announce Cognito User Pools support for groups and Cognito Federated Identities support for fine-grained Role-Based Access Control (RBAC). With groups support in Cognito, developers can easily customize users’ app experience by creating groups that represent different user types and app usage permissions. Developers can add users to and remove users from groups and manage group permissions for sets of users.

Speaking of permissions, support for fine-grained Role-Based Access Control (RBAC) in Cognito Federated Identities allows developers to assign different IAM roles to different authenticated users. Previously, Amazon Cognito supported only one IAM role for all authenticated users. With fine-grained RBAC, a developer can map federated users to different IAM roles; this functionality is available both for user authentication via existing identity providers like Facebook or Active Directory and via Cognito User Pools.

Groups in Cognito User Pools

The best way to examine the new Cognito groups feature is to walk through creating a new group in the Amazon Cognito console and adding users to it.

Cognito - UserPoolCreateGroup

 

After selecting my user pool, TestAppPool, I see the updated menu item, Users and groups. Selecting it presents a panel with tabs for both Users and Groups. To create my new group, I select the Create group button.

A dialog box opens to allow for the creation of my group. Here I will create a group for admin users named AdminGroup. I fill in the name of the group, provide a description, and set the order of precedence; the group is then ready to be created. Note that a group’s numerical precedence determines which group’s permissions are prioritized, and therefore used, for users that have been assigned to multiple groups: the lower the numerical precedence, the higher the group’s priority. Since this is my AdminGroup, I will give it a precedence of zero (0). After I click the Create group button, I have successfully created my user pool group.

Cognito - CreateGroupDialog-s
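The precedence behavior described above can be sketched in a few lines (an illustration, not Cognito's implementation): among a user's groups, the one with the lowest precedence value supplies the effective permissions.

```python
def effective_group(groups):
    """Pick the group whose permissions apply to a multi-group user.

    `groups` maps group name -> numeric precedence; the lowest value wins.
    Returns None when the user belongs to no groups.
    """
    if not groups:
        return None
    return min(groups, key=groups.get)
```

A user in both AdminGroup (precedence 0) and a hypothetical Users group (precedence 10) would therefore receive AdminGroup's permissions.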

Now all that is left is to add my user(s) to the group. In my test app pool, I have two users, TestAdminUser and TestUnregisteredUser, as shown below. I will add TestAdminUser to my newly created group.

Cognito - Users

To add my user to the AdminGroup user pool group, I simply go into the Groups tab and select my AdminGroup. Once the AdminGroup details screen is shown, a click of the Add users button brings up a dialog box displaying the users within my user pool. Adding a user to the group is straightforward and only requires me to select the plus symbol next to the desired username. Once I receive confirmation that the user has been added to the group, the process is complete.

As you can see from the walkthrough, it is easy for a developer to create groups in user pools. Groups can be created and managed in a user pool from the AWS Management Console, the APIs, and the CLI. As a developer, you can create, read, update, delete, and list the groups for a user pool using AWS credentials. Each user pool can contain up to 25 groups. Additionally, you can add users to and remove users from groups within a user pool, and you can use groups to control permissions to your AWS resources by assigning an AWS IAM role to each group. You can also use Amazon Cognito combined with Amazon API Gateway to control permissions to your own back-end resources.

Cognito - AddUsersAdminGroup - small

 

Fine-grained Role-Based Access Control in Cognito Federated Identities

Let’s now dig into the Cognito Federated Identities feature, fine-grained Role-Based Access Control, which we will refer to going forward as RBAC. Before we dive into RBAC, let’s do a quick review of the features of Cognito Federated Identities. Cognito Identity assigns users a set of temporary, limited-privilege credentials to access AWS resources from your application without having to use AWS account credentials. The permissions for each user are controlled through AWS IAM roles that you create.

At this point, let’s journey into RBAC by doing another walkthrough in the management console. Once in the console, I select the Cognito service and then Federated Identities. I think it would be best to show Cognito user pools and Federated Identities in action while examining RBAC, so I am going to create a new identity pool that uses a Cognito user pool as its authentication provider. To create the pool, I first enter a name for my identity pool and select the Enable access to unauthenticated identities checkbox. Then, under Authentication providers, I select the Cognito tab so that I can enter my TestAppPool user pool ID and the app client ID. Please note that you must have created an app (app client) within your Cognito user pool in order to obtain the app client ID and to allow the app leveraging the Cognito identity pool to access the associated user pool.

Cognito - CreateIdentityPool

Now that we have created our identity pool, let’s assign role-based access for the Cognito user pool authentication method. The simplest way to assign different roles is to define rules in the Cognito identity pool. Each rule specifies a user attribute (or, as noted in the console, a claim): a value in the user’s token that is matched by the rule and associated with a specific IAM role.

In order to truly show the benefit of RBAC, I will need a role for our test app that gives users in the Engineering department access to put objects into S3 and to read from DynamoDB. To create this role, I first create a policy with PutObject access to S3 and GetItem, Query, Scan, and BatchGetItem access to DynamoDB; let’s call this policy TestAppEngineerPolicy. I then create an IAM role named EngineersRole that leverages this policy.
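A policy along those lines might look like the following sketch (the bucket name, table name, and account ID in the ARNs are hypothetical placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test-app-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:BatchGetItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/TestAppTable"
    }
  ]
}
```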

At this point we have a role with fine-grained access to AWS resources, so let’s return to our Cognito identity pool. I click Edit identity pool and expand the Authentication providers section. Since the authentication provider for our identity pool is a Cognito user pool, we select the Cognito tab. Because we are establishing fine-grained RBAC for the federated identity, I focus on the Authenticated role selection section of the authentication provider to define a rule. In this section, I click the drop-down and select the option Choose role with rules.

Cognito - AuthenticationProvider

We will now set up a rule with a claim (an attribute), a value to match, and the specific IAM role, EngineersRole. For our example, the rule I am creating assigns our IAM role, EngineersRole, to any user authenticated in our Cognito user pool whose department attribute is set to ‘Engineering’. Please note: the department attribute that the rule is based on is a custom attribute that I created in our user pool, TestAppPool, as shown in the graphic below.

Cognito - CustomAttributes - small

Now that we have that cleared up, let’s focus back on the creation of our rule. For the claim, I type the aforementioned custom attribute, department. The rule applies when the value of department equals the string “Engineering”, so in the Match Type field I select the Equals match type. Finally, I type the actual string value, “Engineering”, for the attribute value to be matched by the rule. If a user has a matching value for the department attribute, they can assume the EngineersRole IAM role when they get credentials. After completing this and clicking the Save Changes button, I have successfully created a rule that gives users who are authenticated with our Cognito user pool and are in the Engineering department different permissions than other authenticated users of the application.

Cognito - AuthenticationRoleSelection

Since we’ve completed our walkthrough of setting up a rule to assign different roles in a Cognito identity pool, let’s discuss some key points to remember about fine-grained RBAC. First, rules are defined in order, and the IAM role for the first matching rule is applied. Second, to set up RBAC you can either define rules or leverage the roles passed via the ID token assigned by the user pool. For each authentication provider configured in your identity pool, a maximum of 25 rules can be created. Additionally, user permissions are controlled via AWS IAM roles that you create.
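The rule-evaluation order can be sketched as follows (an illustration, not Cognito's implementation): rules are checked in their defined order, and the IAM role from the first rule whose claim value matches is applied.

```python
def resolve_role(claims, rules, default_role=None):
    """Return the IAM role for the first rule whose claim value matches.

    `claims` maps claim names to token values; `rules` is an ordered list
    of (claim_name, expected_value, role) tuples. Falls back to
    `default_role` when no rule matches.
    """
    for claim, expected, role in rules:
        if claims.get(claim) == expected:
            return role
    return default_role

# The custom attribute from the walkthrough surfaces as a "custom:" claim.
rules = [("custom:department", "Engineering", "EngineersRole")]
```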

Pricing and Availability

Developers can get started right away and take advantage of these exciting new features. Learn more about these new features and the other benefits of leveraging the Amazon Cognito service by visiting our developer resources page.

The great news is that there is no additional cost for using groups within a user pool. You pay only for Monthly Active Users (MAUs) after the free tier. Also remember that using the Cognito Federated Identities feature for controlling user permissions and generating unique identifiers is always free with Amazon Cognito. See the Amazon Cognito pricing page for more information.

Tara

New – AWS Application Discovery Service Console

by Jeff Barr | in AWS Application Discovery Service, AWS Management Console

AWS Application Discovery Service helps you to plan your migration to the cloud. As a central component of the AWS Cloud Adoption Framework, it automates the process of discovering and collecting important information about your systems (read New – AWS Application Discovery Service – Plan Your Cloud Migration to learn more).

There are two different data collection options. You can install a lightweight agent on your physical servers or VMs, or you can run the Agentless Discovery Connector in your VMware environment. Either way, AWS Application Discovery Service collects the following information:

  • Installed applications and packages.
  • Running applications and processes.
  • TCP v4 and v6 connections.
  • Kernel brand and version.
  • Kernel configuration.
  • Kernel modules.
  • CPU and memory usage.
  • Process creation and termination events.
  • Disk and network events.
  • NIC information.
  • Use of DNS, DHCP, and Active Directory.

The lightweight agent also collects information about TCP listening ports and associated processes; we will add this feature to the Agentless Discovery Connector sometime soon.

The information is collected, stored locally for optional review, and then uploaded to the cloud across a secure connection on port 443. It is processed and correlated, and then stored in a repository in encrypted form. You can then use the information to help you to choose the applications that you would like to migrate.

New Application Discovery Service Console
When I first wrote about this service, the processed, correlated information was available in XML and CSV formats for use with analysis and migration tools. Today we are launching a new Application Discovery Service Console that is designed to simplify the entire cloud migration process. It helps you to install the agent, discover the applications, map application dependencies, and measure application performance.

Let’s take a tour! The landing page gives you an overview of the service, with a listing of the benefits and features:

Then you choose your data collection option (agent on the servers or VMs, or agentless in your VMware environment). You can click on Learn more for detailed setup instructions.

With the agents and connectors (you can use both together) set up and ready to go, you can start discovery from selected agents/connectors by clicking on Start data collection:

You can see the servers as they are discovered:

You can select one or more servers and group them into a named application, again with a couple of clicks:

You can add one or more tags to each server:

You can see all of the detailed information for each server including network connections, processes, and processes that are producing or consuming network traffic:

And:

You can see a list of the applications (each one running on one or more servers):

You can also learn more about each application:

With this information at hand, you will be ready to plan and execute your migration to the AWS Cloud! To learn more, read the Application Discovery Service User Guide.

Jeff;

PS – Our Application Discovery Service Partners would love to help you with your cloud migration.

Welcome to the Newest AWS Heroes (Winter 2016)

by Ana Visneski | in AWS Community Heroes

AWS Community Heroes are members of the AWS community who share their knowledge and demonstrate outstanding enthusiasm for AWS. They do this in a variety of ways, including user groups, social media, meetups, and workshops. Today we extend a happy holiday welcome to the last of the 2016 cohort of AWS Heroes:

In November, all of the AWS Community Heroes were invited to re:Invent and joined us for a private Heroes event on Monday evening. The final two Heroes of the 2016 cohort were surprised with an invitation on the Monday morning of re:Invent week to join the Hero community. Both were able to join us at the event on short notice and meet the other Heroes.

 

Ayumi Tada

Ayumi Tada works at Honda Motor Co. in Japan as an IT infrastructure strategist, promoting the utilization of cloud computing technologies. She also promotes cloud utilization in the CAE/HPC area at JAMA (Japan Automobile Manufacturers Association).

Previously, she worked at Honda R&D as an IT System Administrator, focused on using cloud for High Performance Computing (HPC), including an engineering simulation system (Computer Aided Engineering / CAE), and introduced the use case of HPC on AWS at re:Invent 2014. Currently, she is promoting cloud utilization in a wide range of Enterprise applications.

Ayumi is a member of JAWS-UG (Japan AWS User Group). JAWS-UG was started in 2010, and has 50+ branches, 100+ leaders, 300+ meetup events per year, and 4000+ members. She is one of the launch leads of new JAWS branches for HPC specialists and for beginners. She is also one of the organizers of the JAWS for women branch and participates in other local branches, including Kumamoto & JAWS for Enterprises (E-JAWS) meetup events.

Ayumi holds the AWS Certified Solutions Architect – Associate certification, is a Career Development Adviser through the National Career Development Centers’ international partner organization, and has a BS in Electrical & Electronic Engineering and Information Engineering from Waseda University.

Shimon Tolts

Shimon Tolts has been fascinated by computers since he was eight. When he got his first PC, he immediately started tearing it apart to understand how the different parts were connected. Later, Linux and open source software had a strong influence on him, and Shimon started his first company at the age of 15, providing web hosting on top of Linux servers in the pre-cloud era.

During his military service, Shimon served as a Computer Crimes Investigator & Forensics Analyst at the Center Unit for Special Investigations, experience that helped him land a role at Intel Security after his service.

In 2013, Shimon joined ironSource to establish the R&D infrastructure division. One of the division’s most innovative solutions was a Big Data pipeline used to stream hundreds of billions of monthly events from different ironSource divisions into Redshift in near real time. After the tech community expressed interest in the solution, it was released publicly as ATOM DATA.

Shimon leads the Israeli AWS user group and is a regular speaker at Big Data conferences, from AWS Summits to Pop-up Lofts.

 

-Ana

AWS Webinars – January 2017 (Bonus: December Recap)

by Jeff Barr | on | in Webinars | | Comments

Have you had time to digest all of the announcements that we made at AWS re:Invent? Are you ready to debug with AWS X-Ray, analyze with Amazon QuickSight, or build conversational interfaces using Amazon Lex? Do you want to learn more about AWS Lambda, set up CI/CD with AWS CodeBuild, or use Polly to give your applications a voice?

January Webinars
In our continued quest to provide you with training and education resources, I am pleased to share the webinars that we have set up for January. These are free, but they do fill up and you should definitely register ahead of time. All times are PT and each webinar runs for one hour:

January 16:

January 17:

January 18:

January 19:

January 20:

December Webinar Recap
The December webinar series is already complete; here’s a quick recap with links to the recordings:

December 12:

December 13:

December 14:

December 15:

December 16:

Jeff;

PS – If you want to get a jump start on your 2017 learning objectives, the re:Invent 2016 Presentations and re:Invent 2016 Videos are just a click or two away.

Expanding the AWS Blog Team – Welcome Ana, Tina, Tara, and the Localization Team

by Jeff Barr | on | in Announcements | | Comments

I wrote my first post for this blog back in 2004, and have published over 2,700 more since then, including 52 last month! Given the ever-increasing pace of AWS innovation, and the amount of cool stuff that we have to share with you, we are expanding our blogging team. Please give a warm welcome to Ana, Tina, and Tara:

Ana Visneski (@acvisneski) was the first official blogger for the United States Coast Guard. While there she focused on search & rescue coordination and also led the team that established a social media presence. Ana is a graduate of the University of Washington Communications Leadership Program, and was the first to complete both the Master of Communication in Digital Media (MCDM) and Master of Communication in Communities and Networks (MCCN) degree programs. Ana works with our guest posters, tracks our metrics, and manages the ticketing system that we use to coordinate our activities.

Tina Barr (@tinathebarr) is a Recruiting Coordinator for the AWS Commercial Sales organization. In order to provide a great first impression for her candidates, she began to read the AWS Customer Success Stories and developed a special interest in startups. In addition to her recruiting duties, Tina writes the AWS Hot Startups (September, October, November) posts each month. Tina earned a Bachelor’s degree in Community Health from Western Washington University and has always enjoyed writing.

Tara Walker (@taraw) is an AWS Technical Evangelist. Her background includes time as a developer and software engineer at multiple high-tech and media companies. With a focus on IoT, mobile, gaming, serverless architectures, and cross-platform development, Tara loves to dive deep into the latest and greatest technical topics and build compelling demos. Tara has a Bachelor’s degree from Georgia State University and is currently working on a Master’s degree in Computer Science from Georgia Institute of Technology. Like me, Tara will focus on writing posts for upcoming AWS launches.

I am thrilled to be working with these three talented and creative new members of our blogging team, and am looking forward to seeing what they come up with.

AWS Blog Localization Team
Many of the posts on this blog have been translated into Japanese and Korean for AWS customers who are most comfortable in those languages. A big thank-you is due to those who are doing this work:

  • Japanese – Tatsuji Ishibashi (Product Marketing Manager for AWS Japan) manages and reviews the translation work that is done by a team of Solution Architects in Japan and the AWS Localization Team, and publishes the content on the AWS Japan Blog.
  • Korean – Channy Yun (Technical Evangelist for AWS Korea) translates posts for the AWS Korea Blog.
  • Mandarin Chinese – Our colleagues in China are translating important announcements for the AWS China Blog.

Jeff;