AWS Blog
New – Cross-Account Snapshot Sharing for Amazon Aurora
Amazon Aurora is a high-performance, MySQL-compatible database engine. Aurora combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases (see my post, Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS, to learn more). Aurora shares some important attributes with the other database engines that are available for Amazon RDS, including easy administration, push-button scalability, speed, security, and cost-effectiveness.
You can create a snapshot backup of an Aurora cluster with just a couple of clicks. After you have created a snapshot, you can use it to restore your database, once again with a couple of clicks.
Share Snapshots
Today we are giving you the ability to share your Aurora snapshots. You can share them with other AWS accounts and you can also make them public. These snapshots can be used to restore the database to an Aurora instance running in a separate AWS account in the same Region as the snapshot.
There are several primary use cases for snapshot sharing:
Separation of Environments – Many AWS customers use separate AWS accounts for their development, test, staging, and production environments. You can share snapshots between these accounts as needed. For example, you can generate the initial database in your staging environment, snapshot it, share the snapshot with your production account, and then use it to create your production database. Or, should you encounter an issue with your production code or queries, you can create a snapshot of your production database and then share it with your test account for debugging and remediation.
Partnering – You can share database snapshots with selected partners on an as-needed basis.
Data Dissemination – If you are running a research project, you can generate snapshots and then share them publicly. Interested parties can then create their own Aurora databases from the snapshots, with your work and your data as a starting point.
To share a snapshot, simply select it in the RDS Console and click on Share Snapshot. Then enter the target AWS account (or click on Public to share the snapshot publicly) and click on Add:

You can share manually generated, unencrypted snapshots with other AWS accounts or publicly. You cannot share automatic snapshots or encrypted snapshots.
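If you prefer to automate the process, the same sharing operation is available through the RDS API. Here is a minimal boto3 sketch; the snapshot identifier, Region, and target account ID are placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # Region is an assumption

# Share a manual, unencrypted Aurora cluster snapshot with another account.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="my-aurora-snapshot",  # placeholder snapshot name
    AttributeName="restore",
    ValuesToAdd=["123456789012"],   # target AWS account ID
    # ValuesToAdd=["all"],          # or make the snapshot public instead
)

# The other account can then restore a new cluster from the shared snapshot
# with restore_db_cluster_from_snapshot(...).
```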
The shared snapshot becomes visible in the other account right away:

Public snapshots are also visible (select All Public Snapshots as the Filter):

Available Now
This feature is available now and you can start using it today.
X1 Instances for EC2 – Ready for Your Memory-Intensive Workloads
Many AWS customers are running memory-intensive big data, caching, and analytics workloads and have been asking us for EC2 instances with ever-increasing amounts of memory.
Last fall, I first told you about our plans for the new X1 instance type. Today, we are announcing availability of this instance type with the launch of the x1.32xlarge instance size. This instance has the following specifications:
- Processor: 4 x Intel™ Xeon E7 8880 v3 (Haswell) running at 2.3 GHz – 64 cores / 128 vCPUs.
- Memory: 1,952 GiB with Single Device Data Correction (SDDC+1).
- Instance Storage: 2 x 1,920 GB SSD.
- Network Bandwidth: 10 Gbps.
- Dedicated EBS Bandwidth: 10 Gbps (EBS Optimized by default at no additional cost).
The Xeon E7 processor supports Turbo Boost 2.0 (up to 3.1 GHz), AVX 2.0, AES-NI, and the very interesting (to me, anyway) TSX-NI instructions. AVX 2.0 (Advanced Vector Extensions) can improve performance on HPC, database, and video processing workloads; AES-NI improves the speed of applications that make use of AES encryption. The new TSX-NI instructions support something cool called transactional memory. The instructions allow highly concurrent, multithreaded applications to make very efficient use of shared memory by reducing the amount of low-level locking and unlocking that would otherwise be needed around each memory access.
If you are ready to start using the X1 instances in the US East (Northern Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), or Asia Pacific (Sydney) Regions, please request access and we’ll get you going as soon as possible. We have plans to make the X1 instances available in other Regions and in other sizes before too long.
3-year Partial Upfront Reserved Instance Pricing starts at $3.970 per hour in the US East (Northern Virginia) Region; see the EC2 Pricing page for more information. You can purchase Reserved Instances and Dedicated Host Reservations today; Spot bidding is on the near-term roadmap.
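Once your account has been granted access, you launch an X1 instance the same way that you would launch any other instance type. Here is a minimal boto3 sketch; the AMI, key pair, and subnet identifiers are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # any Region where X1 is available

# Launch a single x1.32xlarge (IDs below are placeholders).
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",
    InstanceType="x1.32xlarge",
    KeyName="my-key-pair",
    SubnetId="subnet-xxxxxxxx",
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,  # X1 instances are EBS Optimized by default at no extra cost
)
print(response["Instances"][0]["InstanceId"])
```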
Here are some screen shots of an x1.32xlarge in action. lscpu shows that there are 128 vCPUs spread across 4 sockets:

On bootup, the kernel reports on the total accessible memory:

The top command shows a huge number of running processes and lots of memory:

Ready for Enterprise-Scale SAP Workloads
The X1 instances have been certified by SAP for production workloads. They meet the performance bar for SAP OLAP and OLTP workloads backed by SAP HANA.
You can migrate your on-premises deployments to AWS and you can also start fresh. Either way, you can run S/4HANA, SAP’s next-generation Business Suite, as well as earlier versions.
Many AWS customers are currently running HANA in scale-out fashion across multiple R3 instances. Many of these workloads can now be run on a single X1 instance. This configuration will be simpler to set up and less expensive to run. As I mention below, our updated SAP HANA Quick Start will provide you with more information on your configuration options.
Here’s what SAP HANA Studio looks like when run on an X1 instance:

You have several interesting options when it comes to disaster recovery (DR) and high availability (HA) when you run your SAP HANA workloads on an X1 instance. For example:
- Auto Recovery – Depending on your RPO (Recovery Point Objective) and RTO (Recovery Time Objective), you may be able to use a single instance in concert with EC2 Auto Recovery.
- Hot Standby – You can run X1 instances in 2 Availability Zones and use HANA System Replication to keep the spare instance in sync.
- Warm Standby / Manual Failover – You can run a primary X1 instance and a smaller secondary instance configured to persist only to permanent storage. In the event that a failover is necessary, you stop the secondary instance, modify the instance type to X1, and reboot. This unique, AWS-powered option will give you quick recovery while keeping costs low.
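To make the warm standby option a bit more concrete, here is a hedged boto3 sketch of the failover step described above; the instance ID is a placeholder, and a real runbook would also repoint your HANA clients and verify that the data volumes are intact:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
standby_id = "i-0123456789abcdef0"  # placeholder ID of the smaller secondary instance

# Stop the secondary instance, resize it to x1.32xlarge, and start it back up.
ec2.stop_instances(InstanceIds=[standby_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[standby_id])

ec2.modify_instance_attribute(
    InstanceId=standby_id,
    InstanceType={"Value": "x1.32xlarge"},
)

ec2.start_instances(InstanceIds=[standby_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[standby_id])
```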
We have updated our HANA Quick Start as part of today’s launch. You can get SAP HANA running in a new or existing VPC within an hour using a well-tested configuration:
The Quick Start will help you to configure the instance and the associated storage, install the requisite operating system packages, and install SAP HANA.
We have also released an SAP HANA Migration Guide. It will help you to migrate your existing on-premises or AWS-based SAP HANA workloads to AWS.
— Jeff;
I Love My Amazon WorkSpace!
Early last year my colleague Steve Mueller stopped by my office to tell me about an internal pilot program that he thought would be of interest to me. He explained that they were getting ready to run Amazon WorkSpaces on the Amazon network and offered to get me on the waiting list. Of course, being someone that likes to live on the bleeding edge, I accepted his offer.
Getting Started
Shortly thereafter I started to run the WorkSpaces client on my office desktop, a fairly well-equipped PC with two screens and plenty of memory. At that time I used the desktop during the working day and a separate laptop when I was traveling or working from home. Even though I used Amazon WorkDocs to share my files between the two environments, switching between them caused some friction. I had distinct sets of browser tabs, bookmarks, and the like. No matter how much I tried, I could never manage to keep the configurations of my productivity apps in sync across the environments.
After using the WorkSpace at the office for a couple of weeks, I realized that it was just as fast and responsive as my desktop. Over that time, I made the WorkSpace into my principal working environment and slowly severed my ties to my once trusty desktop.
I work from home two or three days per week. My home desktop has two large screens, lots of memory, a top-notch mechanical keyboard, and runs Ubuntu Linux. I run VirtualBox and Windows 7 on top of Linux. In other words, I have a fast, pixel-rich environment.
Once I was comfortable with my office WorkSpace, I installed the client at home and started using it there. This was a giant leap forward and a great light bulb moment for me. I was now able to use my fast, pixel-rich home environment to access my working environment.
At this point you are probably thinking that the combination of client virtualization and server virtualization must be slow, laggy, or less responsive than a local device. That’s just not true! I am an incredibly demanding user. I pound on the keyboard at a rapid-fire clip, I keep tons of windows open, alt-tab between them like a ferret, and I am absolutely intolerant of systems that get in my way. My WorkSpace is fast and responsive and makes me even more productive.
Move to Zero Client
A few months into my WorkSpaces journey, Steve IM’ed me to talk about his plan to make some Zero Client devices available to members of the pilot program. I liked what he told me and I agreed to participate. He and his sidekick Michael Garza set me up with a Dell Zero Client and two shiny new monitors that had been taking up space under Steve’s desk. At this point my office desktop had no further value to me. I unplugged it, saluted it for its meritorious service, and carried it over to the hardware return shelf in our copy room. I was now all-in on, and totally dependent on, my WorkSpace and my Zero Client.
The Zero Client is a small, quiet device. It has no fans and no internal storage. It simply connects to the local peripherals (displays, keyboard, mouse, speakers, and audio headset) and to the network. It produces little heat and draws far less power than a full desktop.
During this time I was also doing quite a bit of domestic and international travel. I began to log in to my WorkSpace from the road. Once I did this, I realized that I now had something really cool—a single, unified working environment that spanned my office, my home, and my laptop. I had one set of files and one set of apps and I could get to them from any of my devices. I now have a portable desktop that I can get to from just about anywhere.
The fact that I was using a remote WorkSpace instead of local compute power faded into the background pretty quickly. One morning I sent the team an email with the provocative title “My WorkSpace has Disappeared!” They read it in a panic, only to realize that I had punked them, and that I was simply letting them know that I was able to focus on my work, and not on my WorkSpace. I did report a few bugs to them, none of which were serious, and all of which were addressed really quickly.
Dead Laptop
The reality of my transition became apparent late last year when the hard drive in my laptop failed one morning. I took it in to our IT helpdesk and they replaced the drive. Then I went back up to my office, reinstalled the WorkSpaces client, and kept on going. I installed no other apps and didn’t copy any files. At this point the only personal items on my laptop are the registration code for the WorkSpace and my stickers! I do still run PowerPoint locally, since you can never know what kind of connectivity will be available at a conference or a corporate presentation.
I also began to notice something else that made WorkSpaces different and better. Because laptops are portable and fragile, we all tend to think of the information stored on them as transient. In the dark recesses of our minds we know that one day something bad will happen and we will lose the laptop and its contents. Moving to WorkSpaces takes this worry away. I know that my files are stored in the cloud and that losing my laptop would be essentially inconsequential.
It Just Works
To borrow a phrase from my colleague James Hamilton, WorkSpaces just works. It looks, feels, and behaves just like a local desktop would.
Like I said before, I am a demanding user. I have two big monitors, run lots of productivity apps, and keep far too many browser windows and tabs open. I also do things that have not been a great fit for virtual desktops up until now. For example:
Image Editing – I capture and edit all of the screen shots for this blog (thank you, Snagit).
Audio Editing – I use Audacity to edit the AWS Podcasts. This year I plan to use the new audio-in support to record podcasts on my WorkSpace.
Music – I installed the Amazon Music player and listen to my favorite tunes while blogging.
Video – I watch internal and external videos.
Printing – I always have access to the printers on our corporate network. When I am at home, I also have access to the laser and ink jet printers on my home network.
Because the WorkSpace is running on Amazon’s network, I can download large files without regard to local speed limitations or bandwidth caps. Here’s a representative speed test (via Bandwidth Place):

Sense of Permanence
We transitioned from our pilot WorkSpaces to our production environment late last year and are now provisioning WorkSpaces for many members of the AWS team. My WorkSpace is now my portable desktop.
After having used WorkSpaces for well over a year, I have to report that the biggest difference between it and a local environment isn’t technical. Instead, it simply feels different (and better). There’s a strong sense of permanence—my WorkSpace is my environment, regardless of where I happen to be. When I log in, my environment is always as I left it. I don’t have to wait for email to sync or patches to install, as I did when I would open up my laptop after it had been off for a week or two.
Now With Tagging
As enterprises continue to evaluate, adopt, and deploy WorkSpaces in large numbers, they have asked us for the ability to track usage for cost allocation purposes. In many cases they would like to see which WorkSpaces are being used by each department and/or project. Today we are launching support for tagging of WorkSpaces. The WorkSpaces administrator can now assign up to 10 tags (key/value pairs) to each WorkSpace using the AWS Management Console, AWS Command Line Interface (CLI), or the WorkSpaces API. Once tagged, the costs are visible in the AWS Cost Allocation Report where they can be sliced and diced as needed for reporting purposes.
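For example, here is a minimal boto3 sketch of tagging a WorkSpace from code; the WorkSpace ID, Region, and tag values are placeholders:

```python
import boto3

workspaces = boto3.client("workspaces", region_name="us-west-2")  # any supported Region
workspace_id = "ws-xxxxxxxxx"  # placeholder WorkSpace ID

# Assign cost-allocation tags (up to 10 key/value pairs per WorkSpace).
workspaces.create_tags(
    ResourceId=workspace_id,
    Tags=[
        {"Key": "Department", "Value": "Marketing"},
        {"Key": "Project", "Value": "Blog"},
    ],
)

# List the tags that are currently attached to the WorkSpace.
print(workspaces.describe_tags(ResourceId=workspace_id)["TagList"])
```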
Here’s how the WorkSpaces administrator can use the Console to manage the tags for a WorkSpace:

Tags are available today in all Regions where WorkSpaces is available: US East (Northern Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney).
Learning More
If you have found my journey compelling and would like to learn more, here are some resources to get you started:
- Amazon WorkSpaces home page – Complete technical and pricing information.
- Amazon WorkSpaces Testimonials – Learn how Yamaha, Endemol Shine Nederland, and the Louisiana Department of Corrections are using WorkSpaces.
- Deploying Amazon WorkSpaces at Scale with Johnson & Johnson – A detailed presentation from one of our enterprise customers.
- How Amazon.com is Moving to Amazon WorkSpaces – A detailed recounting of our journey, including a lot of information on how we decided on our networking configuration. I have a brief appearance at the end as the “customer.”
- AHEAD Desktop As a Service – Learn how AWS Partner AHEAD is delivering Desktops as a Service (DaaS) using WorkSpaces. You can also register and watch their recorded webinar.
Request a Demo
If you and your organization could benefit from Amazon WorkSpaces and would like to learn more, please get in touch with our team at [email protected].
Welcome to the Newest AWS Community Heroes (Spring 2016)
I would like to extend a warm welcome to the newest AWS Community Heroes:
- Ryan Kroonenburg
- Aleksandar Nenov
- Markus Ostertag
- Cliff Lu
The Heroes share their knowledge and demonstrate their enthusiasm for AWS via social media, blog posts, user groups, and workshops. Let’s take a look at their bios to learn more.
Ryan Kroonenburg
Ryan is a UK-based Solutions Architect and the founder of A Cloud Guru, a community which is dedicated to teaching all aspects of the AWS platform. Together with his brother Sam, Ryan has taught AWS to over 50,000 students. They also designed one of the first Serverless Learning Management Systems and help to organize serverless conferences and meetups all over the world. Ryan holds a Bachelor’s degree in Accounting & Finance from Australia’s Curtin University. He has also earned multiple IT certifications (ITILv3, MCITP, and MSSQL DBA to name a few) and four AWS certificates.
You can connect with Ryan on LinkedIn or follow him on Twitter.
Aleksandar Nenov
Aleksandar is a senior IT professional. He focuses on cloud operations and managed AWS services. He has a deep understanding of AWS from the business, technical, and service management perspectives. He has been using AWS since 2009, when he began to help dozens of small and large businesses move their operations to the AWS cloud. Aleksandar is the CEO and founder of CLOUDWEBOPS, the first APN Consulting Partner in Serbia and Southeast Europe. He created AWS User Group Serbia and has grown it to over 300 members, and has inspired AWS enthusiasts in Bulgaria and in Bosnia and Herzegovina to create similar groups of their own.
His proudest achievement to date is his direct involvement in AWSome Day SEE, a free introductory event hosted by AWS experts. The event drew over 250 participants from Serbia, along with guests from Croatia, Slovenia, Bulgaria, Macedonia, and Bosnia and Herzegovina.
Connect with Aleksandar on LinkedIn to learn more.
Markus Ostertag
As Head of Development at Team Internet AG in Munich, Markus explores new ways to take advantage of highly scalable platforms to make ad-tech, real-time bidding, and online marketing more efficient. He leverages the cloud to solve scale and performance problems, and enjoys working with cutting-edge technologies. While working on his degree in Computer Science at the Technical University of Munich, Markus ran a large German movie review site and enjoyed his first contact with AWS, all the way back in 2008. Since then, he has focused on sharing his knowledge with other companies and users.
Markus co-founded and still runs the AWS User Group in Munich (over 800 members and growing). He speaks frequently on tech and cloud topics at meetups, universities, and conferences, including AWS re:Invent and the 2015 AWS Berlin Summit.
You can connect with Markus on LinkedIn or follow him on Twitter.
Cliff Lu
Cliff is a senior architect with 104 Corp. His work there drives business agility, scalability, and cost-effectiveness. He’s been a Solutions Architect and a DevOps Evangelist at several enterprises and startups in Taiwan. He specializes in service migration, and has built tools and designed patterns that facilitate cloud adoption. Cliff has earned several AWS certifications and presented the 2015 Taiwan Recap at re:Invent 2015.
Cliff has served as the organizer of AWS User Group Taiwan since 2014. The group currently boasts over 4,500 members and has been meeting regularly since 2012.
You can read Cliff’s blog or connect with him on LinkedIn.
Welcome Aboard
Please join me in welcoming our newest AWS Community Heroes!
EC2 Run Command Update – Manage & Share Commands and More
The EC2 Run Command allows you to manage your EC2 instances in a convenient, scalable fashion (see my blog post, New EC2 Run Command – Remote Instance Management at Scale, for more information).
Today we are making several enhancements to this feature:
Document Management and Sharing – You can now create custom command documents and share them with other AWS accounts or publicly with all AWS users.
Additional Predefined Commands – You can use some new predefined commands to simplify your administration of Windows instances.
Open Sourced Agent – The Linux version of the on-instance agent is now available in open source form on GitHub.
Document Management and Sharing
You can now manage and share the command documents that you execute via Run Command. This will allow you to add additional rigor to your administrative procedures by reducing variability and removing a source of errors. You can also take advantage of command documents that are created and shared by other AWS users.
This feature was designed to support several scenarios that our customers have shared with us. Some customers wanted to create documents in one account and then share them with other accounts that are part of the same organization. Others wanted to package up common tasks and share them with the broader community. AWS partners wanted to share documents that encapsulate common setup and administrative tasks specific to their offerings.
Here’s how you can see your documents, public documents, and documents that have been shared with you:

You can click on a document to learn more about what it does:

And to find out what parameters it accepts:

You can also examine the document before you run it (this is a highly recommended best practice, especially for documents that have been shared with you):

You can create a new command (I used a simplified version of the built-in AWS-RunShellScript command):

Finally, you can share a document that you have uploaded and tested. You can share it publicly or with specific AWS accounts:

Read about Creating Your Own Command to learn more about this feature.
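If you would rather script these steps, here is a hedged boto3 sketch that creates a pared-down shell-script document (loosely modeled on AWS-RunShellScript), shares it, and runs it; the document name, account ID, and instance ID are placeholders:

```python
import boto3, json

ssm = boto3.client("ssm", region_name="us-east-1")

# A simplified shell-script document, loosely modeled on AWS-RunShellScript.
doc = {
    "schemaVersion": "1.2",
    "description": "Run a shell command on a Linux instance",
    "parameters": {
        "commands": {"type": "StringList", "description": "Command(s) to run"}
    },
    "runtimeConfig": {
        "aws:runShellScript": {
            "properties": [{"id": "0.aws:runShellScript", "runCommand": "{{ commands }}"}]
        }
    },
}

# Upload the document, then share it with a specific account.
ssm.create_document(Name="My-RunShellScript", Content=json.dumps(doc))
ssm.modify_document_permission(
    Name="My-RunShellScript",
    PermissionType="Share",
    AccountIdsToAdd=["123456789012"],  # or ["All"] to share the document publicly
)

# Execute the document against an instance.
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="My-RunShellScript",
    Parameters={"commands": ["uptime"]},
)
```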
Additional Predefined Commands
Many AWS customers use Run Command to maintain and administer EC2 instances that are running Microsoft Windows. We have added four new commands designed to simplify and streamline some common operations:
AWS-ListWindowsInventory – Collect on-instance inventory information (operating system, installed applications, and installed updates). Results can be directed to an S3 bucket.
AWS-FindWindowsUpdates – List missing Windows updates.
AWS-InstallMissingWindowsUpdates – Install missing Windows updates.
AWS-InstallSpecificWindowsUpdates – Install a specific set of Windows updates, identified by Knowledge Base (KB) IDs.
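The predefined commands are invoked just like any other document. Here is a small sketch that inspects the parameters of AWS-FindWindowsUpdates and then runs it against a Windows instance; the instance ID is a placeholder and the parameter shown is illustrative:

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Inspect the document first to see which parameters it accepts.
doc = ssm.describe_document(Name="AWS-FindWindowsUpdates")
for param in doc["Document"].get("Parameters", []):
    print(param["Name"], "-", param.get("Description", ""))

# Run it against one or more Windows instances (the parameter name and value
# shown here are illustrative; use the list printed above).
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-FindWindowsUpdates",
    Parameters={"UpdateLevel": ["Important"]},
)
```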
Open Sourced Agent
The Linux version of the on-instance Simple Systems Manager (SSM) agent is now available on GitHub at https://github.com/aws/amazon-ssm-agent.
You are welcome to submit pull requests for this code (see CONTRIBUTING.md for more info).
Available Now
The features described above are available now and you can start using them today in the US East (Northern Virginia), US West (Oregon), Europe (Ireland), US West (Northern California), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), and South America (Brazil) Regions.
To learn more, read Managing Amazon EC2 Instances Remotely (Windows) and Managing Amazon EC2 Instances Remotely (Linux).
— Jeff;
New – AWS Application Discovery Service – Plan Your Cloud Migration
Back in the mid-1980’s, I was working on a system that was deployed on Wall Street. Due to a multitude of project constraints, I had to do most of my debugging on-site, spending countless hours in a data center high above Manhattan. The data center occupied an entire floor of the high-rise.
Close to the end of my time there, I was treated to an informal tour of the floor. Due to incremental procurement of hardware and software over several decades, the floor was almost as interesting as Seattle’s Living Computer Museum. Virtually every known brand and model of hardware was present, all wired together in an incomprehensibly complex whole, held together by tribal knowledge and a deeply held fear of updates and changes.
Today, many AWS customers are taking a long, hard look at legacy environments such as the one I described above and are putting plans in place to migrate large parts of it to the AWS Cloud!
Application Discovery Service
The new AWS Application Discovery Service (first announced at the AWS Summit in Chicago) is designed to help you to dig in to your existing environments, identify what’s going on, and provide you with the information and visibility that you need to have in order to successfully migrate existing applications to the cloud.
This service is an important part of the AWS Cloud Adoption Framework. The framework helps our customers to plan for their journey. Among other things, it outlines a series of migration steps:
- Evaluate current IT estate.
- Discover and plan.
- Build.
- Run.
The Application Discovery Service focuses on step 2 of the journey by automating a process that would be slow, tedious, and complex if done manually.
The Discovery Agent
To get started, you simply install the small, lightweight agent on your source hosts. The agent unobtrusively collects the following system information:
- Installed applications and packages.
- Running applications and processes.
- TCP v4 and v6 connections.
- Kernel brand and version.
- Kernel configuration.
- Kernel modules.
- CPU and memory usage.
- Process creation and termination events.
- Disk and network events.
- TCP and UDP listening ports and the associated processes.
- NIC information.
- Use of DNS, DHCP, and Active Directory.
The agent can be run either offline or online. When run offline, it collects the information listed above and stores it locally so that you can review it. When run online, it uploads the information to the Application Discovery Service across a secure connection on port 443. The information is processed and correlated, then stored in a repository for access via a new set of CLI commands and API functions. The repository stores all of the discovered, correlated information in a secure form.
The agent can be run on Ubuntu 14, Red Hat 6-7, CentOS 6-7, and Windows (Server 2008 R2, Server 2012, Server 2012 R2). We plan to add additional options over time so be sure to let us know what you need.
Application Discovery Service CLI
The Application Discovery Service includes a CLI that you can use to query the information collected by the agents. Here’s a sample:
describe-agents – List the set of running agents.
start-data-collection – Initiate the data collection process.
list-servers – List the set of discovered hosts.
list-connections – List the network connections made by a discovered host. This command (and several others that I did not list) gives you the power to identify and map out application dependencies.
Application Discovery Service APIs
The uploaded information can be accessed and annotated using some new API functions:
ListConfigurations – Search the set of discovered hosts for servers, processes, or connections.
DescribeConfigurations – Retrieve detailed information about a discovered host.
CreateTags – Add tags to a discovered host for classification purposes.
DeleteTags – Remove tags from a discovered host.
ExportConfigurations – Export the discovered information in CSV form for offline processing and visualization using analysis and migration tools from our Application Discovery Service Partners.
The application inventory and the network dependencies will help you to choose the applications that you would like to migrate, while also helping you to determine the appropriate priority for each one.
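To give you a feel for how these calls fit together, here is a hedged boto3 sketch that assumes the discovery client, placeholder agent IDs, and illustrative field names (the exact attributes returned for each configuration item may vary):

```python
import boto3

discovery = boto3.client("discovery", region_name="us-west-2")  # Region is an assumption

# Kick off data collection on a set of installed agents (agent IDs are placeholders).
discovery.start_data_collection_by_agent_ids(agentIds=["agent-id-1", "agent-id-2"])

# Search the discovered configuration items for servers.
servers = discovery.list_configurations(configurationType="SERVER")["configurations"]

# Tag a discovered server for classification, then pull its details.
if servers:
    server_id = servers[0]["server.configurationId"]  # field name is illustrative
    discovery.create_tags(
        configurationIds=[server_id],
        tags=[{"key": "migration-wave", "value": "1"}],
    )
    details = discovery.describe_configurations(configurationIds=[server_id])

# Export everything in CSV form for offline analysis.
export_id = discovery.export_configurations()["exportId"]
```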
Available Now
The AWS Application Discovery Service is available now via our APN Partners and AWS Professional Services. To learn more, read the Application Discovery Service User Guide and the Application Discovery Service API Reference.
AWS Week in Review – April 25, 2016
Let’s take a quick look at what happened in AWS-land last week:
New & Notable Open Source
- dynamodb-lambda-autoscale autoscales DynamoDB using an AWS Lambda function.
- chaos-lambda randomly terminates EC2 instances within an Auto Scaling Group during business hours.
- local-node-lambda lets you run Lambda functions locally.
- node-lambda is a command line tool to locally run and then deploy Node.js applications to Lambda.
- patrol-rules-aws is a set of rules implemented using lambda-cfn that monitors AWS infrastructure for best practices, security, and compliance.
- checkall runs commands against every EC2 instance within an account.
- lambda-dynamodb-local is a container-based local runtime for Lambda & DynamoDB.
- aws-lambda-rdbms-integration integrates Lambda with relational databases.
- AWSTrycorder is a cross-account data collector for AWS.
- ssh-everywhere integrates ssh and tmux with the AWS CLI to create tmux sessions with a pane for each EC2 instance.
New SlideShare Presentations
- AWS Summit Manila.
New Customer Success Stories
- ACTi Corporation.
- Bandai Namco Studios.
- Duolingo.
- Gannett.
- GE Oil & Gas.
- HERE.
- Kellogg’s.
- RWE Czech Republic.
Upcoming Events
- May 5 – Live Event (Palo Alto, CA) – AWS Big Data Meetup – Machine Learning in the Cloud.
- May – AWS Partner Webinars.
- AWS Zombie Microservices Roadshow.
Help Wanted
Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.
— Jeff;
GE Oil & Gas – Digital Transformation in the Cloud
GE Oil & Gas is a relatively young division of General Electric, the product of a series of acquisitions made by the parent company starting in the late 1980s. Today GE Oil & Gas is pioneering the digital transformation of the company. In the guest post below, Ben Cabanas, the CTO of GE Transportation and formerly the cloud architect for GE Oil & Gas, talks about some of the key steps involved in a major enterprise cloud migration, the theme of his recent presentation at the 2016 AWS Summit in Sydney, Australia.
You may also want to learn more about Enterprise Cloud Computing with AWS.
— Jeff;
Challenges and Transformation
GE Oil & Gas is at the forefront of GE’s digital transformation, a key strategy for the company going forward. The division is also operating at a time when the industry is facing enormous competitive and cost challenges, so embracing technological innovation is essential. As GE CIO Jim Fowler has noted, today’s industrial companies have to become digital innovators to thrive.
Moving to the cloud is a central part of this transformation for GE. Of course, that’s easier said than done for a large enterprise division of our size, global reach, and role in the industry. GE Oil & Gas has more than 45,000 employees working across 11 different regions and seven research centers. About 85 percent of the world’s offshore oil rigs use our drilling systems, and we spend $5 billion annually on energy-related research and development—work that benefits the entire industry. To support all of that work, GE Oil & Gas has about 900 applications, part of a far larger portfolio of about 9,000 apps used across GE. A lot of those apps may have 100 users or fewer, but are still vital to the business, so it’s a huge undertaking to move them to the cloud.

Our cloud journey started in late 2013 with a couple of goals. We wanted to improve productivity in our shop floors and manufacturing operations. We sought to build applications and solutions that could reduce downtime and improve operations. Most importantly, we wanted to cut costs while improving the speed and agility of our IT processes and infrastructure.
Iterative Steps
Working with AWS Professional Services and Sogeti, we launched the cloud initiative in 2013 with a highly iterative approach. In the beginning, we didn’t know what we didn’t know, and had to learn agile as well as how to move apps to the cloud. We took steps that, in retrospect, were crucial in supporting later success and accelerated cloud adoption. For example, we sent more than 50 employees to Seattle for training and immersion in AWS technologies so we could keep critical technical IP in-house. We built foundational services on AWS, such as monitoring, backup, DNS, and SSO automation that, after a year or so, fostered the operational maturity to speed the cloud journey. In the process, we discovered that by using AWS, we can build things at a much faster pace than what we could ever accomplish doing it internally.

Moving to AWS has delivered both cost and operational benefits to GE Oil & Gas.
We architected for resilience, and strove to automate as much as possible to reduce touch times. Because automation was an overriding consideration, we created a “bot army” that is aligned with loosely coupled microservices to support continuous development without sacrificing corporate governance and security practices. We built in security at every layer with smart designs that could insulate and protect GE in the cloud, and set out to measure as much as we could—TCO, benchmarks, KPIs, and business outcomes. We also tagged everything for greater accountability and to understand the architecture and business value of the applications in the portfolio.
Moving Forward
All of these efforts are now starting to pay off. To date, we’ve realized a 52 percent reduction in TCO. That stems from a number of factors, including the bot-enabled automation, a push for self-service, dynamic storage allocation, using lower-cost VMs when possible, shutting off compute instances when they’re not needed, and moving from Oracle to Amazon Aurora. Ultimately, these savings are a byproduct of doing the right thing.
The other big return we’ve seen so far is an increase in productivity. With more resilient, cloud-enabled applications and a focus on self-service capability, we’re getting close to a “NoOps” environment, one where we can move away from “DevOps” and “ArchOps,” and all the other “ops,” using automation and orchestration to scale effectively without needing an army of people. We’ve also seen a 50 percent reduction in “tickets” and a 98 percent reduction in impactful business outages and incidents—an unexpected benefit that is as valuable as the cost savings.

For large organizations, the cloud journey is an extended process. But we’re seeing clear benefits and, from the emerging metrics, can draw a few conclusions. NoOps is our future, and automation is essential for speed and agility—although robust monitoring and automation require investments of skill, time, and money. People with the right skills sets and passion are a must, and it’s important to have plenty of good talent in-house. It’s essential to partner with business leaders and application owners in the organization to minimize friction and resistance to what is a major business transition. And we’ve found AWS to be a valuable service provider. AWS has helped move a business that was grounded in legacy IT to an organization that is far more agile and cost efficient in a transformation that is adding value to our business and to our people.
— Ben Cabanas, Chief Technology Officer, GE Transportation
Register Now – AWS DevDay in San Francisco
I am a firm believer in the value of continuing education. These days, the half-life of knowledge on any particular technical topic seems to be less than a year. Put another way, once you stop learning, your knowledge base will be just about obsolete within 2 or 3 years!
In order to make sure that you stay on top of your field, you need to decide to learn something new every week. Continuous learning will leave you in a great position to capitalize on the latest and greatest languages, tools, and technologies. By committing to a career marked by lifelong learning, you can be sure that your skills will remain relevant in the face of all of this change.
Keeping all of this in mind, I am happy to be able to announce that we will be holding an AWS DevDay in San Francisco on June 21st. The day will be packed with technical sessions, live demos, and hands-on workshops, all focused on some of today’s hottest and most relevant topics. If you attend the AWS DevDay, you will also have the opportunity to meet and speak with AWS engineers and to network with the AWS technical community.
Here are the tracks:
- Serverless – Build and run applications without having to provision, manage, or scale infrastructure. We will demonstrate how you can build a range of applications from data processing systems to mobile backends to web applications.
- Containers – Package your application’s code, configurations, and dependencies into easy-to-use building blocks. Learn how to run Docker-enabled applications on AWS.
- IoT – Get the most out of connecting IoT devices to the cloud with AWS. We will highlight best practices using the cloud for IoT applications, connecting devices with AWS IoT, and using AWS endpoints.
- Mobile – When developing mobile apps, you want to focus on the activities that make your app great and not the heavy lifting required to build, manage, and scale the backend infrastructure. We will demonstrate how AWS helps you easily develop and test your mobile apps and scale to millions of users.
We will also be running a series of hands-on workshops that day:
- Zombie Apocalypse Workshop: Building Serverless Microservices.
- Develop a Snapchat Clone on AWS.
- Connecting to AWS IoT.
Registration and Location
There’s no charge for this event, but space is limited and you need to register quickly in order to attend.
All sessions will take place at the AMC Metreon at 135 4th Street in San Francisco.
— Jeff;
Hot Startups on AWS – April 2016 – Robinhood, Dubsmash, Sharethrough
Continuing with our focus on hot AWS-powered startups (see Hot Startups on AWS – March 2016 for more info), this month I would like to tell you about:
- Robinhood – Free stock trading to democratize access to financial markets.
- Dubsmash – Bringing joy to communication through video.
- Sharethrough – An all-in-one native advertising platform.
Robinhood
The founders of Robinhood graduated from Stanford and then moved to New York to build trading platforms for some of the largest financial institutions in the world. After seeing that these institutions charged investors up to $10 to place trades that cost almost nothing, they moved back to California with the goal of democratizing access to the markets and empowering personal investors.
Starting with the idea that a technology-driven brokerage could operate with significantly less overhead than a traditional firm, they built a self-serve service that allows customers to sign up in less than 4 minutes. To date, their customers have transacted over $3 billion while saving over $100 million in commissions.
After a lot of positive pre-launch publicity, Robinhood debuted with a waiting list of nearly a million people. Needless to say, they had to pay attention to scale from the very beginning. Using 18 distinct AWS services, an initial team of just two DevOps engineers built the entire system. They use AWS Identity and Access Management (IAM) to regulate access to services and data, simplifying their all-important compliance efforts. The Robinhood data science team uses Amazon Redshift to help identify possible instances of fraud and money laundering. Next on the list is international expansion, with plans to make use of multiple AWS Regions.
Dubsmash
The founders of Dubsmash had previously worked together to create several video-powered applications. As the cameras in smartphones continued to improve, they saw an opportunity to create a platform that would empower people to express themselves visually. Starting simple, they built their first prototype in a couple of hours. The functionality was minimal: play a sound, select a sound, record a video, and share. The initial response was positive and they set out to build the actual product.
The resulting product, Dubsmash, allows users to combine video with popular sound bites and to share the videos online – with a focus on modern messaging apps. The founders began working on the app in the summer of 2014 and launched the first version the following November. Within a week it reached the top spot in the German App Store. As often happens, early Dubsmash users have put the app to use in intriguing and unanticipated ways. For example, Eric Bruce uses Dubsmash to create entertaining videos of him and his young son Jack to share with Priscilla (Eric’s wife / Jack’s mother) (read Watch A Father and His Baby Son Adorably Master Dubsmash to learn more).
Dubsmash uses Amazon Simple Storage Service (S3) for video storage, with content served up through Amazon CloudFront. They have successfully scaled up from their MVP and now handle requests from millions of users. To learn more about their journey, read their blog post, How to Serve Millions of Mobile Clients with a Single Core Server.
Sharethrough
Way back in 2008, a pair of Stanford graduate students were studying the concept of virality and wanted to create ads that would deserve your attention rather than simply stealing it. They created Sharethrough, an all-in-one native advertising platform for publishers, app developers, and advertisers. Today the company employs more than 170 people and serves over 3 billion native ad impressions per month.
Sharethrough includes a mobile-first content-driven platform designed to engage users with quality content that is integrated into the sites where it resides. This allows publishers to run premium ads and to maintain a high-quality user experience. They recently launched an AI-powered guide that helps to maximize the effectiveness of ad headlines.
Sharethrough’s infrastructure is hosted on AWS, where they make use of over a dozen high-bandwidth services, including Amazon Kinesis and Amazon DynamoDB, to handle the scale of the technical challenges they face. Relying on AWS allows them to focus on their infrastructure-as-code approach, using tools like Packer and Terraform for provisioning, configuration, and deployment. Read their blog post (Ops-ing with Packer and Terraform) to learn more.
— Jeff;
