Posted:
As part of our constant improvements to the Google Cloud Platform console we’ve recently updated our Google Compute Engine quotas page. Now you can easily see quota consumption levels and sort to find your most-used resources. This gives you a head start on determining and procuring any additional capacity you need so you hit fewer speed bumps on your road to growth and success.
We’ve also improved the process of requesting more quota, which can be initiated directly from the quotas page by clicking on the “Request increase” button. We’ve added additional checks to the request form that help speed up our response processing time; now most requests are completed in minutes. With these changes, we’re making it even easier to do more with Cloud Platform.

You can access your console at https://console.cloud.google.com and learn more about how GCP can help you build better applications faster at https://cloud.google.com.

Posted by Roy Peterkofsky, Product Manager

Posted:
Containers are all the rage right now. There are scores of best practices papers and tutorials out there, and "Intro to Containers" sessions at just about every conference even tangentially related to cloud computing. You may have read through the Docker docs, launched an NGINX Docker container, and read through Miles Ward’s Introduction to containers and Kubernetes piece. Still, containers can be a hard concept to internalize, especially if you have an existing application that you’re considering containerizing.

To help you through this conceptual hurdle, I’ve written a four-part series of blog posts that gives you a hands-on introduction to building, updating, and using containers for something familiar: running a Minecraft server. You can check them out here:


In the first part of the series, you’ll learn how to create a container image that includes everything a Minecraft server needs, use that image on Google Compute Engine to run the server, and make it accessible from your Minecraft client. You’ll use the Docker command-line tools to build, test, and run the container, as well as to push the image up into the Google Container Registry for use with a container-optimized instance.


Next, you'll work through the steps needed to separate out storage from the container and learn how to make regular backups of your game. If you’ve ever made a mistake in Minecraft, you know how critical being able to restore world state can be! As Minecraft is always more fun when it’s customized, you'll also learn how to update the container image with modifications you make to the server.properties file.

Finally, you’ll take the skills that you’ve learned and apply them to making something fun and slightly absurd: Minecraft Roulette. This application allows you to randomly connect to one of several different Minecraft worlds using a single IP as your entry point. As you work through this tutorial, you’ll learn the basics of Kubernetes, an open source container orchestrator.

By the end of the series, you’ll have grasped the basics of containers and Kubernetes, and will be set to go out and containerize your own application. Plus, you’ll have had the excuse to play a little Minecraft. Enjoy!

This blog post is not approved by or associated with Mojang or Minecraft.

Posted by Julia Ferraioli, Senior Developer Advocate, Google Cloud Platform

Posted:
When you write applications that run on Google Compute Engine instances, you might want to connect them to Google Cloud Storage, Google BigQuery, and other Google Cloud Platform services. Those services use OAuth2, the global standard for authorization, to help ensure that only the right callers can make the right calls. Unfortunately, OAuth2 has traditionally been hard to use. It often requires specialized knowledge and a lot of boilerplate auth setup code just to make an initial API call.

Today, with Application Default Credentials (ADC), we're making things easier. In many cases, all you need is a single line of auth code in your app:

Credential credential = GoogleCredential.getApplicationDefault();
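
To put that one line in context, here is a minimal, hedged sketch of using ADC to call the Cloud Storage JSON API from Java, assuming the google-api-java-client and Cloud Storage API client libraries; the project ID is a placeholder and exact class names may vary by library version:

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.storage.Storage;
import com.google.api.services.storage.StorageScopes;
import java.util.Collections;

public class AdcExample {
  public static void main(String[] args) throws Exception {
    // ADC checks the GOOGLE_APPLICATION_CREDENTIALS environment variable,
    // then gcloud's stored credentials, then the built-in service account
    // when running on Compute Engine or App Engine.
    GoogleCredential credential = GoogleCredential.getApplicationDefault();
    if (credential.createScopedRequired()) {
      credential = credential.createScoped(
          Collections.singleton(StorageScopes.DEVSTORAGE_READ_ONLY));
    }
    // Use the credential with any Cloud API client, e.g. Cloud Storage.
    Storage storage = new Storage.Builder(
            GoogleNetHttpTransport.newTrustedTransport(),
            JacksonFactory.getDefaultInstance(),
            credential)
        .setApplicationName("adc-example")
        .build();
    // "my-project" is a placeholder project ID.
    System.out.println(storage.buckets().list("my-project").execute());
  }
}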

If you're not already familiar with auth concepts, including 2LO, 3LO, and service accounts, you may find this introduction useful.

ADC takes all that complexity and packages it behind a single API call. Under the hood, it makes use of:
  • 2-legged vs. 3-legged OAuth (2LO vs. 3LO) -- OAuth2 includes support for user-owned data, where the user, the API provider, and the application developer all need to participate in the authorization dance. Most Cloud APIs don't deal with user-owned data, and therefore can use much simpler two-party flows between the API provider and the application developer.
  • gcloud CLI -- while you're developing and debugging your app, you probably already use the gcloud command-line tool to explore and manage Cloud Platform resources. ADC lets your application piggyback on the auth flows in gcloud, so you only have to set up your credentials once.
  • service accounts -- if your application runs on Google App Engine or Google Compute Engine, it automatically has access to the built-in "service account", which helps the API provider trust that the API calls are coming from a trusted source. ADC lets your application benefit from that trust.

You can find more about Google Application Default Credentials here. ADC is available for Java, Python, Node.js, Ruby, and Go. Libraries for PHP and .NET are in development.

- Posted by Vijay Subramani, Technical Program Manager, Google Cloud Platform

Posted:
Local SSD has shown fantastic price-performance throughout beta, and today we are excited to announce that it's now generally available to Google Compute Engine customers in all regions around the world.

The Local SSD feature lets customers attach between 1 and 4 SSD partitions of 375 GB to any full core VM and have dedicated use of those partitions. And, it provides an extremely high number of IOPS (680k random 4K read IOPS). Unlike Persistent Disk, Local SSD doesn’t have redundancy. This is ideal for highly demanding applications that provide their own replication (such as many modern databases and Hadoop), as well as for scratch space for intense computational applications. Local SSD is also a great supplement for memory due to high IOPS and low price per GB.
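
For illustration, here is a hedged sketch of attaching local SSD partitions at instance-creation time with the Cloud SDK; the instance name is hypothetical and flag spellings may differ across gcloud releases (repeat --local-ssd once per partition):

gcloud compute instances create my-ssd-instance \
    --zone us-central1-f \
    --machine-type n1-standard-4 \
    --local-ssd interface=SCSI \
    --local-ssd interface=SCSI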

Local SSD is competitively priced at $0.218/GB/month. For those used to buying Local SSD attached to VMs, this comes to $0.0003/GB/hour. If re-calculated as price per-IOPS, it comes to the extremely low price of $0.00048/read-IOPS/month.

We’ve had a fantastic response to the beta of Local SSD. For example, Cloud Harmony has shown that Google Compute Engine Local SSD achieves the highest number of 4K random IOPS across all of the VM-storage combinations tested. And it’s even better if normalized for price!

In addition, we’ve already received positive feedback from our customers. Aerospike achieved RAM-like performance and 15x cost advantage with the use of Local SSD as a supplement for RAM on their NoSQL database. And Gyazo.com is using Local SSD as a supplement for RAM for MongoDB, which works out perfectly due to low price and high performance of Local SSD. Isshu Rakusai from Gyazo.com said: “Local SSD on Google Compute Engine is very fast. It has allowed us to decrease our RAM usage, and made us more cost efficient.”

Learn more in the local SSD beta announcement or product documentation. And try out our top IOPS performance with your own VMs.

- Posted by Kirill Tropin, Product Manager

Posted:
Today at Atmosphere Live, I spoke about how Google is helping developers realize the promise of cloud computing by providing on-demand access to world-class technology at an affordable price.

We believe that compute — the core of any cloud workload — should be simple and fast to provision, scale without effort, and be priced in accordance with Moore’s Law. In March of this year we set a new standard for economics in the public cloud when we brought the price of core infrastructure, including compute & storage, in line with where it should be.

And, as predicted by Moore’s Law, we can now lower prices again. Effective immediately, we are cutting prices of Google Compute Engine by approximately 10% for all instance types in every region. These cuts are a result of increased efficiency in our data centers as well as falling hardware costs, allowing us to pass on lower prices to our customers.
Old and new prices for all our Compute Engine instance types

Using Compute Engine doesn’t just lower costs; it makes developers more productive, agile and efficient. Many development teams spend about 80% of their time on what we call “fix and fiddle,” such as managing systems, fixing bugs and just keeping the lights on. Only 20% of their time is spent how it should be — building new products or systems that will be platforms for growth.

With Compute Engine and the rest of Cloud Platform, it doesn’t have to be this way. A small company like Snapchat can reach a global audience with just a few people on their development and operations team. Workiva, which processes financial reports for 60% of the Fortune 500, can focus on solving the needs of their users rather than managing infrastructure. And, this past World Cup, Coca Cola and Cloud Platform partner CI&T built and ran the Happiness Flag campaign in just a few weeks with the help of Google Compute Engine. The campaign solicited over three million contributions from fans in more than 200 countries.

We've made a lot of progress in the past year and look forward to what's coming next. Tune in to Google Cloud Platform Live on November 4th to learn more about where we’re headed.

-Posted by Urs Hölzle, Senior Vice President, Technical Infrastructure

Posted:
We continue to make improvements to Google Cloud Platform and deliver new and better capabilities to enable our developers to add resilience, performance and robustness to their applications on Google Compute Engine.

In June, we unveiled Google’s HTTP load balancing to the world. Since then, many Cloud Platform customers have signed up and experienced its performance. Customers who are using it for their production site experience a significant performance benefit. With countless hours of real-world testing and feedback from you, we are now ready to open it for preview to all customers. As part of the launch, we have added HTTP load balancing to the Developers Console and also added some new commands in gcloud to let you easily administer and monitor HTTP load balancing.

Built on top of the same frontend infrastructure as Google’s own services such as search, Gmail and YouTube, Google’s HTTP load balancing can:

  • load balance HTTP-based traffic over instances in multiple Compute Engine regions,
  • intelligently select the optimal path between your users and your instances by using network proximity and backend capacity information,
  • expose your entire app via a single global external IP address, resulting in much simplified DNS setup,
  • filter out TCP SYN flood attacks,
  • route user requests to different backend groups based on host and URL path prefix, and
  • allow you to administer and monitor via RESTful API, Cloud SDK and the newly added UI in Developers Console (see the command-line sketch below).
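
As a rough, hedged sketch of how the pieces fit together from the command line (resource names are hypothetical, and exact flag spellings have shifted across Cloud SDK releases), an HTTP load balancer wires a global forwarding rule to a target proxy, a URL map and a backend service:

gcloud compute http-health-checks create basic-check
gcloud compute backend-services create web-backend \
    --protocol HTTP --http-health-checks basic-check --global
gcloud compute backend-services add-backend web-backend \
    --instance-group my-group --instance-group-zone us-central1-b --global
gcloud compute url-maps create web-map --default-service web-backend
gcloud compute target-http-proxies create web-proxy --url-map web-map
gcloud compute forwarding-rules create web-rule \
    --global --target-http-proxy web-proxy --ports 80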

Please review the latest documentation about features and region-free pricing. Also, our YouTube video from Google I/O in June shares some history of Google’s network infrastructure which gave rise to load balancing on Cloud Platform.

Thank you for your continued support. Happy load balancing!

-Posted by Gary Ling, Product Manager

Posted:
Ezakus, a leading data management platform, relies on Hadoop to process 600 million digital touch points generated by 40 million web and mobile users.

Fast growth created challenges in managing Ezakus’s existing Hadoop installation, so they tested different alternatives for running Hadoop. Their benchmarks found that Hadoop on Google Compute Engine provided processing speed that was three to four times better than the next-best cloud provider.

“Our benchmark tests used the Cloudera Hadoop distribution”, said Olivier Gardinetti, CTO. “We were careful to use identical infrastructure - the same logical CPU count, the same memory capacity and so forth. We also ran each test several times to ensure that outliers weren't skewing the results.”

When using MapReduce for basic stats processing of 20,469,283 browsing-history entries spanning one month, Compute Engine computed the stats in 1 minute and 3 seconds, four times faster than the alternative tested. When more complex queries were run in a second test, Compute Engine finished in 7 minutes and 47 seconds, three times faster than the closest alternative, which ran in 23 minutes and 31 seconds.

Ezakus can now deliver better performance and more predictions and serve more clients, “because we can more easily deploy all the servers in a very short time,” said Gardinetti. To learn more about their migration to Google Cloud Platform and the subsequent results for their business, read the case study here.

-Posted by Ori Weinroth, Product Marketing Manager

Posted:
Every software company today needs a place to store its code and collaborate with teammates. Today we are announcing a solution that can scale with your business. GitLab Community Server is a great way to get the benefits of collaborative development for your team wherever you want it. While GitLab already provides simple application installers, we wanted to take it one step further.

Today, we’re announcing Click to Deploy for the GitLab Community Server built on the following open source stack:
  • Nginx, a fast, minimal web server
  • Unicorn, a Ruby on Rails application server
  • Redis, a scalable caching service
  • PostgreSQL, a popular SQL database

Get your own, dedicated code collaboration server today!

Learn more about running the GitLab Community Server on Google Compute Engine at https://developers.google.com/cloud/gitlab.

-Posted by Brian Lynch, Solutions Architect

GitLab is a registered trademark of GitLab B.V. All other trademarks cited here are the property of their respective owners.

Posted:
Today’s guest post is by Florian Leibert, Mesosphere Co-Founder & CEO. Prior to Mesosphere, he was an engineering lead at Twitter, where he helped introduce Mesos; it now runs every new service there. He then went on to help build the analytics stack at Airbnb on Mesos. He is the main author of Chronos, an Apache Mesos framework for managing and scheduling ETL systems.

Mesosphere enables users to manage their datacenter or cloud as if it were one large machine. It does this by creating a single, highly-elastic pool of resources from which all applications can draw, creating sophisticated clusters out of raw compute nodes (whether physical machines or virtual machines). These Mesosphere clusters are highly available and support scheduling of diverse workloads on the same cluster, such as those from Marathon, Chronos, Hadoop, and Spark. Mesosphere is based on the open source Apache Mesos distributed systems kernel used by customers like Twitter, Airbnb, and Hubspot to power internet scale applications. Mesosphere makes it possible to develop and deploy applications faster with less friction, operate them at massive scale with lower overhead, and enjoy higher levels of resiliency and resource efficiency with no code changes.

We’re collaborating with Google to bring together Mesosphere, Kubernetes and Google Cloud Platform to make it even easier for our customers to run applications and containers at scale. Today, we are excited to announce that we’re bringing Mesosphere to the Google Cloud Platform with a web app that enables customers to deploy Mesosphere clusters in minutes. In addition, we are also incorporating Kubernetes into Mesos to manage the deployment of Docker workloads. Together, we provide customers with a commercial-grade, highly-available and production-ready compute fabric.

With our new web app, developers can spin up a Mesosphere cluster on Cloud Platform in just a few clicks, using either standard or custom configurations. The app automatically installs and configures everything you need to run a Mesosphere cluster, including the Mesos kernel, ZooKeeper and Marathon, as well as OpenVPN so you can log into your cluster. We’re also excited that this functionality will soon be incorporated into the Google Cloud Platform dashboard via the click-to-deploy feature. There is no cost for using this service beyond the charges for running the configured instances on your Google Cloud Platform account. To get started with our web app, simply log in with your Google credentials and spin up a Mesos cluster.




We are also incorporating Kubernetes into Mesos and our Mesosphere ecosystem to manage the deployment of Docker workloads. Our combined compute fabric can run anywhere, whether on Google Cloud Platform, your own datacenter, or another cloud provider. You can schedule Docker containers side by side on the same Mesosphere cluster as other Linux workloads such as data analytics tasks like Spark and Hadoop and more traditional tasks like shell scripts and jar files.



Whether you are running massive, internet scale workloads like many of our customers, or you are just getting started, we think the combination of Mesos, Kubernetes, and Google Cloud Platform will help you build your apps faster, deploy them more efficiently, and run them with less overhead. We look forward to working with Google to make Cloud Platform the best place to run traditional Mesosphere workloads, such as Marathon, Chronos, Hadoop, or Spark—or newer Kubernetes workloads. And they can all be run together while sharing resources on the same cluster using Mesos. Please take Mesosphere for Google Cloud Platform for a test drive and let us know what you think.


- Contributed by Florian Leibert, Mesosphere Co-Founder & CEO

Posted:
If you’re starting out today, there are a number of development stacks to choose from. From the original LAMP (Linux, Apache, MySQL, PHP) to the myriad other choices, there is a development stack to match your language and experience. For the Node.js fans out there, the MEAN stack is a great option. Wouldn’t it be awesome if you could launch your favorite development stack with the click of a button?

Today, we’re announcing the first Click to Deploy development stack on Google Compute Engine. MEAN provides you with the best of open source software today:

  • MongoDB, a leading NoSQL database
  • Express, a minimal and flexible Node.js web application framework
  • AngularJS, an extensible JavaScript framework for responsive applications
  • Node.js, a platform built on Chrome’s JavaScript runtime for server-side JavaScript

With a single button click, you can launch a complete MEAN development stack ready for development! Click to Deploy for MEAN handles all software installation and sets up a sample app for you to get started.

So, get out and click to deploy your MEAN development stack today!

Learn more about running the MEAN development stack on Google Compute Engine at https://developers.google.com/cloud/mean.

-Posted by Brian Lynch,  Solutions Architect

MEAN.io is a registered trademark of Linnovate Technologies Ltd. All other trademarks cited here are the property of their respective owners.

Posted:
Accessing a Google Compute Engine VM instance via Secure Shell (SSH) is a common developer task, but when you’re configuring or managing your application, doing so can take you out of the context of what you’re currently working on. Worse yet, a problem might occur when you don’t have your normal computer handy and you have to try to access the VM from a different device without your developer tools installed.

So, we asked ourselves how we could make it quicker for you to access your Compute Engine VMs in any situation. Last month we introduced the ability to SSH into a VM using the gcloud compute command in the Google Cloud SDK, which works out of the box across all major operating systems. However, we wanted to simplify things even further.
Our answer was simple: make it possible for you to SSH directly to your VM without leaving the Developers Console in your browser. We put forward a few ground rules: it had to be secure, and it shouldn’t require an extension or any additional software downloads.

The result? Recently we rolled out the ability for anyone with edit access to your project to open an SSH connection and terminal session from directly within the Developers Console website with no additional installations. To keep your session secure, we ensure private keys are never transmitted over the wire, and that all SSH traffic is encrypted before leaving your browser.

Opening up a session is easy. From inside the Developers Console, all you need to do is open up your project, navigate to the VM instances tab under COMPUTE > COMPUTE ENGINE and then click the SSH button. A new window will appear with the connection progress displayed. This works with current versions of the major web browsers (Google Chrome, Mozilla Firefox and Microsoft Internet Explorer 11) with no additional download required.

We also support a common case where only one “frontend” VM on the project has an external IP address, and the rest of the “backend” VMs are not routable from the public Internet. To make SSHing into those instances possible from the browser, we support agent forwarding -- you can SSH into the instance with an external IP from the Cloud Console and then “ssh -A” into the non-external-IP instances using their IP addresses on the private network, as sketched below.
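
For example (the instance name, zone and private IP below are hypothetical), the hop looks roughly like this:

# In the browser SSH window on the frontend VM, use agent forwarding
# to reach a backend VM over the private network:
ssh -A 10.240.0.5

# The equivalent first hop from a workstation with the Cloud SDK:
gcloud compute ssh frontend-vm --zone us-central1-a --ssh-flag="-A"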

We’ve also tried to pack a couple of extra goodies in for you. First, we keep your connection safe and secure: we use only https, we generate a private key for each session and never transmit it over the wire, and we encrypt all your SSH data before it leaves the browser (that is, SSH encryption in addition to https). Under the gear icon, you can change to a light theme if you prefer, navigate back to the instance details page in the console in case you closed it, or start a new connection to the same instance in case you need multiple connections.
As more of your developer workflow moves into the web browser, we’re committed to bridging the gap between the command line and the web browser as seamlessly as possible. We’re interested in hearing more ways we can do so for you. As you can see under the gear icon, we’ve also included a way for you to send us your feedback -- please send us your thoughts.

-Posted by Cody Bratt, Product Manager

Posted:
We’ve continued to ship features and tools to make it easier to build your application on Google Compute Engine. In addition, Compute Engine played a key role in a number of recent customer success stories - including CI&T and Coca Cola, Screenz and ABC’s Rising Star, AllTheCooks, and Fastly and Brightcove. Here are a few more updates for Google Compute Engine we wanted to share.

New Zones in US and Asia
We've added a third zone to both us-central1 and asia-east1 regions, making it easier to use Compute Engine to run systems like MongoDB that use a quorum-based architecture for high availability. The new zones, us-central1-f and asia-east1-c, both support transparent maintenance right out of the gate.

SSD Persistent Disk is generally available
On June 16th, we announced the limited preview of SSD-backed persistent disks, which gives you great price and performance for high-IOPS workloads. On June 25th at Google I/O, we made SSD persistent disks generally available in all Google Compute Engine zones. For a great overview of Google Cloud Platform’s block storage options, including how to decide which one is best suited for your use case, watch this video by our storage guru, Jay Judkowitz. Visit the docs pages to find additional details, including instructions on how to use persistent disks with Compute Engine. Finally, this whitepaper gives you a great overview of best practices for using persistent disks.

Easier image creation from persistent disk
Speaking of persistent disks, we've made it easier for developers to create custom images right from their root persistent disks. You can now specify an existing persistent disk as the source for your Images:insert API call or gcutil addimage CLI command. To get the full scoop, be sure to check out the image creation documentation. Image creation from persistent disk makes it possible to create custom images for your Windows instances too.
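
As a hedged illustration using today’s Cloud SDK syntax (resource names are hypothetical; the gcutil equivalent at the time was the addimage command mentioned above), creating an image from a root persistent disk looks roughly like:

gcloud compute images create my-custom-image \
    --source-disk my-root-disk --source-disk-zone us-central1-a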

-Posted by Scott Van Woudenberg, Product Manager

Posted:
If you saw our post about Cassandra hitting 1 million writes per second on Google Compute Engine, then you know we’re getting serious about open source NoSQL. We’re making it easier to run the software you love at the scale you need with the reliability of Google Cloud Platform. With over a dozen different virtual machine types, and the great price for performance of persistent disks, we think Google Compute Engine is a fantastic place for Apache Cassandra.

Today, we’re making it even easier to launch a dedicated Apache Cassandra cluster on Google Compute Engine. All it takes is one click after you provide some basic information, such as the size of the cluster. In a matter of minutes, you get a complete Cassandra cluster deployed and configured.

Each node is automatically configured for the cloud including:
  • Configured with the GoogleCloudSnitch for Google Cloud Platform awareness
  • Writes tuned for Google Persistent Disk
  • JVM tuned to perform on Google Compute Engine instances

The complete set of tuning parameters can be found on the Click to Deploy help page.
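
For instance, the snitch setting mentioned above lands in cassandra.yaml; a minimal, hedged excerpt (other tuning values omitted) might look like:

# cassandra.yaml (excerpt): make Cassandra aware of GCE regions and zones
endpoint_snitch: GoogleCloudSnitch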

So, get out and click to deploy your Cassandra cluster today!

Learn more about running Apache Cassandra on Google Compute Engine at https://developers.google.com/cloud/cassandra.

-Posted by Brian Lynch, Solutions Architect

Cassandra is a registered trademark of the Apache Software Foundation. All other trademarks cited here are the property of their respective owners.

Posted:
Today’s guest blog comes from Jim Totton, Vice President and General Manager, Platform Business Unit at Red Hat

Red Hat Enterprise Linux Atomic Host is now available as a technology preview on Google Compute Engine for customers participating in the Red Hat Enterprise Linux 7 Atomic special interest group (SIG). The inaugural SIG is focused on application containers and encompasses the technologies that are required to create, deploy and manage application containers.

Google and Red Hat actively collaborate on container technologies, approaches and best practices. Both companies are committed to standards for container management, interoperability and orchestration. As a gateway to the open hybrid cloud, application containers enable new possibilities for customers and software providers, including application portability, deployment choice and hyperscale and resilient architectures - whether on-premise or in the cloud.

At Red Hat Summit in April, Red Hat announced our vision for Linux Containers and expanded the Red Hat Enterprise Linux 7 High Touch Beta program to include Red Hat Enterprise Linux Atomic Host – a secure, lightweight and minimal footprint operating system optimized to run Linux Containers. Moving forward, Red Hat will work closely with these participants, with assistance from Google, to support them as they explore application containers. This will help us both gather important requirements and feedback on use cases for these technologies and enable the hybrid cloud for our joint customers.

We also announced today that Red Hat and Google are collaborating to tackle the challenge of how to manage application containers at scale, across hundreds or thousands of hosts. Red Hat will be joining the Kubernetes community and actively contributing code. Earlier today on our blog, we wrote:

Red Hat is embracing the Google Kubernetes project and plans to work to enable it with container management capabilities in our products and offerings. This will enable Red Hat customers to take advantage of cluster management capabilities in Kubernetes, to orchestrate Docker containers across multiple hosts, running on-premise, on Google Cloud Platform or in other public or private clouds. As part of this collaboration, Red Hat will become core committers to the Kubernetes project. This supports Red Hat’s open hybrid cloud strategy that uses open source to enable application portability across on-premise datacenters, private clouds and public cloud environments.

Both Google and Red Hat recognize the importance of delivering containerized applications that are secure, supported and exhibit a chain of trust. Red Hat's Container Certification program, launched in March 2014, supports this commitment and is designed to help deliver containerized applications that “work as intended” to trusted destinations within the hybrid cloud for software partners and end-customers.

Follow the Red Hat Enterprise Linux Blog to stay informed about Red Hat’s work on technologies required to create, deploy, and manage application containers.

-Contributed by Jim Totton, Vice President and General Manager, Platform Business Unit at Red Hat

Posted:
With over a dozen different virtual machine types, and the great price for performance of persistent disks, we think Google Compute Engine is a great place for running MongoDB. We are excited to be working with MongoDB to bring together documentation, resources, and partners to help you get up and running, to build, and to scale your application.

For self-managed production deployments, you have a range of software management options, such as Puppet, Chef, Salt, and Ansible. For fully-managed deployments, you can work with partners such as MongoLab.

But when you are just getting a project started, you want to get a proving ground up and running quickly. You'd also like to find the documentation you need in one place. You'll find the resources you need to deploy MongoDB on Compute Engine at http://cloud.google.com/solutions/mongodb.

With Click to Deploy MongoDB available in the Developer Console, you'll be able to bring up a Compute Engine cluster running MongoDB in a few minutes. Through the web interface, you can choose how many MongoDB server and arbiter nodes to deploy, their associated virtual machine types, and the size of your data disk volumes.

We'll continue to add features to the Click to Deploy infrastructure and look to implement best practices for MongoDB on Google Compute Engine. Here's one key tip you can find in the MongoDB on Google Compute Engine solutions paper:

Put your MongoDB journal files and data files on the same disk 
It is a common recommendation to separate components onto different storage devices. However, persistent disk already stripes data across a very large number of volumes, so there is no need to do it yourself. 
MongoDB journal data is small, and putting it on its own disk means either creating a small disk with insufficient performance or creating a large disk that goes mostly unused. Put your MongoDB journal files on the same disk as your data; putting them on a small, separate persistent disk would dramatically decrease the performance of database writes.

Check it all out at http://cloud.google.com/solutions/mongodb.

-Posted by Matt Bookman, Solutions Architect

MongoDB is a registered trademark of MongoDB, Inc. All other trademarks cited here are the property of their respective owners.

Posted:
Earlier this week, we announced the limited preview of SSD persistent disk. SSD persistent disk gives you the same great features as Standard persistent disk but is backed by SSD, so you get much faster IO performance—up to 30 IOPS per GB. A 100 GB volume, for example, can sustain 3,000 IOPS. On a per-GB basis, this is 20x more write IOPS and 100x more read IOPS than Standard PD! Also, while performance on Standard persistent disk is generally very consistent, SSD persistent disk is designed to perform even more consistently, delivering 30 IOPS per GB if you follow best practices.

SSD persistent disk costs only $0.325 per GB per month. Like Standard persistent disk, there is no additional cost for I/O. With SSD persistent disk, there is no need to plan for IOPS or to balance cost concerns with performance goals.

SSD persistent disk has two major benefits for customers:
  1. You can run your databases and file servers on Google Cloud Platform faster than ever before and move ever more demanding applications into Google Cloud Platform.
  2. You now have a simple and cost effective solution for handling small amounts of data that need high IO. When your application is I/O constrained, SSD persistent disk gives you up to 59% lower costs for read-heavy workloads, and up to 92% lower costs for write-heavy workloads.
SSD persistent disk can help you build faster and more cost effective cloud applications. "Our data suggests that storage represents 20% of overall cloud spend and up to 70% of the costs of a heavily loaded database tier,” said Sebastian Stadil, CEO of Scalr. “The extreme performance offered by the new SSD PDs, when coupled with their very low price and absence of fees per IO consumed, have made us accelerate our plans to move our workloads to Google Cloud Platform."

Please note the following details about using SSD persistent disk:
  1. This storage offering is targeted at highly transactional systems. Your MySQL, PostgreSQL, MongoDB, Cassandra and Redis databases should run incredibly fast. But this offering is not targeted at streaming large files sequentially. Use Standard persistent disk for bulk data or data that is read and written in sequential streams, and SSD persistent disk for your heavily transactional storage with more random access.
  2. SSD persistent disk counts IOs of 16KB and smaller as 1 IO. Larger IO sizes count as multiple IOs. For example, a 128KB IO counts as 8 IOs.
  3. For larger volumes, VM IO capabilities will limit how much IO can be expected from the volume. VM limits today for the larger VMs are 10,000 read IOPS, 15,000 write IOPS, 180 MB/s for reads and 120 MB/s for writes. These are numbers we are working hard to increase as we further optimize our infrastructure. Stay tuned for more here.
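As a quick, hedged sketch using current Cloud SDK syntax (the disk name is hypothetical, and during the limited preview your project needed access to the feature), provisioning an SSD persistent disk looks roughly like:

gcloud compute disks create my-ssd-disk \
    --size 100GB --type pd-ssd --zone us-central1-a

The resulting disk can then be attached to an instance like any other persistent disk.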
To learn more about SSD persistent disk, please read the documentation. If you’d like access to the limited preview, please request access here. As always, please keep in touch and give us feedback on how things are working, either through our technical support, the Google Compute Engine Discussion mailing list or the Google Compute Engine Stack Overflow forum.

-Posted by Jay Judkowitz, Senior Product Manager

Posted:
A couple of months ago, we shared some tips and tricks for bringing the power of Google Cloud Platform to your command line using the Google Cloud SDK and gcloud. Today we are announcing that the gcloud family of command-line tools is being joined by gcloud compute, a new command-line interface for Google Compute Engine.

Here are some guiding principles that we focused on while developing this new command-line tool. We certainly hope it will help you to be more productive with Google Compute Engine!
Great Windows support out-of-the-box
If you’re running on Windows, download and launch our new Google Cloud SDK installer. Once the installer completes, you should have gcloud compute available on your system.

Using gcloud compute, you can easily connect to your VM instances and manage files and running processes, from native Windows installations as well as from Linux and Mac machines, without any extra effort. For example, to SSH into your VM, simply run gcloud compute ssh INSTANCE_NAME --zone=us-central1-a.

To execute a process remotely on your VM, add the "--command" flag, e.g. gcloud compute ssh INSTANCE_NAME --command="ps aux" --zone=us-central1-a.

To copy files to and from your instances, use the copy-files command:
gcloud compute copy-files instance-1:file-1 instance-2:file-2 ... local-directory --zone=us-central1-a

gcloud compute copy-files file-1 file-2 ... instance:remote-directory --zone=us-central1-a


Finally, if you want to use SSH-based programs like ssh or scp directly, run gcloud compute config-ssh, which will populate your per-user SSH configuration file with "Host" entries for each instance.
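
For example (instance, zone, and project names below are hypothetical), config-ssh generates host aliases of the form INSTANCE.ZONE.PROJECT that plain ssh and scp can use directly:

gcloud compute config-ssh
ssh my-instance.us-central1-a.my-project
scp report.txt my-instance.us-central1-a.my-project:~/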

Improved scripting support
You can easily combine individual commands into actions which would require tens of button clicks in the graphical interface, or tens of lines when programming directly against APIs. Here is a simple example.

Run the following command to list all of your virtual machine instances:
gcloud compute instances list

Similarly, to delete a particular VM instance (say named my-old-vm), run the following command:
gcloud compute instances delete my-old-vm

Now, combine the previous two commands to delete all of your VMs in the us-central1-b zone (the --uri flag makes list emit full resource URIs that delete can consume):
gcloud compute instances delete $(gcloud compute instances list --zone=us-central1-b --uri)

Integrated documentation and command-line help
If you want to read a summary of a particular command’s usage, simply append -h to any command (for example, gcloud compute instances list -h). Similarly, append --help to any command to see its detailed man page entry on Linux or Mac. The detailed command-line reference for all platforms is also available on Cloud SDK’s documentation page.
Flexible output formatting
The gcloud compute instances list example above prints detailed information about VM instances in YAML format by default. If you want to get the same data in JSON or in a condensed, human-readable text format (for example, to send it over the wire or to import it into a spreadsheet), simply pass the "--format=json" or "--format=text" flag to any list operation in gcloud compute:

gcloud compute firewall-rules list --format=text or
gcloud compute zones list --format=json

Furthermore, you can use regular expressions via the --regexp flag to get information only about objects with specific names. For example, to return information about VM instances named my-instance-1, my-instance-2, and my-instance-3, run the following command:
gcloud compute instances list --regexp "my-instance-.*"

Finally, if you care just about certain fields in the output, you can select them by passing the "--fields" flag. For example, to see the disks attached to all your instances and their modes, run:
gcloud compute instances list --fields name disks[].source disks[].mode

Command suggestion and autocompletion
While gcloud command-line tools are great for scripting, they’re also comfortable for humans to use. On Linux and Mac, all gcloud commands can be autocompleted by pressing Tab (e.g. type gcloud compute fi and press Tab to have it expanded to gcloud compute firewall-rules). Similarly, you can type gcloud compute firewall-rules and press Tab twice to see the operations which you can perform with firewall-rules. Also, if you mistype a command, gcloud will suggest the nearest match.

Posted:
Cloud usage monitoring and analysis has served an important role in helping users optimize their cloud services. However, many users have expressed a need for more granular information about their Google Compute Engine usage. Today, we are making this possible with Compute Engine Usage Export, which enables you to easily export detailed reports about your usage data. Usage Export gives you resource-level insight into your Compute Engine usage. For example, you can monitor exactly how long a virtual machine has been running or how much storage space a persistent disk uses on a daily basis.

Similar to Billing Export, Compute Engine Usage Export allows you to export a CSV file with the detailed usage data to a Google Cloud Storage bucket you specify. This CSV file can then be accessed through the Cloud Storage API, CLI tool, or Console.
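
For instance (the bucket and file names below are hypothetical), you could pull a report down with the gsutil command-line tool:

gsutil ls gs://my-usage-export-bucket
gsutil cp gs://my-usage-export-bucket/myprefix_gce_2014-05-01.csv .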

The Usage Export feature provides both daily reports, which include usage data from the latest 24-hour period, and monthly rollup reports, which include monthly usage data up to the most current day.

This feature complements Billing Export to provide you with basic tools for monitoring, analyzing, and optimizing cost. You can link items between the two exports to easily identify charges for a particular resource.

Compute Engine Usage Export can be enabled from the Console as shown below.

The CSV files will be placed in a Cloud Storage bucket as shown in the Cloud Console example below. In this example, you can see one monthly rollup and three daily reports for May 2014.

The CSV file will contain line items for each resource during the time frame covered. In this example, you can see line items representing two different instances that have both run for the entire day in a daily report for May 1, 2014.

Report Date,MeasurementId,Quantity,Unit,Resource URI,ResourceId,Location
2014-05-01,com.google.cloud/services/compute-engine/VmimageN1Standard_1,86400,seconds,https://www.googleapis.com/compute/v1/projects/123456789/zones/us-central1-a/instances/testinstance1,13495613777116200000,us-central1-a
2014-05-01,com.google.cloud/services/compute-engine/VmimageN1Standard_1,86400,seconds,https://www.googleapis.com/compute/v1/projects/123456789/zones/us-central1-b/instances/testinstance2,17932867728944100000,us-central1-b

For those of you who’d like to see your usage data for May, we will automatically generate your full monthly usage report for May 2014 in the Cloud Storage bucket you have specified if you enable Compute Engine Usage Export by June 9, 2014.

To find out more about how to use the Compute Engine Usage Export, please check out the Usage Export documentation.

-Posted by Ken Sim, Product Manager

Posted:
Yesterday, Cloud Foundry demonstrated how you can use Cloud Foundry and the BOSH CPI with Google Compute Engine. Since Compute Engine became generally available in December 2013, we've seen an ever-increasing number of open source projects, partners, and other software vendors build in support for our platform.

Cloud Foundry's post covers using BOSH to deploy a Hadoop cluster on Compute Engine and manage it with Cloud Foundry. With Compute Engine's fast and consistent provisioning, Cloud Foundry was able to deploy a working Hadoop cluster in less than 3 minutes! So in a few short minutes, you are able to start your Hadoop processing. When combined with Compute Engine's sub-hour billing and sustained-use discounts, you have multiple options for keeping costs low.

- Posted by Eric Johnson, Program Manager


Posted:
Our guest blog post today comes from Brandon Philips, CTO at CoreOS, a new Linux distribution that has been rearchitected to provide features needed to run massive server deployments.

Google is an organization that fundamentally understands distributed systems, and it's no surprise that Compute Engine is a perfect base for your distributed applications running on CoreOS. The clustering features in CoreOS pair perfectly with VMs that boot quickly and have a super-fast network connecting them.

Google's wide variety of machine types allows you to create the most efficient cluster for your workloads. By setting machine metadata, CPU-intensive or RAM-hungry fleet units can be easily scheduled onto a subset of the cluster optimized for that workload.

CoreOS integrates with Google load balancers and replica pools to easily scale your applications across regions and zones. Using replica groups with CoreOS is easy: configure the project-level metadata to include a discovery URL and add as many machines as you need. CoreOS will automatically cluster new machines, and fleet will begin utilizing them. If a single machine requires more specific configuration, additional cloud-config parameters can be specified during boot.

The largest advantage to running on a cloud platform is access to platform services that can be used in conjunction with your cloud instances. Running on Compute Engine allows you to connect your front-end and back-end services running on CoreOS to a fully managed Cloud Datastore or Cloud SQL database. Applications that store user-generated content on Google Cloud Storage can easily start worker instances on the CoreOS cluster to process items as they are uploaded.

CoreOS uses cloud-config to configure machines after boot and automatically cluster them. Automatic clustering is achieved with a unique discovery token obtained from discovery.etcd.io.
$ curl https://discovery.etcd.io/new

https://discovery.etcd.io/b97f446100a293c8107500e11c34864b

Place this new discovery token into your cloud-config document:
$ cat cloud-config.yaml
#cloud-config

coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/b97f446100a293c8107500e11c34864b
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start

After generating your cloud-config, booting a 3-machine cluster can be done in a single command. Remember to substitute your unique project ID:
gcutil --project=<project-id> addinstance \
  --image=projects/coreos-cloud/global/images/coreos-beta-310-1-0-v20140508 \
  --persistent_boot_disk --zone=us-central1-a --machine_type=n1-standard-1 \
  --metadata_from_file=user-data:cloud-config.yaml core1 core2 core3

To show off fleet’s scheduling abilities, let’s submit and start a very simple Docker container that echoes a message. First, SSH onto one of the machines in the cluster. Remember to replace the project ID with your own:
$ gcutil --project=coreos ssh --ssh_user=core core1

Create a new unit file on disk that runs our container:
$ cat example.service

[Unit]
Description=MyApp
After=docker.service
Requires=docker.service

[Service]
RemainAfterExit=yes
ExecStart=/usr/bin/docker run busybox /bin/echo 'I was scheduled with fleet!'

To run this unit on your new cluster, submit it via fleetctl:
$ fleetctl start example.service
$ fleetctl list-units

UNIT             STATE    LOAD    ACTIVE SUB   DESC   MACHINE

example.service  launched loaded active exited MyApp b603fc4d.../10.240.246.57

The status of the example container can easily be fetched via fleetctl:
$ fleetctl status example.service

● example.service - MyApp
   Loaded: loaded (/run/fleet/units/example.service; linked-runtime)
   Active: active (exited) since Thu 2014-05-22 20:27:54 UTC; 4s ago
  Process: 15789 ExecStart=/usr/bin/docker run busybox /bin/echo 
           I was scheduled with fleet! (code=exited, status=0/SUCCESS)
 Main PID: 15789 (code=exited, status=0/SUCCESS)


May 22 20:27:54 core-01 systemd[1]: Started MyApp.
May 22 20:27:57 core-01 docker[15789]: I was scheduled with fleet!

Using this fundamental tooling you can start building fully distributed applications on top of CoreOS and Google Compute Engine. Check out the CoreOS blog for more examples of using fleet, load balancers and more.
For a complete guide on running CoreOS on Google Compute Engine, head over to the docs. To get help or brag about your awesome CoreOS setup, join us on the mailing list or in IRC.