We just wrapped up the Google Cloud Platform Roadshows, a series of developer events in 35 cities worldwide, where we reached nearly 4,500 developers spanning the globe, from Texas to Tel Aviv to Tokyo.

Now that the series is finished, we want to thank everyone for coming and share with you the slides and recordings of all our talks from our New York City event.


The Roadshow team sends huge thanks to everyone who attended and looks forward to seeing you next year. We'd love to hear from you in the meantime.


-Posted by Tom Van Waardhuizen, Program Manager

Starting today, the Google Cloud Monitoring Read API is generally available, allowing you to programmatically access metric data from your running services, such as CPU usage or disk I/O. For example, you can use the Cloud Monitoring Read API with Nagios to plug into your existing alerting/event framework, or with Graphite to combine the data with your existing graphs. Third-party providers can also use the API to integrate Google Cloud Platform metrics into their own monitoring services.

The Cloud Monitoring Read API lets you query current and historical metric data for up to the past 30 days. You can also use labels to filter the data down to more specific metrics (e.g., by zone). Currently, the Cloud Monitoring Read API supports reading metric time series data from the following Cloud Platform services:
  • Google Compute Engine - 13 metrics
  • Google Cloud SQL - 12 metrics
  • Google Cloud Pub/Sub - 14 metrics

Our documentation provides a full list of supported metrics. Over time, we will add support for metrics from more Cloud Platform services and enhance the metrics for existing services. You can see an example of usage and try these metrics for yourself on our getting started page. Samples and libraries are also available.

Example: getting CPU usage time series data
GET \
https://www.googleapis.com/cloudmonitoring/v2beta1/ \   # access the API
projects/YOUR_PROJECT_NAME/ \                           # for YOUR_PROJECT_NAME
timeseries/ \                                           # get time series of points
compute.googleapis.com%2Finstance%2Fcpu%2Fusage_time? \ # for the CPU usage metric
youngest=2014-07-11T10%3A29%3A53.108Z& \                # with this latest timestamp
key={YOUR_API_KEY}                                      # using this API key
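
If you prefer to call the API from code rather than building the URL by hand, here is a minimal sketch in Python using the requests library. The project name, API key and timestamp are placeholders, and the field names in the final loop assume the v2beta1 timeseries.list response shape:

import requests

# Placeholders: substitute your own project, API key and timestamp.
project = "YOUR_PROJECT_NAME"
api_key = "YOUR_API_KEY"
metric = "compute.googleapis.com/instance/cpu/usage_time"

# The metric name is URL-encoded into the path, as in the GET example above.
url = ("https://www.googleapis.com/cloudmonitoring/v2beta1/"
       "projects/{0}/timeseries/{1}".format(
           project, requests.utils.quote(metric, safe="")))

resp = requests.get(url, params={
    "youngest": "2014-07-11T10:29:53.108Z",  # latest timestamp to read
    "key": api_key,
})
resp.raise_for_status()

# One entry per matching time series, each carrying its data points.
for series in resp.json().get("timeseries", []):
    print(series["timeseriesDesc"]["metric"], len(series.get("points", [])))
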
Your feedback is important!
We look forward to receiving feedback and suggestions at [email protected].

-Posted by Amir Hermelin, Product Manager

Whether it’s the next viral game, social sharing app or hit SaaS application, the velocity of your innovation is driven by the productivity of your dev team. This week at Google I/O we talked about several new tools that enable developers to understand, diagnose and improve their systems in production.

Cloud Debugger
Today, the state of the art in debugging cloud applications isn't much more than writing out diagnostic messages and spelunking the logs for them. When the right data is not being written to the logs, developers have to make a code change and redeploy the application, which is the last thing you want to do when investigating an issue in production. Traditional debuggers aren't well suited to cloud-based services for two reasons. First, it is difficult to know which process to attach to. Second, stopping a process in production makes an issue hard to reproduce and gives your end users a bad experience.

The Cloud Debugger completely changes this model. It allows developers to start where they know best: in the code. By simply setting a watchpoint on a line of code, the next time a request on any of your servers hits that line, you get a snapshot of all the local variables, parameters, instance variables and a full stack trace. This works no matter how many instances you are running in production. There is zero setup time and no complex configuration to enable. The debugger is ideal for use in production: there is no overhead for enabling it on a project, and when a watchpoint is hit, the performance impact is barely noticeable to your users.
[Screenshot: the Cloud Debugger in action]
Cloud Trace
Performance is an important feature of your service, one that directly correlates with end-user satisfaction and retention. No one intends to build a slow service, but it can be extremely difficult to isolate the root cause of sluggishness when it happens, especially when the issue hits only a fraction of your users.

Cloud Trace helps you visualize and understand the time your application spends processing requests. This enables you to quickly identify and fix performance bottlenecks, and you can even compare performance from release to release with a detailed report. You can leave Cloud Trace enabled in production because it has very little performance overhead.

In this screenshot, we have investigated a particularly slow trace and can see a detailed breakdown of where the time is being spent. It looks like the problem could be these numerous sequential calls to Datastore, so maybe we should consider batching them.
[Screenshot: trace waterfall showing many sequential Datastore calls]
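
In a Python App Engine service, the fix might look like the following sketch, which replaces one Datastore round trip per entity with a single batched call via ndb.get_multi (the list of keys is hypothetical):

from google.appengine.ext import ndb

def fetch_records_one_by_one(keys):
    # Before: one Datastore RPC per key; each get() shows up as a
    # separate sequential call in the trace waterfall.
    return [key.get() for key in keys]

def fetch_records_batched(keys):
    # After: a single batched RPC fetches all the entities at once.
    return ndb.get_multi(keys)
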
So we update our service to batch the Datastore calls and deploy the updated version. Now we can use Cloud Trace to verify the fix.
[Screenshot: trace after batching the Datastore calls]
As a developer, you can easily produce a report that shows the performance change in your service from one release to another. In the following report, the blue graph shows the performance without Datastore batching and the orange graph shows the performance after releasing the change to use Datastore batching. The X-axis of the graph represents the time taken to service requests (logarithmic scale), and the left shift of the orange graph shows the significant performance gain from Datastore batching.
[Screenshot: release-to-release performance comparison report]
Cloud Monitoring, Powered by Stackdriver
Cloud Monitoring provides rich dashboards and alerting capabilities that help developers find and fix performance problems quickly.

With minimal configuration and no separate infrastructure to maintain, Cloud Monitoring provides you with deep visibility into your Cloud Platform services. For example, you can use Cloud Monitoring dashboards to diagnose cases where your customers are reporting slow response times or errors accessing your applications.

Likewise, you can create alerting policies so that you are notified when key metrics, such as latency or error rates, cross a given threshold.

You can configure alerts for any metric in the system, including those related to the performance of Cloud SQL databases, App Engine modules and versions, Pub/Sub topics and subscriptions, and Compute Engine VMs. With Compute Engine VMs, you can create alerts for both core system metrics (CPU, memory, etc.) and application services running in the VMs (Apache, Cassandra, MongoDB, etc.).

You can also create dashboards that make it easier to correlate metrics across services. For example, it takes only a few clicks to create a dashboard that tracks key metrics for an App Engine module that connects to a set of Redis VMs running on Compute Engine.

Finally, you can create endpoint checks to monitor availability and response times for your end-user facing services. Endpoint checks are performed by probes in Oregon, Texas, Virginia, Amsterdam, and Singapore, enabling monitoring of latency from each of these five regions.

SSH to your VM instantly
Sometimes connecting directly to a VM to debug or fix a production issue is inevitable. We know this can be a bit of a pain, especially when you are on the road, so now you can do it from just about anywhere. With our new browser-based SSH client, you can quickly and securely connect to any of your VMs from the Console, with no need to install any SDK or tools. The best part is that this works from any desktop device with most major web browsers.
[Screenshot: the browser-based SSH client]
Ready for a Spin?
All of these features are just about ready for your applications. Stay tuned to this blog; we will post updates as they become more widely available.

-Posted by Brad Abrams, Group Product Manager

Command-line tools are empowering: they deliver speed, control, traceability, and scripting and automation capabilities to the developer’s workflow. Here are a few tips and tricks on how to start working with Google Cloud Platform’s command-line tools and how to get more out of them.

Simplified Tool Installation
For streamlined tool distribution, all Google Cloud Platform command-line tools are bundled in Google Cloud SDK, so you only need to obtain Cloud SDK to get command-line access to App Engine, Compute Engine, Cloud Storage, Cloud SQL, BigQuery and other products.

Here’s how to install Cloud SDK:

  • If you’re running on Windows, download the Cloud SDK zip and launch the install.bat script.
  • If you’re running on Linux or Mac OS, run the following command in your shell/Terminal:
curl https://dl.google.com/dl/cloudsdk/release/install_google_cloud_sdk.bash | bash

When the installer completes, restart your shell/Terminal/Command Prompt window and you should be able to run the main Cloud SDK command-line tool, called gcloud.
Component Management
One of the new features installed with Cloud SDK is a component manager, accessible via gcloud components command group. The component manager allows you to add, remove and update Cloud Platform tools without ever needing to re-download the SDK.

Run gcloud components list to see the list of available or installed components, gcloud components update component-id to install or update a particular component, or gcloud components --help for more information.

Tip: if you’re using more than one programming language for App Engine development, use the Cloud SDK component manager to install multiple App Engine SDKs side-by-side. For example, to add both Python and Java support for App Engine, run gcloud components update gae-python gae-java.

Staying Up-to-Date
To ensure that you always have the latest versions of Cloud Platform tools, Cloud SDK will notify you if any updates are available when you run individual commands. For example, if Cloud SQL releases an update, the following message will be appended after running Cloud SQL commands:

$ gcloud sql instances list
my-developers-console-project:sql-instance

There are available updates for some Cloud SDK components. To install them, please run:
$ gcloud components update

You can then perform the update in place using the component manager by running gcloud components update, as described above.

Tip: if you want to stick with the current version of the tools you have installed, you can disable these notifications by running gcloud config set component_manager/disable_update_check true.

Unified Authentication Across the Whole Google Cloud Platform
Google Cloud SDK provides a unified authentication model across all of the command-line tools. You only need to complete the authentication flow via gcloud auth login once, and you will be authenticated into all tools simultaneously.

Tip: Cloud SDK also allows you to use multiple accounts by repeating the gcloud auth login flow. Run gcloud auth list to see all credentialed accounts, and gcloud config set account <account> to set the active account.

Project Initialization and Push-to-Deploy
Every new project created on Google Developers Console now comes with a free private Git repository, where you can store your source code and use it for your App Engine app deployments via push-to-deploy.

To quickly initialize your local environment, run gcloud init project-id. This single command sets up the version control system, clones the source code onto your machine, and configures push-to-deploy. Here’s a workflow example:


  1. Create “my-awesome-app” application in Google Developers Console.
  2. Install Cloud SDK and run gcloud init my-awesome-app.
  3. Change directory to my-awesome-app/default and add some code. (Have a look at some sample projects if you need inspiration.)
  4. Commit the code by running git commit -a -m "First commit for my awesome app" and deploy it by running git push origin master.
  5. Your app should now be serving live at https://my-awesome-app.appspot.com!

General Usage Tips
Command autocompletion

If you’re running on Mac or Linux and you’ve chosen to enable command-line autocompletion during Cloud SDK’s install, then you should be able to use the Tab key to complete the commands under gcloud in your shell or Terminal.

For example, typing “gclo + Tab + con + Tab + l + Tab” will expand to “gcloud config list”. Similarly, pressing the Tab key twice should show all possible options in ambiguous cases, e.g. typing “gclo + Tab + co + Tab + Tab” should show “config components”.

Tip: this works for flags too! Type “gcloud sql instances create - + Tab + Tab” (don’t forget the dash at the end!) to see all possible flags that you can pass for a new Cloud SQL instance.

Interactive mode

To further simplify scripting and automation, Cloud SDK allows you to call various gcloud commands from Python scripts. To experiment with this, run gcloud interactive to start an interactive Python shell.

For example, to get the currently set default project from gcloud config list (without scraping the console output), run gcloud interactive to get into the interactive Python mode and paste in gcloud.config.list()['core']['project']. The Python interpreter should return the string containing the default project. You can then use a similar approach to read the current project from your own Python scripts.
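
Put together, a minimal session might look like this (reusing the example project name from the Cloud SQL listing above; the gcloud module is preloaded by the interactive shell):

$ gcloud interactive
>>> gcloud.config.list()['core']['project']
'my-developers-console-project'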

See the interactive mode documentation for more details.

Help and support
If you’re ever unsure about how to use one or another gcloud command, try appending “--help” to the end of a command. This works at all command nesting levels, e.g.
  • gcloud --help
  • gcloud sql --help
  • gcloud sql backups --help
  • gcloud sql backups list --help
If you have any further questions, post them on Stack Overflow using the “gcloud” tag, send us a message at [email protected] (our discussions and support forum), or come and chat with us live in the #gcloud channel on the freenode IRC network.

Note that other Cloud Platform command-line tools are still being integrated into gcloud, so stay tuned and check back on our blog soon for more good news about Cloud SDK!

-Posted by Manfred Zabarauskas, Product Manager

Large-scale deployments on Google Compute Engine may consist of hundreds of dynamic and transient components such as instances, disks, networks, and firewall rules. To manage and debug these resources, we often have to look back in time to find answers to questions such as:
  • What was the distribution of instances across zones for the past month?
  • Which instances had production tag T during time range X?
  • Which instances used external IP address X?
  • What was the aggregate size of disks by zone for the past 7 days?

Wouldn’t it be wonderful if there were a timeline you could query using a familiar language such as SQL, with the interactive speed of BigQuery? Building on the Data Pipeline sample application, we have published the Cloud History Tool for Google Compute Engine pipeline, which:
  1. Reads the current Compute Engine instance, disk, and operations data using the Compute Engine REST API (see the sketch after this list).
  2. Transforms the data.
  3. Loads the data into BigQuery.
  4. Uses App Engine Cron Services to repeat steps 1 - 3 at scheduled intervals.
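
As an illustration of step 1, here is a minimal sketch that reads instance data through the Compute Engine REST API with the Python client library, assuming application default credentials; the project and zone are placeholders, and the transform and load steps are handled by the sample application itself:

from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

# Placeholders: substitute your own project and zone.
result = compute.instances().list(
    project='YOUR_PROJECT_NAME', zone='us-central1-a').execute()

for instance in result.get('items', []):
    # Each record is a point-in-time snapshot to transform and load into BigQuery.
    print(instance['name'], instance['status'])
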
Voilà! The timeline is now available in BigQuery, ready for analysis. The sample also includes a SQL query cookbook to help you get started with timeline analysis. For example, you can run the following query from the BigQuery Web Interface to find out which disks were unused and not attached to any instances at a given time:
SELECT D.name, D.zoneName
FROM [cloud_history.Disks] D
LEFT OUTER JOIN FLATTEN([cloud_history.Instances], disks) I
  ON D.name = I.disks.deviceName AND D.snapshotId = I.snapshotId
WHERE
  I.disks.deviceName IS NULL AND
  D.snapshotId = TIMESTAMP("YYYY-MM-DD hh:mm:ss")
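
If you would rather run the cookbook queries from a script than from the web interface, here is a sketch using the google-cloud-bigquery Python client library. Note use_legacy_sql=True, since the query above is written in legacy SQL; the cloud_history dataset name comes from the sample, and a real snapshot timestamp must be substituted for the placeholder:

from google.cloud import bigquery

client = bigquery.Client(project='YOUR_PROJECT_NAME')  # placeholder project
job_config = bigquery.QueryJobConfig(use_legacy_sql=True)

query = """
SELECT D.name, D.zoneName
FROM [cloud_history.Disks] D
LEFT OUTER JOIN FLATTEN([cloud_history.Instances], disks) I
  ON D.name = I.disks.deviceName AND D.snapshotId = I.snapshotId
WHERE I.disks.deviceName IS NULL
  AND D.snapshotId = TIMESTAMP("YYYY-MM-DD hh:mm:ss")
"""

for row in client.query(query, job_config=job_config).result():
    print(row.name, row.zoneName)
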
To display your results graphically, follow this tutorial on “Updating Google Spreadsheets with data from Google BigQuery”. Here is a sample spreadsheet that shows the average number of Compute Engine instances deployed across zones over time.

The instructions for setting up the Cloud History Tool are available here. We invite you to try it out and extend it to suit your needs.

-Posted by Wally Yau, Solutions Architect