Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite
Today, we announced a collection of exciting new features in Google Slides—among these is support for Google Apps Script. Now you can use Apps Script for Slides to programmatically create and modify Slides, plus customize menus, dialog boxes and sidebars in the user interface.
Presentations have come a long way—from casting hand shadows over fires in caves to advances in lighting technology (magic lanterns) to, eventually, (in)famous 35mm slide shows of your Uncle Bob's endless summer vacation. More recently, we have presentation software—like Slides—and developers have been able to write applications to create or update them. This is made even easier with the new Apps Script support for Google Slides. In the latest G Suite Dev Show episode, we demo this new service, walking you through a short example that automatically creates a slideshow from a collection of images.
To keep things simple, the chosen images are already available online, accessible by URL. For each image, a new (blank) slide is added, then the image is inserted. The key to this script is two lines of JavaScript (given an existing presentation and a link to each image):
var slide = presentation.appendSlide(SlidesApp.PredefinedLayout.BLANK);
var image = slide.insertImage(link);
The first line of code adds a new slide while the other inserts an image on the new slide. Both lines are repeated for each image in the collection. While this initial, rudimentary solution works, the slide presentation created doesn't exactly fit the bill. It turns out that adding a few more lines makes the application much more useful. See the video for all the details.
To get started, check the documentation to learn more about Apps Script for Slides, or check out the Translate and Progress Bar sample Add-ons. If you want to dig deeper into the code sample from our video, take a look at the corresponding tutorial. And, if you love watching videos, check out our Apps Script video library or other G Suite Dev Show episodes. If you wish to build applications with Google Slides outside of the Apps Script environment and want to use your own development tools, you can do so with the Slides (REST) API; check out its documentation and video library.
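For those curious what that last option looks like in practice, here is a rough Python sketch (not from the original post) that uses the Slides REST API via the google-api-python-client library to do the same thing as the Apps Script snippet above: append a blank slide per image and insert each image by URL. The credentials handling, presentation ID, and helper name are assumptions for illustration.

from googleapiclient.discovery import build

def add_image_slides(creds, presentation_id, image_urls):
    # Build a Slides API client from existing OAuth credentials (assumed).
    slides = build('slides', 'v1', credentials=creds)
    requests = []
    for i, url in enumerate(image_urls):
        slide_id = 'image_slide_%d' % i
        # Append a blank slide...
        requests.append({'createSlide': {
            'objectId': slide_id,
            'slideLayoutReference': {'predefinedLayout': 'BLANK'}}})
        # ...then place the image on that slide.
        requests.append({'createImage': {
            'url': url,
            'elementProperties': {'pageObjectId': slide_id}}})
    # Send all requests in a single batch update.
    slides.presentations().batchUpdate(
        presentationId=presentation_id,
        body={'requests': requests}).execute()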
With all these options, we look forward to seeing the applications you build with Google Slides!
Originally posted by Paul McReynolds, Product Manager (@pauljmcr), Apps Script and Wesley Chun (@wescpy), Developer Advocate, G Suite on the G Suite Developers Blog
Apps Script is just as popular inside Google as it is among external users and developers. In fact, there are more than 70,000 weekly active scripts written by thousands of Googlers. One of our many uses for Apps Script at Google is to automate and monitor our internal issue tracker.
In spring of this year, we migrated our G Suite issue trackers to a new system based on our internal tracker. This carries a lot of benefits, including improving our ability to track how issues reported from outside of Google relate to bugs and features we're working on internally. We also have an internal Apps Script API that talks to our issue tracker, which we can now use to work with issues reported from outside of Google.
As soon as the migration was finished, we put Apps Script to work monitoring…itself. Now we have a script in place that monitors Apps Script issues as they are reported and upvoted on the public tracker. When we see an issue that's having widespread or sudden impact, the script generates an alert that we can then investigate. With the help of our large, active community of developers, and leveraging Apps Script, we're now able to identify and respond to issues more quickly.
There's no substitute for independent monitoring, and our Apps Script-based approach isn't the first or the last line of defense. Instead, this new script helps us catch anything that our monitoring systems miss by listening to what developers are saying on the tracker.
Please help us keep Apps Script humming! When you notice a problem, search the issue tracker for it and file an issue if it's new. Click the star to let us know you're affected and leave a comment with instructions to reproduce, along with any other relevant details. Those instructions and other details help us respond to the issues more effectively, so please be sure to include them.
Happy scripting!
Coca-Cola's core loyalty program launched in 2006 as MyCokeRewards.com. The "MCR.com" platform included the creation of unique product codes for every Coca-Cola, Sprite, Fanta, and Powerade product sold in 20oz bottles and cardboard "fridge-packs" purchasable at grocery stores and other retail outlets. Users could enter these product codes at MyCokeRewards.com to participate in promotional campaigns.
Fast-forward to 2016: Coke's loyalty programs are still hugely popular with millions of product codes having been entered for promotions and sweepstakes. However, mobile browsing went from non-existent in 2006 to over 50% share by the end of 2016. The launch of Coke.com as a mobile-first web experience (replacing MCR.com) was a response to these changes in browsing behavior. Thumb-entering 14-character codes into a mobile device could be a difficult enough user experience to impact the success of our programs. We want to provide our mobile audience the best possible experience, and recent advances in artificial intelligence opened new opportunities.
For years Coke attempted to use off-the-shelf optical character recognition (OCR) libraries and services to read product codes with little success. Our printing process typically uses low-resolution dot-matrix fonts with the cap or fridge-pack media running under the printhead at very high speeds. All of this translates into a low-fidelity string of characters that defeats off-the-shelf OCR offerings (and can sometimes be hard to read with the human eye as well). OCR is critical to simplifying the code-entry process for mobile users: they should be able to take a picture of a code and automatically have the purchase registered for a promotional entry. We needed a purpose-built OCR system to recognize our product codes.
Our research led us to a promising solution: Convolutional Neural Networks (CNNs). CNNs are one of a family of "deep learning" neural networks that are at the heart of modern artificial intelligence products. Google has used CNNs to extract street address numbers from StreetView images. CNNs also perform remarkably well at recognizing handwritten digits. These number-recognition use cases were a perfect proxy for the type of problem we were trying to solve: extracting strings from images that contain small character sets with lots of variance in the appearance of the characters.
In the past, developing deep neural networks like CNNs has been a challenge because of the complexity of available training and inference libraries. TensorFlow, a machine learning framework that was open sourced by Google in November 2015, is designed to simplify the development of deep neural networks.
TensorFlow provides high-level interfaces to different kinds of neuron layers and popular loss functions, which makes it easier to implement different CNN model architectures. The ability to rapidly iterate over different model architectures dramatically reduced the time required to build Coke's custom OCR solution because different models could be developed, trained, and tested in a matter of days. TensorFlow models are also portable: the framework supports model execution natively on mobile devices ("AI on the edge") or in servers hosted remotely in the cloud. This enables a "create once, run anywhere" approach for model execution across many different platforms, including web-based and mobile.
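To give a flavor of what those high-level interfaces look like, here is a minimal TensorFlow sketch of a small character-classification CNN. The layer sizes and the 32-character alphabet are illustrative assumptions, not the production model.

import tensorflow as tf

def char_cnn(images, labels, num_chars=32):
    # images: [batch, height, width, 1] float32; labels: [batch] int32.
    conv1 = tf.layers.conv2d(images, filters=16, kernel_size=3, activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)
    conv2 = tf.layers.conv2d(pool1, filters=32, kernel_size=3, activation=tf.nn.relu)
    pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2)
    flat = tf.contrib.layers.flatten(pool2)
    logits = tf.layers.dense(flat, units=num_chars)
    # One of the ready-made loss functions mentioned above.
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    return logits, loss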
Any neural network is only as good as the data used to train it. We knew that we needed a large set of labeled product-code images to train a CNN that would achieve our performance goals. Our training set would be built in three phases:
The pre-launch training phase began by programmatically generating millions of simulated product-code images. These simulated images included variations in tilt, lighting, shadows, and blurriness. The prediction accuracy (i.e. how often all 14 characters were correctly predicted within the top-10 predictions) was at 50% against real-world images when the model was trained using only simulated images. This provided a baseline for transfer-learning: a model initially trained with simulated images was the foundation for a more accurate model that would be trained against real-world images.
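In TensorFlow terms, that transfer-learning step amounts to restoring the weights learned on simulated images and continuing training on real-world batches. The sketch below assumes hypothetical helpers (build_model, real_world_batches) and checkpoint paths; it illustrates the pattern rather than the actual pipeline.

import tensorflow as tf

images_ph, labels_ph, train_op = build_model()   # hypothetical graph-construction helper
saver = tf.train.Saver()

with tf.Session() as sess:
    # Start from the weights learned on simulated images...
    saver.restore(sess, "/tmp/simulated_model/model.ckpt")
    # ...then keep training on batches of labeled real-world images.
    for images, labels in real_world_batches():  # hypothetical data source
        sess.run(train_op, feed_dict={images_ph: images, labels_ph: labels})
    saver.save(sess, "/tmp/real_world_model/model.ckpt")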
The challenge now turned to enriching the simulated images with enough real-world images to hit our performance goals. We created a purpose-built training app for iOS and Android devices that "trainers" could use to take pictures of codes and label them; these labeled images were then transferred to cloud storage for training. We did a production run of several thousand product codes on bottle caps and fridge-packs and distributed these to multiple suppliers who used the app to create the initial real-world training set.
Even with an augmented and enriched training set, there is no substitute for images created by end-users in a variety of environmental conditions. We knew that scans would sometimes result in an inaccurate code prediction, so we needed to provide a user-experience that would allow users to quickly correct these predictions. Two components are essential to delivering this experience: a product-code validation service that has been in use since the launch of our original loyalty platform in 2006 (to verify that a predicted code is an actual code) and a prediction algorithm that performs a regression to determine a per-character confidence at each one of the 14 character positions. If a predicted code is invalid, the top prediction as well as the confidence levels for each character are returned to the user interface. Low-confidence characters are visually highlighted to guide the user to update characters that need attention.
This user interface innovation enables an active learning process: a feedback loop allows the model to gradually improve by returning corrected predictions to the training pipeline. In this way, our users organically improve the accuracy of the character recognition model over time.
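Conceptually, the correction flow looks something like the following sketch. The confidence threshold, helper names, and validation callable are illustrative assumptions; the real system's interfaces are not described here in detail.

CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off for "low confidence"

def review_prediction(chars, confidences, is_valid_code):
    """chars/confidences: per-position predictions for the 14-character code."""
    code = "".join(chars)
    if is_valid_code(code):  # stands in for the product-code validation service
        return {"code": code, "needs_review": []}
    # Invalid code: return the top prediction plus the positions the UI should
    # highlight so the user can correct them quickly.
    low_confidence = [i for i, c in enumerate(confidences) if c < CONFIDENCE_THRESHOLD]
    return {"code": code, "needs_review": low_confidence}

Once the user confirms the corrected code, that labeled image can be fed back into the training pipeline, which is the active-learning loop described above.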
To meet user expectations around performance, we established a few ambitious requirements for the product-code OCR pipeline:
We initially explored an architecture that used a single CNN for all product-code media. This approach created a model that was too large to be distributed to mobile apps and the execution time was longer than desired. Our applied-AI partners at Quantiphi, Inc. began iterating on different model architectures, eventually landing on one that used multiple CNNs.
This new architecture reduced the model size dramatically without sacrificing accuracy, but it was still on the high end of what we needed in order to support over-the-air updates to mobile apps. We next used TensorFlow's prebuilt quantization module to reduce the model size by reducing the fidelity of the weights between connected neurons. Quantization reduced the model size by a factor of 4, but a dramatic reduction in model size occurred when Quantiphi had a breakthrough using a new approach called SqueezeNet.
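As a toy illustration of why quantization yields roughly a 4x reduction (this is not TensorFlow's quantization module, just the underlying idea): each float32 weight occupies 4 bytes, while an 8-bit quantized weight, sharing a scale and offset, occupies roughly 1 byte.

import numpy as np

weights = np.random.randn(1000).astype(np.float32)             # 4 bytes per weight
lo, hi = weights.min(), weights.max()
scale = (hi - lo) / 255.0
quantized = np.round((weights - lo) / scale).astype(np.uint8)  # 1 byte per weight

# Approximate reconstruction used at inference time.
dequantized = quantized.astype(np.float32) * scale + lo

print(weights.nbytes, "bytes ->", quantized.nbytes, "bytes")   # 4000 -> 1000
print("max error:", np.abs(weights - dequantized).max())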
The SqueezeNet model was published by a team of researchers from UC Berkeley and Stanford in November of 2016. It uses a small but highly complex design to achieve accuracy levels on par with much larger models against popular benchmarks such as Imagenet. After re-architecting our character recognition models to use a SqueezeNet CNN, Quantiphi was able to reduce the model size of certain media types by a factor of 100. Since the SqueezeNet model was inherently smaller, a richer feature detection architecture could be constructed, achieving much higher accuracy at much smaller sizes compared to our first batch of models trained without SqueezeNet. We now have a highly accurate model that can be easily updated on remote devices; the recognition success rate of our final model before active learning was close to 96%, which translates into a 99.7% character recognition accuracy (just 3 misses for every 1000 character predictions).
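SqueezeNet's key building block is the "fire module": a narrow squeeze layer of 1x1 convolutions feeding an expand layer that mixes 1x1 and 3x3 convolutions. The sketch below shows the idea using TensorFlow's layer API; the filter counts are illustrative, and the exact product-code architecture is not public.

import tensorflow as tf

def fire_module(inputs, squeeze_filters=16, expand_filters=64):
    # Squeeze: 1x1 convolutions cut the channel count (and the parameter count).
    squeeze = tf.layers.conv2d(inputs, squeeze_filters, kernel_size=1,
                               activation=tf.nn.relu)
    # Expand: a mix of 1x1 and 3x3 convolutions, concatenated channel-wise.
    expand_1x1 = tf.layers.conv2d(squeeze, expand_filters, kernel_size=1,
                                  activation=tf.nn.relu)
    expand_3x3 = tf.layers.conv2d(squeeze, expand_filters, kernel_size=3,
                                  padding="same", activation=tf.nn.relu)
    return tf.concat([expand_1x1, expand_3x3], axis=-1)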
Advances in artificial intelligence and the maturity of TensorFlow enabled us to finally achieve a long-sought proof-of-purchase capability. Since launching in late February 2017, our product code recognition platform has fueled more than a dozen promotions and resulted in over 180,000 scanned codes; it is now a core component for all of Coca-Cola North America's web-based promotions.
Moving to an AI-enabled product-code recognition platform has been valuable for two key reasons:
Our product-code recognition platform is the first execution of new AI-enabled capabilities at scale within Coca-Cola. We're now exploring AI applications across multiple lines of business, from new product development to ecommerce retail optimization.
Last month we announced that UK users can access apps for the Google Assistant on Google Home and their phones—and starting today, we're bringing Actions on Google to Australia. From Perth to Sydney, developers can start building apps for the Google Assistant, giving their users even more ways to get things done.
Similar to our launch in the UK, your English apps will appear in the local directory automatically. With that said, there are a few things to help make your app a true blue Aussie:
Our developer tools, documentation and simulator have all been updated to make it easy for you to create, test and deploy your app. So what are you waiting for?
UK and Aussie users are just the start; we'll continue to make the Actions on Google platform available in more languages over the coming year. If you have questions about internationalization, please reach out to us on Stack Overflow and Google+.
TensorFlow 1.3 introduces two important features that you should try out: Datasets, a new way of creating input pipelines, and Estimators, a high-level API for creating TensorFlow models. Combined, they offer an easy way to create TensorFlow models and to feed data to them.
To explore these features we're going to build a model and show you relevant code snippets. The complete code is available here, including instructions for getting the training and test files. Note that the code was written to demonstrate how Datasets and Estimators work functionally, and was not optimized for maximum performance.
The trained model categorizes Iris flowers based on four botanical features (sepal length, sepal width, petal length, and petal width). So, during inference, you can provide values for those four features and the model will predict that the flower is one of the following three beautiful variants: Iris Setosa, Iris Versicolor, or Iris Virginica.
We're going to train a Deep Neural Network Classifier for this task (two hidden layers, each with 10 neurons, as configured in the code further below). All input and output values will be float32, and the sum of the output values will be 1 (as we are predicting the probability for each individual Iris type):
For example, an output result might be 0.05 for Iris Setosa, 0.9 for Iris Versicolor, and 0.05 for Iris Virginica, which indicates a 90% probability that this is an Iris Versicolor.
Alright! Now that we have defined the model, let's look at how we can use Datasets and Estimators to train it and make predictions.
Datasets is a new way to create input pipelines to TensorFlow models. This API is much more performant than using feed_dict or the queue-based pipelines, and it's cleaner and easier to use. Although Datasets still resides in tf.contrib.data at 1.3, we expect to move this API to core at 1.4, so it's high time to take it for a test drive.
At a high level, the Datasets API consists of a base Dataset class representing a sequence of elements, concrete dataset classes that read from different sources (such as TextLineDataset, which we use below to read lines from a text file), and an Iterator that provides access to one dataset element at a time.
To get started, let's first look at the dataset we will use to feed our model. We'll read data from a CSV file, where each row contains five values: the four input values, plus the label. The label will be 0 for Iris Setosa, 1 for Iris Versicolor, or 2 for Iris Virginica.
To describe our dataset, we first create a list of our features:
feature_names = [ 'SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
When we train our model, we'll need a function that reads the input file and returns the feature and label data. Estimators require that you create a function of the following format:
def input_fn():
    ...<code>...
    return ({ 'SepalLength':[values], ..<etc>.., 'PetalWidth':[values] },
            [IrisFlowerType])
The return value must be a two-element tuple organized as follows: the first element is a dict in which each input feature name maps to a list of values (a batch), and the second element is a list of labels for the batch. Since we are returning a batch of input features and training labels, all lists in the return statement will have equal lengths. Technically speaking, whenever we refer to "list" here, we actually mean a 1-d TensorFlow tensor.
To allow simple reuse of the input_fn, we're going to add some arguments to it. This allows us to build input functions with different settings. The arguments are pretty straightforward:
file_path: the data file to read.
perform_shuffle: whether the record order should be randomized.
repeat_count: the number of times to iterate over the records in the dataset (for example, 1 means read the data once).
Here's how we can implement this function using the Dataset API. We will wrap this in an "input function" that is suitable when feeding our Estimator model later on:
def my_input_fn(file_path, perform_shuffle=False, repeat_count=1):
    def decode_csv(line):
        parsed_line = tf.decode_csv(line, [[0.], [0.], [0.], [0.], [0]])
        label = parsed_line[-1:]  # Last element is the label
        del parsed_line[-1]       # Delete last element
        features = parsed_line    # Everything (but last element) are the features
        d = dict(zip(feature_names, features)), label
        return d

    dataset = (tf.contrib.data.TextLineDataset(file_path)  # Read text file
               .skip(1)           # Skip header row
               .map(decode_csv))  # Transform each elem by applying decode_csv fn
    if perform_shuffle:
        # Randomizes input using a window of 256 elements (read into memory)
        dataset = dataset.shuffle(buffer_size=256)
    dataset = dataset.repeat(repeat_count)  # Repeats dataset this many times
    dataset = dataset.batch(32)             # Batch size to use
    iterator = dataset.make_one_shot_iterator()
    batch_features, batch_labels = iterator.get_next()
    return batch_features, batch_labels
Note the following:
TextLineDataset: the Dataset API does a lot of memory management for you when using its file-based datasets; here it reads the CSV file one line at a time.
shuffle: randomizes the order of the records, reading buffer_size records into memory and shuffling within that window.
map: applies a function (here decode_csv) to each element of the dataset.
decode_csv: splits each line into fields (using the supplied defaults where needed) and returns the (features dict, label) pair in the format the Estimator expects.
That's an introduction to Datasets! Just for fun, we can now use this function to print the first batch:
next_batch = my_input_fn(FILE, True)  # Will return 32 random elements

# Now let's try it out, retrieving and printing one batch of data.
# Although this code looks strange, you don't need to understand
# the details.
with tf.Session() as sess:
    first_batch = sess.run(next_batch)
print(first_batch)

# Output
({'SepalLength': array([ 5.4000001, ...<repeat to 32 elems>], dtype=float32),
  'PetalWidth': array([ 0.40000001, ...<repeat to 32 elems>], dtype=float32),
  ...
 },
 [array([[2], ...<repeat to 32 elems>], dtype=int32)  # Labels
)
That's actually all we need from the Dataset API to implement our model. Datasets have a lot more capabilities though; please see the end of this post where we have collected more resources.
Estimators is a high-level API that reduces much of the boilerplate code you previously needed to write when training a TensorFlow model. Estimators are also very flexible, allowing you to override the default behavior if you have specific requirements for your model.
There are two possible ways you can build your model using Estimators: you can use a pre-made Estimator (such as the DNNClassifier we use below), which is a predefined model ready to train on your data, or you can write a custom Estimator when you need full control over how the model is created.
We hope to add more pre-made Estimators in future releases.
All Estimators make use of an input_fn that provides the estimator with input data. In our case, we will reuse my_input_fn, which we defined for this purpose.
The following code instantiates the estimator that predicts the Iris flower type:
# Create the feature_columns, which specifies the input to our model.
# All our input features are numeric, so use numeric_column for each one.
feature_columns = [tf.feature_column.numeric_column(k) for k in feature_names]

# Create a deep neural network classifier.
# Use the DNNClassifier pre-made estimator.
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,  # The input features to our model
    hidden_units=[10, 10],            # Two layers, each with 10 neurons
    n_classes=3,
    model_dir=PATH)                   # Path to where checkpoints etc are stored
We now have an estimator that we can start to train.
Training is performed using a single line of TensorFlow code:
# Train our model, using the previously defined function my_input_fn.
# Input to training is a file with training examples.
# Stop training after 8 iterations of train data (epochs).
classifier.train(
    input_fn=lambda: my_input_fn(FILE_TRAIN, True, 8))
But wait a minute... what is this lambda: my_input_fn(FILE_TRAIN, True, 8) stuff? That is where we hook up Datasets with Estimators! Estimators need data to perform training, evaluation, and prediction, and they use an input_fn to fetch it. The input_fn must take no arguments, so we wrap our my_input_fn in a lambda that calls it with the desired arguments: FILE_TRAIN (the training data file), True (shuffle the records), and 8 (repeat the dataset 8 times).
Ok, so now we have a trained model. How can we evaluate how well it's performing? Fortunately, every Estimator contains an evaluate method:
# Evaluate our model using the examples contained in FILE_TEST.
# The return value will contain evaluation metrics such as loss & average_loss.
evaluate_result = classifier.evaluate(
    input_fn=lambda: my_input_fn(FILE_TEST, False, 4))
print("Evaluation results")
for key in evaluate_result:
    print("   {}, was: {}".format(key, evaluate_result[key]))
In our case, we reach an accuracy of approximately 93%. There are various ways of improving this accuracy, of course. One way is to simply run the program over and over. Since the state of the model is persisted (in model_dir=PATH above), the model will improve the more iterations you train it, until it settles. Another way would be to adjust the number of hidden layers or the number of nodes in each hidden layer, as sketched below. Feel free to experiment with this; please note, however, that when you make a change, you need to remove the directory specified in model_dir=PATH, since you are changing the structure of the DNNClassifier.
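For instance (a sketch, not part of the original walkthrough; NEW_PATH is a placeholder), a deeper network can be tried by changing hidden_units, remembering to point model_dir at a fresh directory since the checkpoint structure changes with the architecture:

classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    hidden_units=[20, 20, 20],  # Three layers, each with 20 neurons
    n_classes=3,
    model_dir=NEW_PATH)         # Use a new/empty directory for the new structure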
And that's it! We now have a trained model, and if we are happy with the evaluation results, we can use it to predict an Iris flower based on some input. As with training and evaluation, we make predictions using a single function call:
# Predict the type of some Iris flowers.
# Let's predict the examples in FILE_TEST, repeat only once.
predict_results = classifier.predict(
    input_fn=lambda: my_input_fn(FILE_TEST, False, 1))
print("Predictions on test file")
for prediction in predict_results:
    # Will print the predicted class, i.e.: 0, 1, or 2 if the prediction
    # is Iris Setosa, Versicolor, or Virginica, respectively.
    print(prediction["class_ids"][0])
The preceding code specified FILE_TEST to make predictions on data stored in a file, but how could we make predictions on data residing in other sources, for example, in memory? As you may guess, this does not actually require a change to our predict call. Instead, we configure the Dataset API to use a memory structure as follows:
# Let's create a memory dataset for prediction.
# We've taken the first 3 examples in FILE_TEST.
prediction_input = [[5.9, 3.0, 4.2, 1.5],  # -> 1, Iris Versicolor
                    [6.9, 3.1, 5.4, 2.1],  # -> 2, Iris Virginica
                    [5.1, 3.3, 1.7, 0.5]]  # -> 0, Iris Setosa

def new_input_fn():
    def decode(x):
        x = tf.split(x, 4)  # Need to split into our 4 features
        # When predicting, we don't need (or have) any labels
        return dict(zip(feature_names, x))  # Then build a dict from them

    # The from_tensor_slices function will use a memory structure as input
    dataset = tf.contrib.data.Dataset.from_tensor_slices(prediction_input)
    dataset = dataset.map(decode)
    iterator = dataset.make_one_shot_iterator()
    next_feature_batch = iterator.get_next()
    return next_feature_batch, None  # In prediction, we have no labels

# Predict all our prediction_input
predict_results = classifier.predict(input_fn=new_input_fn)

# Print results
print("Predictions on memory data")
for idx, prediction in enumerate(predict_results):
    type = prediction["class_ids"][0]  # Get the predicted class (index)
    if type == 0:
        print("I think: {}, is Iris Setosa".format(prediction_input[idx]))
    elif type == 1:
        print("I think: {}, is Iris Versicolor".format(prediction_input[idx]))
    else:
        print("I think: {}, is Iris Virginica".format(prediction_input[idx]))
Dataset.from_tensor_slices() is designed for small datasets that fit in memory. When using TextLineDataset as we did for training and evaluation, you can have arbitrarily large files, as long as your memory can manage the shuffle buffer and batch sizes.
Using a pre-made Estimator like DNNClassifier provides a lot of value. In addition to being easy to use, pre-made Estimators also provide built-in evaluation metrics, and create summaries you can see in TensorBoard. To see this reporting, start TensorBoard from your command-line as follows:
# Replace PATH with the actual path passed as the model_dir argument when the
# DNNClassifier estimator was created.
tensorboard --logdir=PATH
TensorBoard will display data such as the loss and the other evaluation metrics (for example, accuracy and average_loss) recorded over the course of training.
In this blog post, we explored Datasets and Estimators. These are important APIs for defining input data streams and creating models, so investing time to learn them is definitely worthwhile!
For more details, be sure to check out the TensorFlow documentation on the Dataset and Estimator APIs.
But it doesn't stop here. We will shortly publish more posts that describe how these APIs work, so stay tuned for that!
Until then, Happy TensorFlow coding!
You can now use our developer-documentation style guide for open source documentation projects.
For some years now, our technical writers at Google have used an internal-only editorial style guide for most of our developer documentation. In order to better support external contributors to our open source projects, such as Kubernetes, AMP, or Dart, and to allow for more consistency across developer documentation, we're now making that style guide public.
If you contribute documentation to projects like those, you now have direct access to useful guidance about voice, tone, word choice, and other style considerations. It can be useful for general issues, like reminders to use second person, present tense, active voice, and the serial comma; it can also be great for checking very specific issues, like whether to write "app" or "application" when you want to be consistent with the Google Developers style.
The style guide is a reference document, so instead of reading through it in linear order, you can use it to look things up as needed. For matters of punctuation, grammar, and formatting, you can do a search-in-page to find items like "Commas," "Lists," and "Link text" in the left nav. For specific terms and phrases, you can look at the word list.
Keep an eye on the guide's release notes page for updates and developments, and send us your comments and suggestions via the Send Feedback link on each page of the guide—we want to hear from you as we continue to evolve the style guide.
For a virtual scene to be truly immersive, stunning visuals need to be accompanied by true spatial audio to create a realistic and believable experience. Spatial audio tools allow developers to include sounds that can come from any direction, and that are associated in 3D space with audio sources, thus completely enveloping the user in 360-degree sound.
Spatial audio helps draw the user into a scene and creates the illusion of entering an entirely new world. To make this possible, the Chrome Media team has created Songbird, an open source, spatial audio encoding engine that works in any web browser by using the Web Audio API.
The Songbird library takes in any number of mono audio streams and allows developers to programmatically place them in 3D space around the user. Songbird allows you to create immersive soundscapes, realistically reproducing reflection and reverb for the space you describe. Sounds bounce off walls and reflect off materials just as they would in real-life, capturing truly 360-degree sound. Songbird creates an ambisonic soundfield that can then be rendered in real-time for use in your application. We've partnered with the Omnitone project, which we blogged about last year, to add higher-order ambisonic support to Omnitone's binaural renderer to produce far more accurate sounding audio than ever before.
Songbird encapsulates Omnitone, and with it, developers can now add interactive, full-sphere audio to any web-based application. Songbird can scale to any order of ambisonics, thereby creating more realistic sound and higher performance than what is achievable through the standard Web Audio API.
The implementation of Songbird is based on the Google spatial media specification. It expects mono input and outputs ambisonic (multichannel) ACN channel layout with SN3D normalization. Detailed documentation may be found here.
As the web emerges as an important VR platform for delivering content, spatial audio will play a vital role in users' embrace of this new medium. Songbird and Omnitone are key tools in enabling spatial audio on the web platform and establishing it as a preeminent platform for compelling VR experiences. Combining these audio experiences with 3D JavaScript libraries like three.js gives a glimpse into the future on the web.
This project was made possible through close collaboration with Google's Daydream and Web Audio teams. This collaboration allowed us to deliver similar audio capabilities to the web as are available to developers creating Daydream applications.
We look forward to seeing what people do with Songbird now that it's open source. Check out the code on GitHub and let us know what you think. Also available are a number of demos on creating full spherical audio with Songbird.
Launchpad Accelerator gives us an opportunity to work with and empower amazing developers, who are solving major challenges all around the world -- whether it's streamlining digital commerce across Africa, providing access to multimedia tools that support special needs education, or using AI to simplify business operations.
That's why we're doubling down on our efforts and opening up applications for the next class of the program to more countries for the first time starting today. Here's the full list of the new additions:
They'll be joined by our larger list of countries that are already part of the program, including: Argentina, Brazil, Chile, Colombia, Czech Republic, Hungary, India, Indonesia, Kenya, Malaysia, Mexico, Nigeria, Philippines, Poland, South Africa, Thailand, and Vietnam.
The application process for the equity-free program will end on October 2, 2017 at 9AM PST. Later in the year, the selected developers will be invited to the Google Developers Launchpad Space in San Francisco for 2 weeks of all-expense-paid training.
The training at Google HQ includes intensive mentoring from 20+ Google teams, and expert mentors from top technology companies and VCs in Silicon Valley. Participants receive equity-free support, credits for Google products, and PR support, and they continue to work closely with Google back in their home country during the 6-month program. Hear from some alumni about their experiences here.
Each startup that applies to the Launchpad Accelerator is considered holistically and with great care. Below are general guidelines behind our process to help you understand what we look for in our candidates.
All startups in the program must:
Additionally, we are interested in what kind of startup you are. We also consider:
We can't wait to hear from you and see how we can work together to improve your business.
We'd like to share with you some good news about an improvement in the data available via the Google Play Developer API. Starting Monday Aug 28, the API for Purchases.products and Purchases.subscriptions will be returning a couple of new values:
This additional data will be automatically returned to you in the JSON responses to your API calls. Please double-check your integration to make sure these new fields and values will not cause any problems for you.
To view all of the values returned by the APIs, check the Purchases.products and Purchases.subscriptions reference pages.