Posted by Evan Hardesty Parker, Software Engineer
ARCore and Sceneform give developers simple yet powerful tools for creating augmented reality (AR) experiences. In our last update (version 1.6) we focused on making virtual objects appear more realistic within a scene. In version 1.7, we're focusing on creative elements like AR selfies and animation as well as helping you improve the core user experience in your apps.
Example of 3D face mesh application
ARCore's new Augmented Faces API (available on the front-facing camera) offers a high quality, 468-point 3D mesh that lets users attach fun effects to their faces. From animated masks, glasses, and virtual hats to skin retouching, the mesh provides coordinates and region specific anchors that make it possible to add these delightful effects.
You can get started in Unity or Sceneform by creating an ARCore session with the "front-facing camera" and Augmented Faces "mesh" mode enabled. Note that other AR features such as plane detection aren't currently available when using the front-facing camera. AugmentedFace extends Trackable, so faces are detected and updated just like planes, Augmented Images, and other trackables.
// Create an ARCore session that supports Augmented Faces for use in Sceneform.
public Session createAugmentedFacesSession(Activity activity) throws UnavailableException {
  // Use the front-facing (selfie) camera.
  Session session = new Session(activity, EnumSet.of(Session.Feature.FRONT_CAMERA));
  // Enable Augmented Faces.
  Config config = session.getConfig();
  config.setAugmentedFaceMode(Config.AugmentedFaceMode.MESH3D);
  session.configure(config);
  return session;
}
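Because AugmentedFace is a Trackable, detected faces can then be queried from the session each frame, just as you would query planes or Augmented Images. The following is a minimal sketch of that per-frame lookup (the update callback and the rendering code that actually attaches effects are assumed and omitted):

import com.google.ar.core.AugmentedFace;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;
import java.util.Collection;

// Called once per frame: query the faces the session is currently tracking.
void updateFaceEffects(Session session) {
  Collection<AugmentedFace> faces = session.getAllTrackables(AugmentedFace.class);
  for (AugmentedFace face : faces) {
    if (face.getTrackingState() != TrackingState.TRACKING) {
      continue;
    }
    // The center pose and the region-specific poses (e.g. the nose tip) are the
    // anchors used to attach effects such as glasses or virtual hats.
    Pose nosePose = face.getRegionPose(AugmentedFace.RegionType.NOSE_TIP);
    // Attach or update your renderables using nosePose and face.getCenterPose().
  }
}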
Another way version 1.7 expands the AR creative canvas is by letting your objects dance, jump, spin and move around with support for animations in Sceneform. To start an animation, initialize a ModelAnimator (an extension of the existing Android animation support) with animation data from your ModelRenderable.
void startDancing(ModelRenderable andyRenderable) {
  AnimationData data = andyRenderable.getAnimationData("andy_dancing");
  animator = new ModelAnimator(data, andyRenderable);
  animator.start();
}
In ARCore version 1.7 we also focused on helping you improve your user experience with a simplified workflow. We've integrated "ARCore Elements" -- a set of common AR UI components that have been validated with user testing -- into the ARCore SDK for Unity. You can use ARCore Elements to insert AR interactive patterns in your apps without having to reinvent the wheel. ARCore Elements also makes it easier to follow Google's recommended AR UX guidelines.
ARCore Elements includes two AR UI components that are especially useful:
Plane Finding, streamlining the key steps involved in detecting a surface
Object Manipulation, using intuitive gestures to rotate, elevate, move, and resize virtual objects
We plan to add more to ARCore Elements over time. You can download the ARCore Elements app available in the Google Play Store to learn more.
ARCore version 1.7 also includes UX enhancements for the smartphone camera -- specifically, the experience of switching in and out of AR mode. Shared Camera access in the ARCore SDK for Java lets users pause an AR experience, access the camera, and jump back in. This can be particularly helpful if users want to take a picture of the action in your app.
More details are available in the Shared Camera developer documentation and Java sample.
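As a rough sketch of the setup (following the pattern used in the Java sample; the Camera2 plumbing that actually opens and drives the camera is omitted here), the session is created with the SHARED_CAMERA feature and the camera ID is taken from its camera config:

import android.app.Activity;
import com.google.ar.core.Session;
import com.google.ar.core.SharedCamera;
import com.google.ar.core.exceptions.UnavailableException;
import java.util.EnumSet;

// Create a session that shares camera access with the app's own Camera2 code.
public Session createSharedCameraSession(Activity activity) throws UnavailableException {
  Session session = new Session(activity, EnumSet.of(Session.Feature.SHARED_CAMERA));
  // The SharedCamera object wraps the Camera2 callbacks so the same camera
  // device can be used while the AR session is paused (e.g. to take a photo).
  SharedCamera sharedCamera = session.getSharedCamera();
  String cameraId = session.getCameraConfig().getCameraId();
  // Open cameraId with CameraManager/Camera2 using sharedCamera's wrapped
  // callbacks, then resume the session to jump back into AR.
  return session;
}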
For AR experiences to capture users' imaginations they need to be both immersive and easily accessible. With tools for adding AR selfies, animation, and UI enhancements, ARCore version 1.7 can help with both these objectives.
You can learn more about these new updates on our ARCore developer website.
Posted by Ashish Shah, Product Manager, Google AR & VR
The magic of augmented reality is in the way it blends the digital and the physical worlds. For AR experiences to feel truly immersive, digital objects need to look realistic -- as if they were actually there with you, in your space. This is something we continue to prioritize as we update ARCore and Sceneform, our 3D rendering library for Java developers.
Today, with the release of ARCore 1.6, we're bringing further improvements to help you build more realistic and compelling experiences, including better plane boundary tracking and several lighting improvements in Sceneform.
With 250M devices now supporting ARCore, developers can bring these experiences to an even larger and growing user base.
Previous versions of Sceneform defaulted to optimizing ambient light as yellow. Version 1.6 defaults to neutral and white. This aligns more closely to the way light appears in the real world, making digital objects look more natural. You can see the differences below.
This change will also make objects rendered with Sceneform look as if they're affected more naturally by color and lighting in the surrounding environment. For example, if you're viewing an AR object at sunset, it would appear to be illuminated by the red and orange hues, just like real objects in the scene.
In addition, we've updated Sceneform's built-in environmental image to provide a more neutral scene for your app. This will be most noticeable when viewing reflections in smooth metallic surfaces.
To help you further improve quality and engagement in your AR apps, we're adding screen capture and recording to Sceneform. This is something a number of developers have requested to help with demo recording and prototyping. It can also be used as an external facing feature, allowing your users to share screenshots and videos on social media more easily, which can help get the word out about your app.
You can access this functionality through the surface mirroring API for the SceneView class. The API allows you to display the Sceneform view on a device's screen at the same time it's being rendered to another surface (such as the input surface for the Android MediaRecorder).
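Here is a minimal sketch of how recording can be wired up, assuming a MediaRecorder that has already been configured with a SURFACE video source and prepared, and that width and height match the recorder's video size:

import android.media.MediaRecorder;
import android.view.Surface;
import com.google.ar.sceneform.SceneView;

// Start mirroring the SceneView into the recorder's input surface; the scene
// keeps rendering on screen while the same frames are encoded to video.
void startRecording(SceneView sceneView, MediaRecorder recorder, int width, int height) {
  Surface encoderSurface = recorder.getSurface();
  sceneView.startMirroringToSurface(encoderSurface, 0, 0, width, height);
  recorder.start();
}

// Stop mirroring and finish the recording.
void stopRecording(SceneView sceneView, MediaRecorder recorder) {
  sceneView.stopMirroringToSurface(recorder.getSurface());
  recorder.stop();
}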
The new updates to Sceneform and ARCore are available today. With these new versions also comes support for new devices, such as the Samsung Galaxy A3 and the Huawei P20 Lite, that will join the list of ARCore-enabled devices. More information is available on the ARCore developer website.
As developers, we all know that having the right assets is crucial to the success of a 3D application, especially with AR and VR apps. Since we launched Poly a few weeks ago, many developers have been downloading and using Poly models in their apps and games. To make this process easier and more powerful, today we launched the Poly API, which allows applications to dynamically search and download 3D assets at both edit and run time.
The API is REST-based, so it's inherently cross-platform. To help you make the API calls and convert the results into objects that you can display in your app, we provide several toolkits and samples for some common game engines and platforms. Even if your engine or platform isn't included in this list, remember that the API is based on HTTP, which means you can call it from virtually any device that's connected to the Internet.
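For illustration, here is roughly what a raw request could look like from plain Java with no toolkit at all. The endpoint and query parameters below follow the documented listing call but should be treated as an approximation, and YOUR_API_KEY is a placeholder; check the Poly API reference for the exact request shape:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PolyListAssets {
  public static void main(String[] args) throws Exception {
    // List assets matching a keyword, asking for OBJ-format results.
    URL url = new URL(
        "https://poly.googleapis.com/v1/assets?keywords=dog&format=OBJ&key=YOUR_API_KEY");
    HttpURLConnection connection = (HttpURLConnection) url.openConnection();
    StringBuilder json = new StringBuilder();
    try (BufferedReader reader =
        new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
      String line;
      while ((line = reader.readLine()) != null) {
        json.append(line);
      }
    }
    // The JSON response describes matching assets and the URLs of their files,
    // which the app can then download and convert for display.
    System.out.println(json);
  }
}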
Here are some of the things the API allows you to do:
Search for and list assets
Fetch a particular asset and download its files, at edit time or at run time
With the user's authorization, list their own private assets and the assets they've liked on Poly
If you are using Unity, we offer Poly Toolkit for Unity, a plugin that includes all the necessary functionality to automatically wrap the API calls and download and convert assets, exposing it through a simple C# API. For example, you can fetch and import an asset into your scene at runtime with a single line of code:
PolyApi.GetAsset(ASSET_ID, result => {
  PolyApi.Import(result.Value, PolyImportOptions.Default());
});
Poly Toolkit optionally also handles authentication for you, so that you can list the signed in user's own private assets, or the assets that the user has liked on the Poly website.
In addition, Poly Toolkit for Unity also comes with an editor window, where you can search for and import assets from Poly into your Unity scene directly from the editor.
If you are using Unreal, we also offer Poly Toolkit for Unreal, which wraps the API and performs automatic download and conversion of OBJs and Blocks models from Poly. It allows you to query for assets, filter results, download assets, and import them as ready-to-use Unreal actors in your game.
Not using a game engine? No problem! If you are developing for Android, check out our Android sample code. The samples include a basic sample with no external dependencies and a sample that shows how to use the Poly API in conjunction with ARCore.
If you are an iOS developer, we have two samples for you as well: one using SceneKit and one using ARKit, showing how to build an iOS app that downloads and imports models from Poly. This includes all the logic necessary to open an HTTP connection, make the API requests, parse the results, build the 3D objects from the data and place them on the scene.
For web developers, we also offer a complete WebGL sample using Three.js, showing how to get and display a particular asset, or perform searches. There is also a sample showing how to import and display Tilt Brush sketches.
No matter what engine or platform you are using, we hope that the Poly API will help bring high quality assets to your app and help you increase engagement with your users! You can find more information about the Poly API and our toolkits and samples on our developers site.
(Cross-posted from the Chromium Blog)
We’re happy to announce that WebGL is now on by default in Google Chrome’s beta channel, with some shiny new demos to show off what the technology can do.
WebGL is a 3D graphics API for JavaScript that developers can use to create fully 3D web apps. It is based on the OpenGL ES 2.0 API, which should be familiar to many 3D graphics developers. Google, Mozilla, Apple, Opera and graphics hardware vendors have been working together to standardize WebGL for over a year now, and since the spec is just about final at this point, we wanted to get our implementation out there for feedback.
While you may not find much WebGL content on the web, we expect developers to quickly create a lot of content given the power and familiarity of the API. To inspire developers and give users a taste of the kind of apps they can expect in the near future, we’ve worked with a few talented teams to build a few more 3D web apps:
Body Browser, a human anatomy explorer built by a team at Google as a 20% project
Nine Point Five, a 3D earthquake map by Dean McNamee
Music Visualizer, a jukebox that synchronizes 3D graphics to the beat of the music by Jacob Seidelin
You can find these and other demos in the new Chrome Experiments Gallery for WebGL demos. Now that WebGL is enabled in the beta channel, the Chrome Experiments team is looking for your cool WebGL app submissions to show off this slick technology, so don’t forget to submit your cool 3D apps!