Posted:

Posted by Chandu Thota, Engineering Director and Matthew Kulick, Product Manager

Just like lighthouses have helped sailors navigate the world for thousands of years, electronic beacons can be used to provide precise location and contextual cues within apps to help you navigate the world. For instance, a beacon can label a bus stop so your phone knows to have your ticket ready, or a museum app can provide background on the exhibit you’re standing in front of. Today, we’re beginning to roll out a new set of features to help developers build apps using this technology. This includes a new open format for Bluetooth low energy (BLE) beacons to communicate with people’s devices, a way for you to add this meaningful data to your apps and to Google services, as well as a way to manage your fleet of beacons efficiently.

Eddystone: an open BLE beacon format

Working closely with partners in the BLE beacon industry, we’ve learned a lot about the needs and the limitations of existing beacon technology. So we set out to build a new class of beacons that addresses real-life use-cases, cross-platform support, and security.

At the core of what it means to be a BLE beacon is the frame format—i.e., a language—that a beacon sends out into the world. Today, we’re expanding the range of use cases for beacon technology by publishing a new and open format for BLE beacons that anyone can use: Eddystone. Eddystone is robust and extensible: It supports multiple frame types for different use cases, and it supports versioning to make introducing new functionality easier. It’s cross-platform, capable of supporting Android, iOS or any platform that supports BLE beacons. And it’s available on GitHub under the open-source Apache v2.0 license, for everyone to use and help improve.
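
The full frame definitions live in the spec on GitHub, but to give a concrete sense of what an Eddystone broadcast involves, here is a minimal Android sketch that advertises an Eddystone-UID frame. Treat it as an illustration only: the 0xFEAA service UUID and frame layout follow the published spec, while the TX power, namespace and instance bytes are placeholder values.

    // Minimal sketch: broadcast an Eddystone-UID frame from an Android device
    // acting as a beacon (API 21+; BLUETOOTH and BLUETOOTH_ADMIN permissions assumed).
    import android.bluetooth.BluetoothAdapter;
    import android.bluetooth.le.AdvertiseCallback;
    import android.bluetooth.le.AdvertiseData;
    import android.bluetooth.le.AdvertiseSettings;
    import android.bluetooth.le.BluetoothLeAdvertiser;
    import android.os.ParcelUuid;
    import java.nio.ByteBuffer;

    public class EddystoneUidAdvertiser {
      // Eddystone frames travel as service data under the 16-bit UUID 0xFEAA.
      private static final ParcelUuid EDDYSTONE_SERVICE_UUID =
          ParcelUuid.fromString("0000FEAA-0000-1000-8000-00805F9B34FB");

      public void startAdvertising(BluetoothAdapter adapter) {
        BluetoothLeAdvertiser advertiser = adapter.getBluetoothLeAdvertiser();

        // UID frame: frame type (0x00), calibrated TX power, 10-byte namespace,
        // 6-byte instance, 2 reserved bytes. The IDs below are placeholders.
        ByteBuffer frame = ByteBuffer.allocate(20);
        frame.put((byte) 0x00);                                     // Frame type: UID
        frame.put((byte) 0xEB);                                     // TX power at 0 m (example)
        frame.put(new byte[] {0x01, 0x02, 0x03, 0x04, 0x05,
                              0x06, 0x07, 0x08, 0x09, 0x0A});       // Namespace (placeholder)
        frame.put(new byte[] {0x00, 0x00, 0x00, 0x00, 0x00, 0x01}); // Instance (placeholder)
        frame.put(new byte[] {0x00, 0x00});                         // Reserved

        AdvertiseData data = new AdvertiseData.Builder()
            .addServiceUuid(EDDYSTONE_SERVICE_UUID)
            .addServiceData(EDDYSTONE_SERVICE_UUID, frame.array())
            .setIncludeDeviceName(false)
            .build();

        AdvertiseSettings settings = new AdvertiseSettings.Builder()
            .setAdvertiseMode(AdvertiseSettings.ADVERTISE_MODE_LOW_LATENCY)
            .setConnectable(false)
            .build();

        advertiser.startAdvertising(settings, data, new AdvertiseCallback() {});
      }
    }

A scanner on any platform can then pick the beacon out of its scan results by filtering on the same 0xFEAA service UUID and switching on the first service-data byte to determine the frame type.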

By design, a beacon is meant to be discoverable by any nearby Bluetooth Smart device, via its identifier which is a public signal. At the same time, privacy and security are really important, so we built in a feature called Ephemeral Identifiers (EIDs) which change frequently, and allow only authorized clients to decode them. EIDs will enable you to securely do things like find your luggage once you get off the plane or find your lost keys. We’ll publish the technical specs of this design soon.


Eddystone for developers: Better context for your apps

Eddystone offers two key developer benefits: better semantic context and precise location. To support these, we’re launching two new APIs. The Nearby API for Android and iOS makes it easier for apps to find and communicate with nearby devices and beacons, such as a specific bus stop or a particular art exhibit in a museum, providing better context. And the Proximity Beacon API lets developers associate semantic location (i.e., a place associated with a lat/long) and related data with beacons, stored in the cloud. This API will also be used in existing location APIs, such as the next version of the Places API.

Eddystone for beacon manufacturers: Single hardware for multiple platforms

Eddystone’s extensible frame formats allow hardware manufacturers to support multiple mobile platforms and application scenarios with a single piece of hardware. An existing BLE beacon can be made Eddystone-compliant with a simple firmware update. Because we built Eddystone as an open, extensible, and interoperable protocol, we’ll also introduce an Eddystone certification process in the near future, working closely with hardware manufacturing partners. We already have a number of partners that have built Eddystone-compliant beacons.

Eddystone for businesses: Secure and manage your beacon fleet with ease

As businesses move from validating their beacon-assisted apps to deploying beacons at scale in places like stadiums and transit stations, hardware installation and maintenance can be challenging: which beacons are working, broken, missing or displaced? So starting today, beacons that implement Eddystone’s telemetry frame (Eddystone-TLM) in combination with the Proximity Beacon API’s diagnostic endpoint can help deployers monitor their beacons’ battery health and displacement—common logistical challenges with low-cost beacon hardware.
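
As a rough illustration of what that telemetry contains, here is a sketch that decodes the battery, temperature and uptime fields from the service data of an unencrypted TLM frame; the field layout follows the published Eddystone-TLM spec.

    // Sketch: decode an unencrypted Eddystone-TLM frame from the 0xFEAA service
    // data of a scanned advertisement. Field layout per the published spec.
    import java.nio.ByteBuffer;

    public final class TlmFrame {
      public final double batteryVolts;   // battery voltage
      public final double temperatureC;   // beacon temperature in degrees Celsius
      public final long advertisingCount; // advertising PDUs sent since power-on
      public final long uptimeSeconds;    // time since power-on

      private TlmFrame(double batteryVolts, double temperatureC,
                       long advertisingCount, long uptimeSeconds) {
        this.batteryVolts = batteryVolts;
        this.temperatureC = temperatureC;
        this.advertisingCount = advertisingCount;
        this.uptimeSeconds = uptimeSeconds;
      }

      public static TlmFrame parse(byte[] serviceData) {
        ByteBuffer buf = ByteBuffer.wrap(serviceData);       // big-endian by default
        byte frameType = buf.get();
        byte version = buf.get();
        if (frameType != (byte) 0x20 || version != 0x00) {
          throw new IllegalArgumentException("Not a plain TLM frame");
        }
        double volts = (buf.getShort() & 0xFFFF) / 1000.0;   // VBATT, 1 mV per bit
        double temp = buf.getShort() / 256.0;                // signed 8.8 fixed point
        long advCount = buf.getInt() & 0xFFFFFFFFL;          // ADV_CNT
        long uptime = (buf.getInt() & 0xFFFFFFFFL) / 10;     // SEC_CNT, 0.1 s resolution
        return new TlmFrame(volts, temp, advCount, uptime);
      }
    }

A deployer's app can log these values, or report them through the Proximity Beacon API's diagnostic endpoint, to spot beacons whose batteries are running down.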

Eddystone for Google products: New, improved user experiences

We’re also starting to improve Google’s own products and services with beacons. Google Maps launched beacon-based transit notifications in Portland earlier this year, to help people get faster access to real-time transit schedules for specific stations. And soon, Google Now will also be able to use this contextual information to help prioritize the most relevant cards, like showing you menu items when you’re inside a restaurant.

We want to make beacons useful even when a mobile app is not available; to that end, the Physical Web project will be using Eddystone beacons that broadcast URLs to help people interact with their surroundings.
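
To show how little a URL beacon actually broadcasts, here is a sketch that packs a URL into the Eddystone-URL service-data bytes. The frame type and URL-scheme prefix codes follow the published spec; for brevity the sketch skips the optional single-byte expansion codes the spec defines for common suffixes such as ".com/".

    // Sketch: build the service-data bytes for an Eddystone-URL frame, as used by
    // the Physical Web. Only the URL-scheme prefix byte is handled here.
    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;

    public class EddystoneUrlFrame {
      public static byte[] encode(String url, byte txPowerAt0m) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0x10);                 // Frame type: URL
        out.write(txPowerAt0m);          // Calibrated TX power at 0 m
        if (url.startsWith("https://www.")) {
          out.write(0x01);
          url = url.substring("https://www.".length());
        } else if (url.startsWith("http://www.")) {
          out.write(0x00);
          url = url.substring("http://www.".length());
        } else if (url.startsWith("https://")) {
          out.write(0x03);
          url = url.substring("https://".length());
        } else if (url.startsWith("http://")) {
          out.write(0x02);
          url = url.substring("http://".length());
        } else {
          throw new IllegalArgumentException("URL must start with http(s)://");
        }
        byte[] rest = url.getBytes(StandardCharsets.US_ASCII);
        out.write(rest, 0, rest.length); // Remaining URL as plain ASCII (uncompressed)
        return out.toByteArray();
      }
    }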

Beacons are an important way to deliver better experiences for users of your apps, whether you choose to use Eddystone with your own products and services or as part of a broader Google solution like the Places API or Nearby API. The ecosystem of app developers and beacon manufacturers is important in pushing these technologies forward and the best ideas won’t come from just one company, so we encourage you to get some Eddystone-supported beacons today from our partners and begin building!

Update (July 16, 2015, 11:30am PST): To clarify, beacons registered with proper place identifiers (as defined in our Places API) will be used in Place Picker. You have to use the Proximity Beacon API to map a beacon to a place identifier. See the post on Google's Geo Developer Blog for more details.

Posted:

Posted by Akshay Kannan, Product Manager

Mobile phones have made it easy to communicate with anyone, whether they’re right next to you or on the other side of the world. The great irony, however, is that those interactions can often feel really awkward when you're sitting right next to someone.

Today, it takes several steps -- whether it’s exchanging contact information, scanning a QR code, or pairing via Bluetooth -- to get a simple piece of information to someone right next to you. Ideally, you should be able to just turn to them and do so, the same way you do in the real world.

This is why we built Nearby. Nearby provides a proximity API, Nearby Messages, for iOS and Android devices to discover and communicate with each other, as well as with beacons.

Nearby uses a combination of Bluetooth, Wi-Fi, and inaudible sound (using the device’s speaker and microphone) to establish proximity. We’ve incorporated Nearby technology into several products, including Chromecast Guest Mode, Nearby Players in Google Play Games, and Google Tone.

With the latest release of Google Play services 7.8, the Nearby Messages API becomes available to all developers across iOS and Android devices (Gingerbread and higher). Nearby doesn’t use or require a Google Account. The first time an app calls Nearby, users get a permission dialog to grant that app access.
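
Here is a minimal sketch of that flow using the GoogleApiClient surface that ships in Google Play services 7.8. It is an outline only: error handling, the opt-in resolution mentioned above, and lifecycle wiring are omitted, and the payload is a placeholder.

    // Minimal sketch: publish a small payload and listen for nearby messages using
    // the Nearby Messages API (GoogleApiClient surface from Google Play services 7.8).
    import android.content.Context;
    import com.google.android.gms.common.api.GoogleApiClient;
    import com.google.android.gms.nearby.Nearby;
    import com.google.android.gms.nearby.messages.Message;
    import com.google.android.gms.nearby.messages.MessageListener;

    public class NearbyShare {
      private final GoogleApiClient client;
      private final Message payload = new Message("hello, neighbor".getBytes()); // placeholder

      private final MessageListener listener = new MessageListener() {
        @Override
        public void onFound(Message message) {
          // A nearby device (or beacon attachment) publishing this message was found.
          String text = new String(message.getContent());
        }

        @Override
        public void onLost(Message message) {
          // The message is no longer detectable nearby.
        }
      };

      public NearbyShare(Context context) {
        client = new GoogleApiClient.Builder(context)
            .addApi(Nearby.MESSAGES_API)
            .build();
      }

      public void connect() {
        client.connect();
      }

      // Call once the GoogleApiClient reports it is connected.
      public void publishAndSubscribe() {
        Nearby.Messages.publish(client, payload);
        Nearby.Messages.subscribe(client, listener);
      }

      public void stop() {
        Nearby.Messages.unpublish(client, payload);
        Nearby.Messages.unsubscribe(client, listener);
        client.disconnect();
      }
    }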

A few of our partners have built creative experiences to show what's possible with Nearby.

Edjing Pro uses Nearby to let DJs publish their tracklist to people around them. The audience can vote on tracks that they like, and their votes are updated in real time.

Trello uses Nearby to simplify sharing. Share a Trello board to the people around you with a tap of a button.

Pocket Casts uses Nearby to let you find and compare podcasts with people around you. Open the Nearby tab in Pocket Casts to view a list of podcasts that people around you have, as well as podcasts that you have in common with others.

Trulia uses Nearby to simplify the house hunting process. Create a board and use Nearby to make it easy for the people around you to join it.

To learn more, visit developers.google.com/nearby.

Posted:

Posted by Addy Osmani, Staff Developer Platform Engineer

Back in 2014, Google published the material design specification with the goal of providing guidelines for good design and beautiful UI across all device form factors. Today we are releasing our first effort to bring this to websites using vanilla CSS, HTML and JavaScript. We’re calling it Material Design Lite (MDL).

MDL makes it easy to add a material design look and feel to your websites. The “Lite” part of MDL comes from several key design goals: MDL has few dependencies, making it easy to install and use. It is framework-agnostic, meaning MDL can be used with any of the rapidly changing landscape of front-end tool chains. MDL has a low overhead in terms of code size (~27KB gzipped), and a narrow focus—enabling material design styling for websites.

Get started now and give it a spin or try one of our examples on CodePen.

MDL is a complementary implementation to the Paper elements built with Polymer. The Paper elements are fully encapsulated components that can be used individually or composed together to create a material design-style site, and support more advanced user interaction. That said, MDL can be used alongside the Polymer element counterparts.

Out-of-the-box Templates

MDL is optimized for content-heavy websites such as marketing pages, text articles and blogs. We've built responsive templates to show the breadth of sites that can be created using MDL; you can download them from our Templates page. We hope these inspire you to build great-looking sites.

The templates cover blogs, text-heavy content sites, dashboards, standalone articles, and more.

Technical details and browser support

MDL includes a rich set of components, including material design buttons, text-fields, tooltips, spinners and many more. It also includes a responsive grid and breakpoints that adhere to the new material design adaptive UI guidelines.

The MDL sources are written in Sass using BEM. While we hope you'll use our theme customizer or pre-built CSS, you can also download the MDL sources from GitHub and build your own version. The easiest way to use MDL is by referencing our CDN, but you can also download the CSS or import MDL via npm or Bower.

The complete MDL experience works in all modern evergreen browsers (Chrome, Firefox, Opera, Edge) and Safari, but gracefully degrades to CSS-only in browsers like IE9 that don’t pass our Cutting-the-mustard test. Our browser compatibility matrix has the most up to date information on the browsers MDL officially supports.

More questions?

We've been working with the designers evolving material design to build in additional thinking for the web. This includes working on solutions for responsive templates, high-performance typography and missing components like badges. MDL is spec-compliant today and provides guidance on aspects of the spec that are still evolving. As with the material design spec itself, your feedback and questions will help us evolve MDL, and in turn, how material design works on the web.

We’re sure you have plenty of questions and we have tried to cover some of them in our FAQ. Feel free to hit us up on GitHub or Stack Overflow if you have more. :)

Wrapping up

MDL is built on the core technologies of the web you already know and use every day—CSS, HTML and JS. By adopting MDL into your projects, you gain access to an authoritative and highly curated implementation of material design for the web. We can’t wait to see the beautiful, modern, responsive websites you’re going to build with Material Design Lite.

Posted:

Posted by Leon Nicholls, Developer Programs Engineer

Remote Display on Google Cast allows your app to display on both your mobile device and your Cast device at the same time. Processing is a programming language that allows artists and hobbyists to create advanced graphics and interactive exhibitions. By putting these two things together we were able to quickly create stunning visual art and display it on the big screen just by bringing our phone to the party or gallery. This article describes how we added support for the Google Cast Remote Display APIs to Processing for Android and how you can too.

An example app from the popular Processing toxiclibs library on Cast. Download the code and run it on your own Chromecast!

A little background

Processing has its own IDE and has many contributed libraries that hide the technical details of various input, output and rendering technologies. Users of Processing with just basic programming skills can create complicated graphical scenes and visualizations.

To write a program in the Processing IDE you create a “sketch,” which involves adding code to life-cycle callbacks that initialize and draw the scene. You can run the sketch as a Java program on your desktop. You can also enable Processing for Android and then run the same sketch as an app on your Android mobile device. Processing for Android also supports touch events and sensor data for interacting with the generated apps.

Instead of just viewing the graphics on the small screen of the Android device, we can do better by projecting the graphics on a TV screen. The Google Cast Remote Display APIs make it easy to bring graphically intensive apps to Google Cast receivers by using the GPUs, CPUs and sensors available on the mobile devices you already have.

How we did it

Adding support for Remote Display involved modifying the Processing for Android Mode source code. To compile the Android Mode you first need to compile the source code of the Processing IDE. We started with the source code of the current stable release version 2.2.1 of the Processing IDE and compiled it using its Ant build script (detailed instructions are included along with the code download). We then downloaded the Android SDK and source code for the Android Mode 0232. After some minor changes to its build config to support the latest Android SDK version, we used Ant to build the Android Mode zip file. The zip file was unzipped into the Processing IDE modes directory.

We then used the IDE to open one of the Processing example sketches and exported it as an Android project. In the generated project we replaced the processing-core.jar library with the source code for Android Mode. We also added a Gradle build config to the project and then imported the project into Android Studio.

The main Activity for a Processing app is a descendant of the Android Mode PApplet class. The PApplet class uses a GLSurfaceView for rendering 2D and 3D graphics. We needed to change the code to use that same GLSurfaceView for the Remote Display API.

It is a requirement in the Google Cast Design Checklist for the Cast button to be visible on all screens. We changed PApplet to be an ActionBarActivity so that we can show the Cast button in the action bar. The Cast button was added by using a MediaRouteActionProvider. To only list Google Cast devices that support Remote Display, we used a MediaRouteSelector with an App ID we obtained from the Google Cast SDK Developer Console for a Remote Display Receiver.
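
For reference, here is roughly what that wiring looks like with the support-library MediaRouter classes; the App ID string is a placeholder for the one obtained from the Google Cast SDK Developer Console.

    // Sketch: show the Cast button and restrict it to Remote Display receivers
    // registered under our App ID. REMOTE_DISPLAY_APP_ID is a placeholder.
    import android.content.Context;
    import android.support.v4.view.MenuItemCompat;
    import android.support.v7.app.MediaRouteActionProvider;
    import android.support.v7.media.MediaRouteSelector;
    import android.support.v7.media.MediaRouter;
    import android.view.Menu;
    import android.view.MenuItem;
    import com.google.android.gms.cast.CastMediaControlIntent;

    public class CastButtonSetup {
      private static final String REMOTE_DISPLAY_APP_ID = "YOUR_APP_ID"; // placeholder

      private MediaRouter mediaRouter;
      private MediaRouteSelector mediaRouteSelector;

      public void init(Context context) {
        mediaRouter = MediaRouter.getInstance(context);
        // Only list Cast devices that can launch our Remote Display receiver.
        mediaRouteSelector = new MediaRouteSelector.Builder()
            .addControlCategory(CastMediaControlIntent.categoryForCast(REMOTE_DISPLAY_APP_ID))
            .build();
      }

      // Called from the Activity's onCreateOptionsMenu, after inflating a menu whose
      // Cast item declares MediaRouteActionProvider as its action provider.
      public void attachCastButton(Menu menu, int castMenuItemId) {
        MenuItem item = menu.findItem(castMenuItemId);
        MediaRouteActionProvider provider =
            (MediaRouteActionProvider) MenuItemCompat.getActionProvider(item);
        provider.setRouteSelector(mediaRouteSelector);
      }
    }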

Next, we created a class called PresentationService that extends CastRemoteDisplayLocalService. The service allows the app to keep the remote display running even when it goes into the background. The service requires a CastPresentation instance for displaying its content. The CastPresentation instance uses the GLSurfaceView from the PApplet class for its content view. However, setting the CastPresentation content view requires some changes to PApplet so that the GLSurfaceView isn’t initialized in its onCreate, but waits until the service onRemoteDisplaySessionStarted callback is invoked.
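
A trimmed-down sketch of that service is shown below. The callback names follow the public Remote Display sample, but treat the details as illustrative; in particular, getSketchSurfaceView() is a stand-in for however the PApplet exposes its GLSurfaceView once the session has started.

    // Sketch of the service described above: it owns a CastPresentation whose
    // content view is the sketch's existing GLSurfaceView.
    import android.content.Context;
    import android.opengl.GLSurfaceView;
    import android.os.Bundle;
    import android.view.Display;
    import com.google.android.gms.cast.CastPresentation;
    import com.google.android.gms.cast.CastRemoteDisplayLocalService;

    public class PresentationService extends CastRemoteDisplayLocalService {
      private CastPresentation presentation;

      @Override
      public void onCreatePresentation(Display display) {
        // The remote display session is up; attach our rendering surface to it.
        presentation = new ProcessingPresentation(this, display);
        presentation.show();
      }

      @Override
      public void onDismissPresentation() {
        if (presentation != null) {
          presentation.dismiss();
          presentation = null;
        }
      }

      /** Wraps the sketch's GLSurfaceView in a Presentation shown on the TV. */
      private static class ProcessingPresentation extends CastPresentation {
        ProcessingPresentation(Context context, Display display) {
          super(context, display);
        }

        @Override
        protected void onCreate(Bundle savedInstanceState) {
          super.onCreate(savedInstanceState);
          GLSurfaceView surfaceView = getSketchSurfaceView();
          setContentView(surfaceView);
        }

        private GLSurfaceView getSketchSurfaceView() {
          // Placeholder: obtain the GLSurfaceView that the PApplet creates once
          // the remote display session has started.
          throw new UnsupportedOperationException("wire up to the sketch's surface view");
        }
      }
    }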

When the user selects a Cast device in the Cast button menu and the MediaRouter onRouteSelected event is called, the service is started with CastRemoteDisplayLocalService.startService. When the user disconnects from a Cast device using the Cast button, the MediaRouter onRouteUnselected event is called and the service is stopped by calling CastRemoteDisplayLocalService.stopService.
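
In code, that start/stop handling looks roughly like the following; the App ID is again a placeholder, and the Callbacks implementation (whose onRemoteDisplaySessionStarted is where the sketch's surface gets handed over) is supplied by the caller.

    // Sketch: start the remote display service on route selection and stop it when
    // the route is unselected.
    import android.app.Activity;
    import android.app.PendingIntent;
    import android.content.Intent;
    import android.support.v7.media.MediaRouter;
    import com.google.android.gms.cast.CastDevice;
    import com.google.android.gms.cast.CastRemoteDisplayLocalService;

    public class RemoteDisplayRouteCallback extends MediaRouter.Callback {
      private static final String REMOTE_DISPLAY_APP_ID = "YOUR_APP_ID"; // placeholder

      private final Activity activity;
      private final CastRemoteDisplayLocalService.Callbacks sessionCallbacks;

      public RemoteDisplayRouteCallback(Activity activity,
          CastRemoteDisplayLocalService.Callbacks sessionCallbacks) {
        this.activity = activity;
        this.sessionCallbacks = sessionCallbacks;
      }

      @Override
      public void onRouteSelected(MediaRouter router, MediaRouter.RouteInfo route) {
        CastDevice castDevice = CastDevice.getFromBundle(route.getExtras());

        // Remote Display requires a notification that can bring the sender app back.
        PendingIntent notificationIntent = PendingIntent.getActivity(
            activity, 0, new Intent(activity, activity.getClass()), 0);
        CastRemoteDisplayLocalService.NotificationSettings settings =
            new CastRemoteDisplayLocalService.NotificationSettings.Builder()
                .setNotificationPendingIntent(notificationIntent)
                .build();

        CastRemoteDisplayLocalService.startService(activity, PresentationService.class,
            REMOTE_DISPLAY_APP_ID, castDevice, settings, sessionCallbacks);
      }

      @Override
      public void onRouteUnselected(MediaRouter router, MediaRouter.RouteInfo route) {
        CastRemoteDisplayLocalService.stopService();
      }
    }

The callback itself is registered with MediaRouter.addCallback using the selector shown earlier, so discovery only surfaces devices capable of Remote Display.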

For the mobile display, we display an image bitmap and forward the PApplet touch events to the existing surfaceTouchEvent method. When you run the Android app, you can use touch gestures on the display of the mobile device to control the interaction on the TV. Take a look at this video of some of the Processing apps running on a Chromecast.

Most of the new code is contained in the PresentationService and RemoteDisplayHelper classes. Your mobile device needs to have at least Android KitKat and Google Play services version 7.5.71.

You can too

Now you can try the Remote Display APIs in your Processing apps. Instead of changing the generated code every time you export your Android Mode project, we recommend that you use our project as a base and copy your generated Android code and libraries into it. Then modify the project build file and update the manifest to start the app with your sketch’s main Activity.

To see a more detailed description on how to use the Remote Display APIs, read our developer documentation. We are eager to see what Processing artists can do with this code in their projects.

Posted:

We know that developers are always interested in learning about new APIs and best practices for existing ones. And one of the best ways to learn is face-to-face interaction with an expert on the subject.

Your friendly neighborhood Google Developer Relations team members work every day with the APIs you care about. We host, as well as attend, a number of events around the world to help as many developers as possible throughout the year. However, it hasn’t been easy for interested developers to find relevant events close to them.

We also realized that while many developers have met at least a couple of our Developer Advocates, it’s hard to tie an Advocate to their API expertise.

Enter the Advocate Bios and Developer Events pages.

The Advocate Bios page provides names, pictures and short descriptions of Developer Relations team members. You can filter them by what they work on and/or where they’re based.

The Developer Events page is a mashup of the Calendar and Maps APIs, running on an App Engine backend. Want to know about upcoming Android events in Prague? Or whether Patrick Chanezon is speaking at the GDD in Munich on Nov 9th? (He is!) You can do all of that and more with the Developer Events page.

Both the bios and the events pages are conveniently linked under the Developer Resources section on the Google Code home page.

We hope to see you at the events!

Posted:
If you're reading this post, we know your passion is coding. You thrive when given the opportunity to tackle a challenge, and enjoy the rush of applying your knowledge and creativity to approach a problem. Once solved, there's nothing like the satisfaction that comes from knowing you've accomplished something great.

That's why we are excited to announce Google Code Jam 2010 to the true die-hard coding fans. Google Code Jam, powered by Google App Engine, is our annual programming competition, where thousands of coders around the world attack algorithmic problems in several 2.5-hour online rounds. If you make it through the first four rounds, you'll be flown to our on-site finals, to be held for the first time at the Google office in Dublin! Once there, you will compete with 24 other top coders for the $5,000 first prize -- and the coveted title of Code Jam champion.

We don't want you to miss out on any of the action, so we are announcing some important dates for Google Code Jam 2010. Mark your calendars:

Wednesday, April 7, 2010 | 19:00 UTC | Registration Begins
Friday, May 7, 2010 | 23:00 UTC | 24-hr Qualification Round Begins
Saturday, May 8, 2010 | 23:00 UTC | Registration Deadline & 24-hr Qualification Round Ends
Saturday, May 22, 2010 | 1:00 UTC | Online Round 1: Sub-Round A
Saturday, May 22, 2010 | 16:00 UTC | Online Round 1: Sub-Round B
Sunday, May 23, 2010 | 9:00 UTC | Online Round 1: Sub-Round C
Saturday, June 05, 2010 | 14:00 UTC | Online Round 2
Saturday, June 12, 2010 | 14:00 UTC | Online Round 3
Friday, July 30, 2010 | Google Office - Dublin, Ireland | Onsite FINALS

In the meantime, visit the Google Code Jam site and try out some of the practice problems so that you'll be ready to go once we kick off the qualification round. Hope to see you in Dublin on July 30th!

Posted:
I'm excited to announce that registration for Google I/O is now open at code.google.com/io. Our third annual developer conference will return to Moscone West in San Francisco on May 19-20, 2010. We expect thousands of web, mobile, and enterprise developers to be in attendance.

I/O 2010 will be focused on building the next generation of applications in the cloud and will feature the latest on Google products and technologies like Android, Google Chrome, App Engine, Google Web Toolkit, Google APIs, and more. Members of our engineering teams and other web development experts will lead more than 80 technical sessions. We'll also bring back the Developer Sandbox, which we introduced at I/O 2009, where developers from more than 100 companies will be on hand to demo their apps, answer questions and exchange ideas.

We'll be regularly adding more sessions, speakers and companies to the event website, and today we're happy to give you a preview of what's to come. Over half of all sessions are already listed, covering a range of products and technologies, along with speaker bios. We've also included a short list of companies that will be participating in the Developer Sandbox. For the latest I/O updates, follow us (@googleio) on Twitter.

Today's registration opens with an early bird rate of $400, which applies through April 16 ($500 after April 16). Faculty and students can register at the discounted Academia rate of $100 (this discounted rate is limited and available on a first-come, first-served basis).

Last year's I/O sold out before the start of the conference, so we encourage you to sign up in advance.

Google I/O
May 19-20, 2010
Moscone West, San Francisco

To learn more and sign up, visit code.google.com/io.

We hope to see you in May!

(Cross-posted with the Official Google Blog)

Posted:
A couple of days ago, Google welcomed Don Dodge to our Developer Relations team, where he joins us as a Developer Advocate working with developers, startups, and other Google Apps partners. We're expecting Don to be a fantastic addition to our team. He's already a prominent voice in the developer community, well known and highly regarded among entrepreneurs, technologists, and the media.

In the TechCrunch post first announcing Don's availability, Michael Arrington wrote that Don "makes a big effort to give young startups the attention they deserve. This is a guy who gives a heck of a lot more to the community than he ever takes back." This dedication to the community of developers and the businesses they build is one of the things that excites us the most about having Don on our team. These businesses have been central to Google's success over the years, so we already know that Don's attitude will fit right in with our efforts. Don has deep experience working in startups from his days at companies like AltaVista, Napster, and Groove Networks, and he has maintained his connection to and passion for that community since leaving their ranks to join Microsoft, and now Google. We are eager for Don to share his personal experience and professional insights with developers and small businesses integrating with Google Apps, and to be an advocate for developers and partners inside the company.

Don already wrote about his first day on the job at Google. Tomorrow you can hear him speak on the Enterprise Cloud Summit Panel in New York City. You can follow Don on his personal blog, email him at dondodge at google.com, or follow @dondodge on Twitter.

Posted:

At Google we're excited about Scalable Vector Graphics (SVG). SVG is an open, browser-based standard that makes it easy to create interactive web graphics with new HTML-like tags such as the CIRCLE tag. We like it because it's part of the HTML 5 family of technologies while being search engine friendly; easy for JavaScript and HTML developers to adopt; exportable from your favorite drawing tools like Adobe Illustrator™; and straightforward to emit from server-side systems like PHP and Google App Engine. It's also available in all modern browsers.

As part of our commitment to the Open Web and SVG, we are helping to host the SVG Open 2009 conference this fall at our Mountain View campus. The theme this year is SVG Coming of Age. The conference will be held at the Google Crittenden Campus in Mountain View, California, October 2-4, 2009, with additional workshops on October 5.

Co-sponsored by W3C, the SVG Open conference series is the premier forum for SVG designers, developers, and implementors to share ideas, experiences, products, and strategies. Over 60 presentations will be delivered by SVG experts from all over the world, tackling topics such as design workflows, mobile SVG, Web application development, Web mapping, geo-location based services, and much more.

Two panel discussions will allow the audience to discuss ideas and issues with the W3C SVG Working Group and implementors. Many W3C Members will be participating, including Google, IBM, Mozilla, Opera, Oracle, Quickoffice and Vodafone. The conference schedule and confirmed keynote speakers are now available.

The deadline for early-bird registration is August 31st, so get your registrations in soon! Full-price registration will remain available until October 1, and limited on-site registration may also be available at the registration desk during the conference. The W3C SVG Working Group and W3C's Chris Lilley and Doug Schepers will participate.

A wide range of exciting talks are on the docket. Here's a small sample:

* Ajax Toolkits supporting SVG graphics: Raphaël, dojo, Ample SDK, SVG Web Project, JSXGraph
* SVG in Internet Explorer and at Google
* Beyond XHTML
* Progress in Opera and Mozilla
* Using Canvas with SVG
* Progress in Inkscape
* Implementors and Panel Sessions
* SVG and OpenStreetmap
* SVG in Wikipedia/Wikimedia
* SVG and ODF
* SVG for Scientific Visualization
* SVG for Webmapping
* SVG for Games
* SVG for Mobile Applications
* SVG Wow - demonstrations of great SVG demos

See you there!

Posted:
Given a 49x49 grid of numbers, can you place mines in the cells in such a way that each number represents the number of mines in its 3x3 sub-grid (the cell itself and its 8 immediate neighbors)? Find the maximum number of mines that could end up in the middle row of the grid.

Intrigued? Think you can solve it with a clever algorithm? Here at Google, we know how thrilling it can be to encounter a challenge and then overcome it by coding up a creative solution. Since 2003, we've been privileged to share that experience with a global community of computer programmers through our annual programming competition, Google Code Jam.

We're excited to announce Google Code Jam 2009, powered by Google App Engine. Join the fun and compete in several 2½-hour online rounds, attacking three to four difficult algorithmic problems during each round. You may use your favorite programming languages and tools to code up a solution. When ready, run your solution against our fiendish test data. The algorithm needs to be right, and it needs to be efficient: when N=10000, O(N³) won't cut it!

If you're up to the challenge, visit the Google Code Jam site to register and read the rules. Most importantly, you can practice on the problems from last year's contest, so you are in shape when the qualification round starts on September 2. You could be one of the top 25 competitors who will be flown to our Mountain View headquarters to match wits for the $5,000 first prize, and the title of Code Jam champion!

P.S. Think you can solve our "Mine Layer" problem? Try it out on the Code Jam website!

Posted:
The Developer Sandbox was a new addition to this year's Google I/O. The Sandbox featured a diverse range of developers and apps, all with one thing in common -- they've all built applications based on technologies and products featured at I/O. The Sandbox was very popular with attendees and saw a lot of foot traffic throughout both days of the event. Sandbox developers welcomed the opportunity to interact with fellow developers, discuss their products and answer questions.



We interviewed these developers about their apps, the challenges they faced and the problems they solved, and finally what they learned and their hopes for web technologies going forward. We also asked these developers to create screencast demos of their apps, much like the demos they gave to people visiting their station during I/O.

These video interviews and demos are now available in the Developer Sandbox section of the I/O website. Each developer has their own page with a brief description of their company, their app, and their interview video (if one was filmed) and screencast demo video (if available). For instance, here's a video interview with Gustav Soderstrom of Spotify, who walks us through a demo of their Android app and then talks about the platform and why Spotify chose to develop their app on Android.



Are you building an app on one of Google's platforms or using Google APIs? Please send us a video about your company and your app and you could be featured on Google Code Videos. Click here for the submission form and guidelines.

Each Sandbox developer page also features a Friend Connect gadget that allows anyone visiting the page to sign in with their Friend Connect id and leave comments & feedback. It's a great way to continue the conversation or to ask questions if you did not get a chance to meet them at I/O.

Posted:
We would like to thank the thousands of developers who joined us last week and made this year's Google I/O a wonderful developer gathering. We announced some of the things we've been working on and shared our thoughts on the future of the web. 140 companies joined us to showcase what they've been working on and talk about their experiences building web applications. We hope you left I/O inspired with new ideas for your own products. Our engineers were pumped to get your feedback and were inspired by what they learned from conversations at Office Hours, in the Sandbox, and during the After Hours party.

If you missed a session you really wanted to see at Google I/O, you'll be happy to know that over 70 of the sessions (videos and slides) will be made available over the next few days. For your convenience, you'll also be able to download those videos to view them on the go.

These will be going live soon at code.google.com/io. We'll be releasing I/O content in the following waves:
  • Wed, June 3: Client (Chrome, HTML 5, V8, O3D, Native Client, and more)
  • Thurs, June 4: Google Wave, Mobile/Android
  • Fri, June 5: Tech Talks
  • Mon, June 8: Google Web Toolkit, App Engine, Enterprise
  • Tues, June 9: AJAX + Data APIs, Social
You can check out some of our favorite Google I/O photos here. In addition, check out video interviews with the third-party developers featured in our Developer Sandbox, and see how they've implemented products and technologies represented at I/O.

We've gotten many inquiries about the opening video for the Day 1 keynote. The video is composed of different Chrome Experiments, and the soundtrack music and lyrics were created by our very own Matt Waddell. Lastly, wondering why the Lego character on the Google I/O t-shirt is holding a spray can? For those of you who have t-shirts, turn off your room light and see what's written on the back of the green brick :)

Stay tuned for more updates on Google I/O!

Posted:
Google I/O has sold out and general registration is now closed. If you have received a registration code, you can still register here. If you're unable to join us next week, we will post all of the videos and presentations shortly after I/O -- keep an eye out for updates on that. During the event, you can follow us on the @googleio Twitter account and on the Google Code Blog for the latest in-conference updates and announcements. And if you're coming to Google I/O, we look forward to seeing you there!

Posted:
Google is making the web a more sociable place by contributing to new standards and releasing new products that make it easy to integrate your website with the social web. We've invited a few friends that are helping build the social web to Google I/O so you can learn what's coming next and what it means for you.

Learn from successful developers
Social apps can grow up fast, and some have attracted tens of millions of users. We're planning sessions to help you understand the business side of social apps, and we'll have a panel where you can pick the brains of some of the biggest social app developers in the world.




Make some new friends
There's more to a successful social app than just a creative idea. From analytics to payment processing, there's a lot of code to write beyond the core functionality of an app. Luckily, companies have been springing up to fill the needs of this ecosystem. You can meet some of these new companies in the Developer Sandbox and see how their products can make your app better (and your life easier).




Meet the containers
One of the key benefits of OpenSocial is the incredible distribution it provides to app developers. Building your app on OpenSocial makes it possible to reach hundreds of millions of users around the world. We've got sessions planned to let you meet the folks building OpenSocial platforms and learn more about what kinds of apps work well in different social environments.




The next generation
IBM, Salesforce.com, Oracle, eXoPlatform, SAP, Atlassian - not who you'd expect to be speaking in an OpenSocial session. Speakers from these companies will come together to talk about how the enterprise software development community is bringing social concepts and technology like OpenSocial into the enterprise.

To see all the sessions we've got planned to help you learn about the social web, go to http://code.google.com/events/io/sessions.html, and search for 'social'.

*Keep up with the latest I/O updates: @googleio.

Posted:
As mentioned before, I will be hosting an Ignite at Google I/O on Wednesday, May 27 from 4:15-5:15pm at Moscone West in San Francisco. I'm happy to announce the following nine speakers who will be joining me onstage. In no particular order, here they are - as well as a preview of what they'll be presenting during their five minutes in the hot seat:
  • Leo Dirac - Transhumanism Morality
    Why only geeks and hippies can save the world.

  • Michael Driscoll - Hacking Big Data with the Force of Open Source
    The world is streaming billions of data points per minute. This is Big Data: capital B, capital D. But capturing data isn't enough. We need tools to make sense of it, to help us better understand -- and predict -- what we click and consume. We want to make hypotheses about the world. And to test hypotheses, we need statistics. We need R.

  • Pamela Fox - My Dad, the Computer Scientist: Growing up Geek

  • Tim Ferriss - The Case for Just Enough: Minimalism Metrics
    Looking at how removing options and elements gets better conversions, etc., looking at screenshots of start-ups I'm working with and real numbers. Some humor (I hope) and fun, both philosophical and tactical.

  • Nitin Borwankar - Law of Gravity for Scaling
    Why did Twitter have scaling problems? I spent 6 months thinking deeply about this and derived a simple formula that a high school student would understand. It demonstrates where the center of gravity is moving in the "Next Web" and why this aggregation of CPUs is even bigger than Google's. And oh yes, it explains how to build a service that scales to 100 million CPUs.

  • Kevin Marks - Why are we bigoted about Social networks?

  • Andrew Hatton - Coding against Cholera
    I'll examine what IT life is like on the front line with Oxfam, a humanitarian agency, and how good code can make a real difference to people's lives in all sorts of ways... some of them surprising.

  • Robin Sloan - How to Predict the Future
    OK, back in 2004 I made a video called "EPIC 2014," predicting the future of media (and Google). It turned out to be 100% CORRECT. No, just kidding. But it made a lot of people think, which is really the point of talking about the future. Turns out there's a whole professional discipline of future forecasting. And there are certain ways you can think about the future that will give you better odds of being right than others.

  • Kathy Sierra - Become Awesome

Posted:
Scalable Vector Graphics (SVG) is starting to pop up all over the place. It's showing up natively in browsers (including Firefox, Safari, Opera, Chrome and more). It's natively supported on the iPhone, and work is happening in various open source communities to create options for Internet Explorer. Google uses it under the covers in Google Maps (to create vector line drawings showing where to go), Google Docs (for drawing into presentations), and more. Wikipedia has a huge repository of SVG images, while many tools such as Inkscape, Illustrator, and Visio can either export to SVG or work with it natively. Vector graphics support through SVG and Canvas is consistently one of the top-voted requests by developers.

Since we use and support SVG we thought it would be great to work with the community to host the SVG Open 2009 conference this fall. SVG Open will be in Mountain View at the Google campus from October 2-4, 2009. The theme this year is "SVG coming of age".


We are looking for contributors to present papers or teach courses. Presenters are asked to submit an extended abstract in English with an approximate length of 400 to 800 words by May 15. The abstracts are reviewed by a review committee, and presenters will be informed about acceptance on or before June 26. If your abstract is accepted, you will be asked to submit your full paper by August 31, according to instructions that will be sent to you.

Come and join us in the fall at SVG Open!

Posted:
Ignite is a series of geek nights started by Brady Forrest of O’Reilly Radar and Bre Pettis of Makerbot Industries that has since spread around the world. The nights combine a Make contest (like a popsicle stick bridge-building contest) and a series of fast-paced, short-form tech talks. To check out past Ignite talks, view the videos at http://ignite.oreilly.com/show.

At Google I/O, we'll be doing a one hour Ignite on the first day of the conference (May 27). In typical Ignite fashion, these talks will each be 5 minutes long with 20 slides and only 15 seconds a slide (they auto-advance). We want to hear your cool ideas, hacks, lessons, and "war stories". What do you want to talk about?

We're looking for speakers to participate, so if you're interested in submitting a talk, sign up via the form below.

Submit your talk for Ignite Google I/O

Submission Deadline: May 11th
Speaker Notification (rolling): May 12th

We’re taking submissions from everyone — whether this is your first time or whether you’ve done an Ignite talk before. Once we've chosen the speakers, we'll be sharing who they are after May 12. Stay tuned!

Posted:
This post is part of Who's @ Google I/O, a series of blog posts that gives a closer look at developers who'll be speaking or demoing at Google I/O. Today's post is a guest post written by Alex Moffat, Chief Engineer - Blueprint, Lombardi Software.

Lombardi has been using Google Web Toolkit (GWT for short) since January 2007 to build Lombardi Blueprint, an online, collaborative business process modeling and documentation tool. The client part of Blueprint is written completely in GWT and runs in any modern browser (IE6 and later, including IE7, IE8 or Firefox). One of the biggest advantages of Blueprint is that it's easier to learn and quicker to use than a pure diagramming tool like Visio, and it's more effective because it's focused on process mapping.

One of the things we do to make Blueprint more effective is automatic diagram layout. This allows the user to focus on describing their process instead of worrying about object positioning and line routing. You can see this in action in the video below as objects are added to the diagram.



Remember, this is JavaScript compiled by GWT from Java code, but it's as fast as, or faster than, anything you could write by hand, and it's compact and much, much easier to maintain. The ability to use the excellent tooling available for Java is one of the great advantages of GWT.

One of the goals for our automated layout routines is to generate a flow diagram that looks like it was produced by a human. When the algorithms don't get it quite right, Blueprint also supports hinted layouts so that the user can drag and drop hints about where one object should be positioned in relation to another. Working out what the final layout should be and where the lines should go for large diagrams can be computationally expensive.

Modern browsers have very fast JavaScript engines. For these systems, there are no problems. However, we still need to support the browsers our customers use, which may not necessarily be the fastest or most up-to-date.

This is where GWT gives us a unique benefit. We can implement our algorithms in Java and compile this implementation twice, once with GWT to produce JavaScript to run on the client and once with javac to produce JVM bytecode to run on the server. This lets us use the much faster JVM if we need to, without having to create and maintain separate client and server layout implementations. There's no other toolkit that makes this possible, never mind easy.

Blueprint client code continuously measures how long it takes to perform the layout and routing operation in the browser. If this exceeds our threshold value, then the code dynamically switches to trying a server-side layout. We call the server code with GWT, and the data structures returned via the GWT serialization mechanism are of course the same ones produced by the layout when executed on the client. The time required for a server layout is also measured, which includes both the execution time and any network delays, so we account for the different connection experiences people have. After the first server layout, Blueprint chooses whichever method, client or server, has the lowest average elapsed time. I'm still amazed by how easy this was to implement.
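
Here is a simplified sketch of that selection logic. Because a class like this lives in shared code, GWT compiles it to JavaScript for the browser while javac compiles the same source for the server; the class and threshold names are illustrative, not Blueprint's actual code.

    // Simplified sketch of the adaptive selection described above: track the average
    // elapsed time of client and server layouts and pick whichever is currently cheaper.
    public class AdaptiveLayoutChooser {
      private static final long CLIENT_THRESHOLD_MILLIS = 500; // example switch point

      private double clientAverageMillis;
      private double serverAverageMillis; // includes network round-trip time
      private int clientSamples;
      private int serverSamples;

      public void recordClientLayout(long elapsedMillis) {
        clientAverageMillis = runningAverage(clientAverageMillis, clientSamples++, elapsedMillis);
      }

      public void recordServerLayout(long elapsedMillis) {
        serverAverageMillis = runningAverage(serverAverageMillis, serverSamples++, elapsedMillis);
      }

      /** True if the next layout should be attempted on the server. */
      public boolean preferServerLayout() {
        if (serverSamples == 0) {
          // Only try the server once the client has proven slower than the threshold.
          return clientSamples > 0 && clientAverageMillis > CLIENT_THRESHOLD_MILLIS;
        }
        return serverAverageMillis < clientAverageMillis;
      }

      private static double runningAverage(double average, int previousCount, long newSample) {
        return (average * previousCount + newSample) / (previousCount + 1);
      }
    }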

Damon Lundin and I will be talking at Google I/O this year about how we built Blueprint, both what we do technically with GWT to get high performance and how we organize development so that we can make the most effective use of GWT. We look forward to meeting fellow GWT developers in person!

Posted:
Heads up: early registration for Google I/O ends May 1. After this Friday, the rates increase by $100 and, even worse, you'll lose out on your Google Chrome comic book.

Google I/O will feature 80+ sessions that cover Android, HTML5, App Engine, Chrome, AJAX APIs as well as special sessions for Enterprise and Social app developers. These technical sessions will be led by engineers from Google and a wide range of developers from the community. We'll also have Fireside Chats and Office Hours, giving attendees a chance to meet with Google engineers and get answers to all those pressing questions.

This year's newest addition to Google I/O is the Developer Sandbox. Over 100 developers representing a wide range of companies and apps will be on hand to demo their products, which they built using Google technologies. We've announced a few thus far and will continue to announce more over the coming weeks here on the Code Blog.

We've also worked really hard to make your visit to the San Francisco Bay Area as affordable as possible, including lower-than-usual hotel rates. Those rates expire on May 4 so register and make your hotel reservations today to secure them. For more information visit the Google I/O website.

If you plan to attend, register before the fee increases. We look forward to seeing you at I/O!

Posted:
Google Web Toolkit, or GWT for short, recently went live with its 1.6 release, which also included the Google Plugin for Eclipse and integration with App Engine's Java language support. Google I/O will be rich with GWT content, including a number of sessions on improving productivity and app performance with GWT. In addition, a number of external GWT developers will be leading some of these sessions and/or taking part in the Developer Sandbox.

As mentioned last week, we're giving you a closer look at developers who'll be presenting or demoing at I/O. Here is a taste of these GWT developers below. (New to GWT? Check out this overview)
  • JBoss, a Division of Red Hat
    JBoss is well-known by developers for their enterprise open source middleware. Red Hat developer communities such as the Fedora Project and jboss.org have collaborated with Google on a number of developer initiatives over the years including Google Summer of Code, Hibernate Shards, integration with Drools and the Seam Framework and Google Gadgets integration with JBoss Portal. JBoss will be present at the Developer Sandbox.

  • Timefire
    Timefire produces highly scalable, interactive visualizations of up to millions of data points for business intelligence, analytics, finance, sensor networks, and other industries in what they like to call "Google Maps, but for the time dimension." Their platform's built on Google Web Toolkit from the ground up, but also runs natively on Android. Timefire also uses App Engine's new Java language support for their social charting tool, Gadgets, OpenSocial, GData, Google Maps, GViz, YouTube Player API, and Protocol Buffers. Ray Cromwell will be at the Developer Sandbox as well as speaking on 2 sessions - Building Applications on the Google OpenStack and Progressively Enhance AJAX Applications with Google Web Toolkit and GQuery

  • StudyBlue
    StudyBlue is an academic network which enables students to connect with each other and offers study tools. StudyBlue's website is built entirely with GWT. According to StudyBlue, GWT allows for complete AJAX integration without sacrificing usability or integration capabilities. StudyBlue will be at the Sandbox.

  • Lombardi Blueprint
    Lombardi Blueprint is a cloud-based process discovery and documentation platform accessible from any browser. They've used GWT since early 2007 to write the client side of Lombardi Blueprint. GWT has enabled Lombardi to focus on writing and maintaining their Java code, while taking care of creating the browser-specific optimized AJAX for them. Alex Moffat and Damon Lundin will be at the Developer Sandbox as well as leading a session, Effective GWT: Developing a complex, high-performance app with Google Web Toolkit. (Check out Alex Moffat's video about Lombardi's use of GWT)
Finally, one little-known fact: a number of Google products were developed with the help of GWT. These include Google Moderator, Health, Checkout, Image Labeler, and Base.

Don't forget - early registration for Google I/O ends May 1. This means $100 off the standard list price (and a copy of the Chrome comic book). To register, check out the latest sessions, or see more developers who'll be presenting at I/O, visit code.google.com/io.

*Follow us for the latest I/O updates: @googleio.