Google Webmaster Tools has a new name: it's now called Google Search Console. Why change the name? "It turns out that the traditional idea of the 'webmaster' reflects only some of you. We have all kinds of Webmaster Tools fans: hobbyists, small business owners, SEO experts, marketers, programmers, designers, app developers, and, of course, webmasters as well," explains Google.
Google Search Console will continue to offer the same features, including Google Search analytics, information about external and internal links, mobile usability issues, crawling and indexing issues, security and spam.
Google tested various ways to highlight whether search results are optimized for mobile devices. Some experiments displayed icons next to mobile-friendly results, while others placed icons next to results that aren't optimized for mobile devices.
Google announced that in the coming weeks it will add a "mobile-friendly" label next to mobile search results that are optimized for viewing on a phone. Google will only add this label if the page doesn't use plugins like Flash, its text is readable without zooming, its content is sized so that users don't have to scroll horizontally or zoom, and its links are placed far enough apart to be tapped easily.
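The "no horizontal scrolling or zooming" criterion mostly comes down to the page's viewport configuration. A minimal sketch (the values below are the commonly recommended defaults, not something Google mandates):

    <!-- tell mobile browsers to render the page at the device's width -->
    <meta name="viewport" content="width=device-width, initial-scale=1">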
I think that adding a small icon is a better idea than using the label "mobile-friendly" next to search results: an icon takes up less space and makes mobile-optimized results easier to spot.
Blogger blogs are no longer that limited. After adding support for static pages and favicons, Blogger added some new advanced features to the "search preferences" section of the settings page.
Now you can edit the description meta tag without editing the template. If you edit the description meta tag for the entire blog, you can also write descriptions for your blog posts. This is useful because Google's snippets sometimes rely on this meta tag.
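Once the blog-level description is enabled, Blogger adds a standard description meta tag to the page's head. The resulting markup looks something like this (the wording is, of course, whatever you enter):

    <meta name="description" content="A blog about Google and the technology behind its products.">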
For the first time you can create a custom 404 error page for a Blogger blog without buying a domain. Just "enter an HTML message that will be displayed on the Page Not Found page instead of the generic message." Google has some tips for creating useful 404 pages and there's even a widget powered by Google search that shows related links and a search box with appropriate search suggestions.
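The custom message is plain HTML, so even a hypothetical snippet as simple as the one below works; Blogger displays it inside your regular template on the Page Not Found page:

    <h3>Sorry, this page doesn't exist.</h3>
    <p>It may have been moved or deleted. Try the <a href="/">homepage</a>
    or use the search box in the sidebar to find what you were looking for.</p>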
There are also options for customizing the robots.txt page and robots header tags. It's probably a good idea to use Blogger's robots.txt page as a template (http://yourblog.blogspot.com/robots.txt) and only add some new pages you want to be ignored by search engines.
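For reference, a customized file might look like the sketch below. The first sections mirror what Blogger typically serves by default (check your own /robots.txt, since the defaults may differ), and the extra Disallow line is the hypothetical addition:

    User-agent: Mediapartners-Google
    Disallow:

    User-agent: *
    Disallow: /search
    Disallow: /p/private-page.html
    Allow: /

    Sitemap: http://yourblog.blogspot.com/feeds/posts/default?orderby=UPDATED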
As reported last month, the code for Google +1 buttons could be improved so that the buttons load faster and stop blocking other resources. Google updated the code and now recommends that publishers generate the new code.
"We're introducing a new asynchronous snippet, allowing you to make the +1 experience even faster. The async snippet allows your web page to continue loading while your browser downloads the +1 JavaScript. By loading these elements in parallel, we're ensuring the HTTP request to get the +1 button JavaScript doesn't lead to an increase in your page load time," explains Google.
Google also optimized the existing code so that the button renders up to 3 times faster. Even if you don't update the code, you'll still benefit from these changes.
The code generator is easy to use and I've noticed that a lot of sites added a +1 button next to Facebook's "Like" button. It's unfortunate that Google didn't optimize the code when it was released.
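From memory, the asynchronous snippet produced by the generator looks roughly like the sketch below; treat the exact URL and tag names as approximate and copy the code from Google's generator rather than from here:

    <!-- placeholder where the button should render -->
    <g:plusone></g:plusone>

    <!-- asynchronous loader: the page keeps rendering while plusone.js downloads -->
    <script type="text/javascript">
      (function() {
        var po = document.createElement('script');
        po.type = 'text/javascript';
        po.async = true;
        po.src = 'https://apis.google.com/js/plusone.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(po, s);
      })();
    </script>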
Google displays a special ad if your query includes the site: operator, followed by a domain name: "Do you own domain.com? Get indexing and ranking data from Google."
Many webmasters use the site: operator to check the number of pages indexed by Google, so it's a good opportunity to promote Google Webmaster Tools.
This isn't a regular AdWords ad, since it's labeled as "Google promotion". From what I know, it's not even possible to create an AdWords campaign for all the searches that use the site: operator.
Google has recently published a report about the Web, which includes a lot of interesting stats. The results were obtained from a sample of 4.2 billion web pages indexed by Google.
"The average web page takes up 320 KB on the wire (Google took into account the embedded resources such as images, scripts and stylesheets). Only two-thirds of the compressible material on a page is actually compressed. In 80% of pages, 10 or more resources are loaded from a single host."
The average number of images per page is 29.39 and the average size of all the images from a page is 205.99 KB. A web page includes an average of 7.09 external scripts and 3.22 external stylesheets. The average size of the scripts is 57.98 KB and the size of the stylesheets is 18.72 KB. Google also found that only 17 million pages from the sample use SSL (about 0.4%).
Urs Hölzle, Google's Senior Vice President of Operations, said that the average web page takes 4.9 seconds to load and makes 44 calls to different resources. "Speed matters. The average web page isn't just big, it's complicated. Web pages aren't just HTML. A web page is a big ensemble of things, some of which must load serially," said Urs Hölzle.
Google Browser Size is an experimental service that shows whether a web page has interface elements that can't be seen by a significant number of people. "Google Browser Size is a visualization of browser window sizes for people who visit Google. For example, the 90% contour means that 90% of people visiting Google have their browser window open to at least this size or larger."
The service can be used for any web page, but the data is obtained from the visitors of google.com. As you can see from the screenshot, Google's top result can be viewed by more than 99% of the visitors if no ad is displayed above the results.
Google Browser Size is one of the many Google tools that help you optimize web sites.
Google Related Links is a new Google Labs service that lets you add a list of related web pages and searches to your site. Unlike the identically named service released by Google in 2006, the new Related Links restricts the results to your own site.
"Related Links is a tool to help webmasters increase page views on their sites. Given a page on your site, Related Links can choose the most related pages from your site and show them in a gadget. You can embed this gadget in your page to help your users reach other pages easily. Related Links also suggests searches that users can run within your site to find even more related pages."
The service is not publicly available, but you can try a demo and ask for an invitation. "To apply for an invitation, please send an email to [email protected] stating your Gmail address, website domains and approximate pageviews per day."
Once you get the invitation, log in using your Google account and click on "Manage Related Links". You'll be able to configure the gadget, customize the look and feel and enable some advanced features: highlighting the keywords from the page for visitors that come from a search engine, blacklisting web pages from the list of related links and removing prefixes or suffixes from titles.
After configuring the gadget, paste the code in one of your sites and test if it works well. If you edit the gadget's configuration, the changes are reflected instantly and you don't need to change the code.
The results are relevant, but there are some issues which show that the service is still in an early phase: there's an encoding bug when displaying page titles, and links can only be opened in a new window.
If you create a web site, one difficult task is testing whether it displays properly in the browsers and operating systems your users are likely to use. Unfortunately, this usually requires installing multiple operating systems, buying more than one computer or using virtual machines.
An easier way to test your site is to use online services like BrowserShots, which generates screenshots for a web page in more than 80 versions of the most common browsers used in Windows, Linux, BSD and Mac. The process takes time and you may have to wait up to an hour to see the screenshots.
Adobe BrowserLab is a recently-launched service that has the advantage of generating screenshots almost instantaneously, but the number of browsers that are tested is smaller: Firefox 2.0 (XP, OS X), Firefox 3.0 (XP, OS X), IE6 (XP), IE7 (XP), Safari 3.0 (OS X). The service has an interesting "Onion Skin View", which superimposes one screenshot over another to see the differences between the different renderings. BrowserLab is integrated with Dreamweaver CS4, but you don't need to buy the software to use the online service.
"Cross-browser testing has been one of the biggest challenges for Web designers because it is such an arduous and time-intensive task. Now with Adobe BrowserLab, designers have a simple solution that enables comprehensive browser compatibility testing in just a matter of minutes," says Adobe's Lea Hickman. The bad news is that the service is free for a limited time.
* it's not critical to get the top rankings; users often click through to the next pages of results to find appropriate images. There are many "subjective" queries and users tend to explore instead of trying to find the perfect result.
* use images that are large enough.
* use high-quality images.
* include descriptive text next to the images (see the sketch after this list).
* place the images so that it's not necessary to scroll too much in order to find them.
* Google Image Search clusters (almost) identical images and usually only one of them is displayed. If more than one page embeds the image, Google tries to find the most relevant page for that query.
* there are many new use cases for Image Search: inspiration, visual dictionary for foreign languages, shopping, research.
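To illustrate the "descriptive text" advice, here's a minimal sketch of how an image might be embedded; the file name, alt text and caption are hypothetical:

    <img src="/photos/golden-gate-bridge-sunset.jpg"
         alt="Golden Gate Bridge at sunset, seen from Baker Beach"
         width="800" height="533">
    <p>The Golden Gate Bridge photographed at sunset from Baker Beach, San Francisco.</p>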
Google Webmaster Tools is a valuable source of information if you have a website: Google lists crawl errors, popular searches that lead to your site, backlinks and SEO tips. Until now, Google listed the URLs from your site that returned the 404 (page not found) error, without mentioning how GoogleBot found those addresses. This has been fixed and you can find the list of pages that link to broken URLs from your site in the "Linked From" column of the "Diagnostics > Web crawl > Not found" report.
"If we report crawl errors for your website, you can use crawl error sources to quickly determine if the cause is from your site or someone else's. You'll have the information you need to contact them to get it fixed, and if needed, you can still put in place redirects on your own site to the appropriate URL," suggests Google Webmaster Central blog.
Google Toolbar 5 for IE (and soon for Firefox) has a useful feature that adds custom 404 error pages. If a site doesn't customize the error page and you click on a broken link or mistype the web address, Google Toolbar shows a list of suggestions: the site's homepage and some searches that could help you locate the right page.
Google implemented a similar error page for google.com and now you can add it to your own site. If you have access to the web server and you can customize the 404 error page, Google provides a widget that enhances the page. "The 404 widget is a quick and easy way to embed a search box on your custom 404 page and provide users with useful information designed to help them find the information they need. Where we can, we'll also suggest other ways for the user to find the information they need, thus increasing the likelihood that they'll continue to explore your site," explains Google.
The widget can be found in Google Webmaster Central, by selecting a site from the dashboard and clicking on Tools > Enhance 404 pages. Unfortunately, it requires JavaScript, so a small number of visitors will not see it.
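From memory, the code generated by Webmaster Tools looked roughly like the sketch below; the script URL and variable names may differ, so copy the snippet from your own account rather than from here:

    <script type="text/javascript">
      var GOOG_FIXURL_LANG = 'en';                      // language of the suggestions
      var GOOG_FIXURL_SITE = 'http://www.example.com';  // your site's address
    </script>
    <script type="text/javascript"
      src="http://linkhelp.clients.google.com/tbproxy/lh/wm/fixurl.js"></script>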
Google has some recommendations for creating useful error pages: informing visitors that the page could not be found, recommending popular pages from the site, using a consistent design and preventing 404 pages from being indexed by search engines.
The Webmaster Central Blog mentions that this feature is experimental and that not all the sites show great suggestions.
Google Webmaster Tools shows even more information about your site. Now you can find historical data about the most popular queries for which your site appeared in the top 10. You can compare the queries for the last week with the queries from last month or two months ago and find what has changed. It's also interesting to compare the top search queries with the top clicked queries and see if you can improve the title of the page or the actual content. Google Webmaster Tools also lets you find data from specialized search engines like Blog Search and restrict it to international Google domains, like the UK, Canada or India.
While some of this data can be obtained using tools like Google Analytics, you can't find the queries that bring you high rankings, but no clicks. In Webmaster Tools, the information is available in Statistics > Top search queries. Special tip: if you download all the query stats as a CSV file, you'll also get data for each subdirectory. For example, I found that my posts from March (that are placed in the subdirectory googlesystem.blogspot.com/2007/03/) ranked well in India for these queries: [google transliteration], [google bookmarks], [google screensaver].
Another update lets you exclude some of the links automatically generated by Google that are displayed in some cases for the first search result. "Sitelinks are extra links that appear below some search results in Google. They serve as shortcuts to help users quickly navigate to the important pages on your site. (...) Now, Webmaster Tools lets you view potential sitelinks for your site and block the ones you don't want to appear in Google search results."
Google autogenerates the list of sitelinks at least in part from internal links from the home page. (...) If you want to influence the sitelinks that appear for your site, make sure that your home page includes the links you want and that those links are easy to crawl (in HTML rather than Flash or Javascript, for instance) and have short anchor text that’ll fit in a sitelinks listing. They’ll also have to be relevant links. You can’t just put your Buy Cheap Viagra now link on the home page of your elementary school site and hope for the best. (...)
Not all searches trigger sitelinks. This only happens for searches that Google thinks might benefit from them. For instance, if they think the query has enough inherent intent (...), they figure the listings alone are likely the one-click answer for the searcher.
Google Webmaster Tools becomes more useful every month. Initially developed as a way to submit sitemaps, the service expanded its focus by displaying interesting information Google has about your sites but doesn't make available to the public: top search queries (the queries that most often returned pages from your site), PageRank distribution, backlinks and crawling errors. Google Webmaster Tools is also a way to alert webmasters about sites that violate Google's quality guidelines.
A new feature shows a list of feeds from your site and the number of subscribers that come from Google services. "If your site publishes feeds of its content, this page will display the number of users who have subscribed to these feeds using Google products such as iGoogle, Google Reader, or Orkut. Because readers can use other sites and aggregators to subscribe to your content, your total number of subscribers from all sources may be higher." At the beginning of the year, Google started to include the number of subscribers in Feedfetcher's user-agent, but only people that had access to the logs or used a service like FeedBurner could see it. Now everyone who authenticates a site in Google Webmaster Tools can see the number of subscribers.
Even though this blog's main feeds are redirected to FeedBurner, they still have Google subscribers. That's because Blogger does a temporary redirect to FeedBurner (HTTP/1.x 302 Moved Temporarily) and Google Reader treats them as separate feeds.
Google Reader's main competitor, Bloglines, shows extensive information about each feed: the number of subscribers and a list of those who made their subscriptions public. This information is even included next to the feed's URL in search results and can be obtained through an API. The complete list of backlinks, displayed in Google Webmaster Tools, is publicly available at Yahoo: just use the link operator. So some of the data could be easily made available to the public without causing too much trouble.
Web applications bring your data online and make it available anywhere there's an Internet connection. But what happens when you're on a plane or when you can't find a WiFi hotspot?
Google launched an open source browser extension for IE and Firefox called Google Gears that enables web applications to be available offline.
"Gears is an incremental improvement to the web as it is today. It adds just enough to AJAX to make current web applications work offline. Gears today covers what we think is the minimal set of primitives required for offline apps. It is still a bit rough and in need of polish, but we are releasing it early because we think the best way to make Gears really useful is to evolve it into an open standard. We are releasing Gears as an open source project and we are working with Adobe, Mozilla and Opera and other industry partners to make sure that Gears is the right solution for everyone," explains Google.
Once you install the extension, every Gears-enabled web application will ask your permission before storing data offline.
Depending on the functionality implemented in the application, Google Gears caches resource files so they're available offline, stores data in a SQLite database that has powerful search features and synchronizes data in the background.
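As a rough illustration of the database part of the API, a Gears-enabled page could do something like the sketch below (based on Gears' documented JavaScript interface; the database and table names are made up):

    <script type="text/javascript" src="gears_init.js"></script>
    <script type="text/javascript">
      if (window.google && google.gears) {
        // create a local SQLite-backed database (the user grants permission first)
        var db = google.gears.factory.create('beta.database');
        db.open('notes-db');
        db.execute('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');
        db.execute('INSERT INTO notes (body) VALUES (?)', ['Written while offline']);

        // read the data back, even with no network connection
        var rs = db.execute('SELECT body FROM notes');
        while (rs.isValidRow()) {
          document.write(rs.field(0) + '<br>');
          rs.next();
        }
        rs.close();
      }
    </script>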
Google Gears will enable you to read the most recent messages from Gmail while offline or to edit your documents in Google Docs even without a network connection.
Google Reader is the first Google application powered by Gears. To enter the offline mode, just click on the small arrow and all the recent feed items are downloaded to your computer. You can disconnect from the Internet or click on "work offline" in your browser and you will still be able to read your favorite feeds in Google Reader. Like in any feed reader installed on your computer. Well, almost, because Google Reader doesn't download images or other multimedia files embedded in the posts.
You can even close Google Reader's tab and try to load the site again: it will instantly show the cached data. Try to add tags to a post or star it; once you go back online, Google Reader will synchronize the data.
P.S.: Another nice update in Google Reader is that you can see the exact number of unread posts for each feed. Google Reader learned to count beyond 100.
Update: Here's a presentation from Google Developer Day Sydney that explains the motivations behind this project and shows some demos.
Last year I wrote a post about content separation that suggested a way to separate the main content of a page from the less interesting content around it. Most elements of a template (navigation, footer etc.) could confuse search engines into thinking a page is about something other than its actual subject. As a result, a page could end up ranking well for unrelated queries and not so well for the right ones.
As a solution for this problem, Yahoo introduces a 'robots-nocontent' class that can be added to any HTML tag.
"This tag is really about our crawler focusing on the main content of your page and targeting the right pages on your site for specific search queries. Since a particular source is limited to the number of times it appears in the top ten, it's important that the proper matching and targeting occur in order to increase both the traffic as well as the conversion on your site. It also improves the abstracts for your pages in results by omitting unrelated text from search result summaries.
To do this, webmasters can now mark parts of a page with a 'robots-nocontent' tag which will indicate to our crawler what parts of a page are unrelated to the main content and are only useful for visitors. We won't use the terms contained in these special tagged sections as information for finding the page or for the abstract in the search results."
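In practice, you add the class to the containers that wrap the boilerplate. A minimal sketch:

    <div class="robots-nocontent">
      <!-- navigation, footer, ads: still visible to visitors,
           but ignored by Yahoo's crawler when ranking and summarizing the page -->
      <ul id="navigation">
        <li><a href="/">Home</a></li>
        <li><a href="/archive">Archive</a></li>
      </ul>
    </div>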
While this could be useful to reduce the importance of unrelated parts of your site (like AdSense's section targeting), I can't stop wondering if this isn't the search engine's job. For example, Google can detect the navigation links from a page (you can notice this if you use the mobile version), but I don't think it minimizes the importance of the keywords used in that area.
Google Webmaster Central shows the 100 most popular phrases used by other sites to link to your site. If you go to Statistics / Page analysis, you'll find a list of anchor phrases, obtained by removing punctuation.
Google's algorithms use those keywords to understand a page better. Sometimes a page ranks well for some keywords that are not even contained in that page, but they're used in links from other pages. For example, Yahoo.com is the fourth result for [under 18] mostly because of backlinks that use this anchor text.
"This information is useful, because it helps you know what others think your site is about. How sites link to you has an impact on your traffic from those links, because it describes your site to potential visitors," notes Vanessa Fox.
The problem was that Google was reluctant to report the number of subscribers from these services, so you couldn't know for sure how many readers of your feed use Google. The personalized homepage has always had a large number of users, but after the latest update many people switched to Google Reader.
Currently, these counts include users of both Google Reader and the Google Personalized Homepage, and over time will include subscriptions from other Google properties.
The "User-Agent:" header of our crawler includes the name of our crawler ("FeedFetcher-Google") along with its associated URL, the subscriber count, and a unique 64-bit feed identifier ("feed-id"). (...)
Below is an example of the contents of the "User-Agent:" header:

    User-Agent: Feedfetcher-Google; (+http://www.google.com/feedfetcher.html; 4 subscribers; feed-id=1794595805790851116)
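If you want to pull that number out of your own access logs, one quick way is to match the "N subscribers" fragment. A small JavaScript sketch, using the example line above:

    // extract the subscriber count from a Feedfetcher-Google user-agent string
    var ua = 'Feedfetcher-Google; (+http://www.google.com/feedfetcher.html; ' +
             '4 subscribers; feed-id=1794595805790851116)';
    var match = ua.match(/(\d+) subscribers/);
    if (match) {
      var subscribers = parseInt(match[1], 10); // 4 for this feed
    }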
If you use FeedBurner for feed stats (a good option if you use hosted services like Blogger's Blog*Spot and you don't have access to server logs), there's good news. Starting tomorrow, you'll be able to see the number of Google subscribers. "This information will show up in tonight's subscriber reports (meaning that most of you will start to see the data on Saturday morning, U.S. Central Time)."
Google Webmaster Tools added a new feature: a complete list of the links that point to your site and a list of your internal links. Google's link: operator shows only some of the backlinks. Now, because Google trusts you (you validated the site, so you have access to it), you can see the number of backlinks for each page of your site and a list of those backlinks.
The interface is pretty difficult to use, especially for large sites, so it's a good idea to download the data and analyze it in Excel or another spreadsheet application.
Google's blog says there are some limitations: "We do limit the amount of data you can download for each type of link (for instance, you can currently download up to one million external links). Google knows about more links than the total we show, but the overall fraction of links we show is much, much larger than the link: command currently offers."
If you underestimated the importance of submitting a sitemap for your site to Google, or if you didn't know what sitemap format to choose, watch this WebProNews interview with Vanessa Fox from Google.
A sitemap is useful to let Google discover all the pages of your site. Even though it's not necessary to submit a sitemap if you use internal links properly, sometimes it's so easy to obtain one that there's no reason not to submit it. If you have a blog, your feed could be submitted as a partial sitemap. If you have a simple site, ROR Sitemap Generator will crawl it and generate a sitemap.
The sitemap protocol, developed by Google, and supported by Yahoo and Microsoft, is useful to create complex sitemaps that include information about the last update of a page or its importance. "Sitemaps enhance the current model of Web crawling by allowing webmasters to list all their Web pages to improve comprehensiveness, notify search engines of changes or new pages to help freshness, and identify unchanged pages to prevent unnecessary crawling and save bandwidth."
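For reference, a minimal sitemap that uses the optional freshness and priority fields looks like this (the URLs and dates are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/</loc>
        <lastmod>2007-06-15</lastmod>
        <changefreq>weekly</changefreq>
        <priority>0.8</priority>
      </url>
      <url>
        <loc>http://www.example.com/about.html</loc>
        <lastmod>2007-01-10</lastmod>
        <changefreq>yearly</changefreq>
        <priority>0.3</priority>
      </url>
    </urlset>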
Vanessa Fox recommends creating a sitemap especially for new sites and for dynamic sites with a lot of pages that aren't easily reachable through links.