Google Webmaster Central Blog - Official news on crawling and indexing sites for the Google index

First Click Free for Web Search

Friday, October 17, 2008 at 7:55 AM

While working on our mission to organize the world's information and make it universally accessible and useful, we sometimes run into situations where important content is not publicly available. In order to help users find and access content that may require registration or a subscription, Google offers an option to web and news publishers called "First Click Free." First Click Free has two main goals:
  1. To include highly relevant content in Google's search index. This provides a better experience for Google users who may not have known that content existed.
  2. To provide a promotion and discovery opportunity for publishers with restricted content.

First Click Free is designed to protect your content while allowing you to include it in Google's search index. To implement First Click Free, you must allow all users who find your page through Google search to see the full text of the document they found in Google's search results (the same document that Google's crawler found on the web), without requiring them to register or subscribe to see that content. The user's first click to your content is free and does not require logging in. You may, however, show a login, registration, or payment request when the user tries to click away from that page to another section of your content site.

Guidelines
Webmasters wishing to implement First Click Free should follow these guidelines:
  • All users who click a Google search result to arrive at your site should be allowed to see the full text of the content they're trying to access.
  • The page displayed to all users who visit from Google must be identical to the content that is shown to Googlebot.
  • If a user clicks through to a multi-page article, the user must be able to view the entire article. To allow this, you could display all of the content on a single page—you would need to do this for both Googlebot and for users. Alternatively, you could use cookies to make sure that a user can visit each page of a multi-page article before being asked for registration or payment; a minimal sketch of the cookie approach follows this list.
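
Purely as an illustration of that cookie approach (a hypothetical sketch, not code Google provides; the cookie name and helper functions are invented for this example), a server could remember which article a visitor unlocked by arriving from Google, and then serve the remaining pages of that same article without a registration prompt:

    from http.cookies import SimpleCookie

    FCF_COOKIE = "fcf_article"  # hypothetical cookie name for the unlocked article

    def grant_article_access(article_id):
        # Build a Set-Cookie header value marking this article as unlocked
        # for the visitor who just arrived from a Google search result.
        cookie = SimpleCookie()
        cookie[FCF_COOKIE] = article_id
        cookie[FCF_COOKIE]["path"] = "/"
        return cookie.output(header="").strip()

    def has_article_access(cookie_header, article_id):
        # Check the incoming Cookie header: may this visitor keep reading
        # the remaining pages of this article without registering?
        cookie = SimpleCookie(cookie_header or "")
        morsel = cookie.get(FCF_COOKIE)
        return morsel is not None and morsel.value == article_id

On a request for page two or later of an article, the server would call has_article_access before deciding whether to show the registration or payment prompt.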

Implementation Suggestions
To include your restricted content in Google's search index, our crawler needs to be able to access that content on your site. Keep in mind that Googlebot cannot access pages behind registration or login forms. You need to configure your website to serve the full text of each document when the request is identified as coming from Googlebot, based on its user-agent and IP address. It's equally important that your robots.txt file allows Googlebot to access these URLs.
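
One way to do this (a minimal sketch in Python; illustrative only, and production code would need caching and error handling) is to treat a request as Googlebot only when the user-agent claims to be Googlebot and a reverse DNS lookup of the IP address, confirmed by a forward lookup, points back to Google:

    import socket

    def is_verified_googlebot(user_agent, client_ip):
        # Cheap check first: does the request even claim to be Googlebot?
        if "Googlebot" not in user_agent:
            return False
        try:
            # Reverse DNS: the host name should belong to Google.
            host, _, _ = socket.gethostbyaddr(client_ip)
        except OSError:
            return False
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        try:
            # Forward DNS: that host name must resolve back to the same IP.
            forward_ips = socket.gethostbyname_ex(host)[2]
        except OSError:
            return False
        return client_ip in forward_ips

If is_verified_googlebot returns True, the server would serve the full text of the document; otherwise it would serve the normal protected version.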

When users click a Google search result to access your content, your web server will need to check the "Referer" HTTP request-header field. When the referring URL is on a Google domain, like www.google.com or www.google.de, your site will need to display the full-text version of the page instead of the protected version that is otherwise shown. The documentation for most web servers explains how to implement this kind of behavior.
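
A minimal sketch of such a Referer check in Python (the host list and helper names are only illustrative; note that the Referer header is supplied by the client, so this check identifies ordinary search visitors rather than providing strong security):

    from urllib.parse import urlparse

    # Hosts treated as Google referrers; a real deployment would cover
    # Google's other country domains as well.
    GOOGLE_REFERRER_HOSTS = ("www.google.com", "www.google.de")

    def referred_by_google(referer_header):
        # True when the Referer header points at a Google domain.
        if not referer_header:
            return False
        host = urlparse(referer_header).hostname or ""
        return host in GOOGLE_REFERRER_HOSTS

    def page_to_serve(headers, full_page, protected_page):
        # First Click Free: serve the full text to visitors arriving from a
        # Google search results page, and the protected version otherwise.
        if referred_by_google(headers.get("Referer", "")):
            return full_page
        return protected_page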

Frequently Asked Questions
Q: Can I allow Googlebot to access some restricted content pages but not others?
A: Yes.

Q: Can I limit the number of restricted content pages that an individual user can access on my site via First Click Free?
A: No. Any user arriving at your site from a Google search results page should be shown the full text of the requested page.

Q: Can First Click Free URLs be submitted using Sitemap files?
A: Yes. Simply create and submit your Sitemap file as usual.

Q: Is First Click Free content guaranteed inclusion in the Google Index?
A: No. Google does not guarantee inclusion in the web index.


Do you have any more questions or comments? Come on over to the Google Webmaster Help forum and join the discussion!


The comments you read here belong only to the person who posted them. We do, however, reserve the right to remove off-topic comments.

54 comments:

Willem said...

Nice idea!

Would it also be possible for users to filter out these results? Sites which only show a question where you have to pay to see the answer already annoy me (experts-exchange comes to mind). This feature seems to promote these kinds of sites.

I have nothing against pay-for-information sites. In fact I use some. But it would be nice to make a distinction between a search for information and a search for places where I can buy information.

Jenn said...

This is absolutely and utterly brilliant. Having optimized for subscription-based companies (Classmates.com and Smartsheet), I have been struggling with providing content for the search engines without compromising what drives revenue.

Thank You Google!
Jenn

cka3o4nik said...

I have two questions:

1. Google has been preaching the idea of building websites for people, not search engines, so webmasters were not allowed to treat Googlebot differently from other users. Now you are saying that we SHOULD treat it differently, as well as treat all Google users differently from all other users. Why is a Google user any more special than a Yahoo user, for example?

2. You are providing an incentive for users to change their UA to Googlebot while doing their normal Internet surfing to get the most content out of it. What would you suggest webmasters do to adjust to this?

Dominik said...

How can I prevent someone from copying my whole site just by supplying/spoofing a google.com referrer on each request?

John Mueller said...

@Willem With First Click Free (FCF) pay-for-information sites will be able to make the whole page available for free (if you are coming in from Google's search results).

Showing "the answer" only to Googlebot and requireing payment from users coming from search results would be considered cloaking.

John Mueller said...

@cka3o4nik I'm not aware of similar programs from other search engines - but if they exist, you're free to implement the same for everyone.

Also, keep in mind that it's possible to verify Googlebot based on a reverse IP/DNS lookup, so surfing with Googlebot's user-agent will likely not be that useful.

Yuvaraj said...

This is really a brilliant job. Hats off to Google again...

Yuvaraj

Philipp Lenssen said...

Just to clarify, per these guidelines it would thus be forbidden to

1) show the content during the first click through for a user from Google
2) while at the same time hiding the content (and show a registration box instead) during the second click through for a user coming from Google that same day?

John Mueller said...

Hi Philipp, no, but close :-)

When a user comes in from Google, he should be able to see the full content, always, every time he comes in from any search on any Google site. However, when that user clicks around within the site after viewing the first content, you are able to limit the available content.

Again, if a user comes in from Google multiple times a day, he should be able to see the full content of the article (as it is crawled and indexed by Googlebot) every time he comes in from Google. It is not limited to the first time that the user comes in from Google.

I hope that makes it clearer :)

Philipp Lenssen said...

> Hi Philipp, no, but close :-)

John, you say "no", but you seem to describe just what I described too -- that it's forbidden to show a registration box for subsequent click-throughs coming from Google. Could you clarify?

Hankwang said...

Will this also put an end to search results for scientific papers that are only accessible to Googlebot and not to the web user?

Example queries:

* inurl:ieeexplore filetype:pdf sampling (replace sampling by something else) - clicking search results gives "your client is not allowed to access".

* inurl:springerlink filetype:pdf spectrum - clicking search results gives you the paper abstract only; the pdf link requests payment.

I quite often encounter this type of search results when I'm not looking for them, and I consider it search engine spam. I've reported it plenty of times, but to no avail.

Vishwa said...

Recently I launched the website jhoomba.com and submitted the URL to Google. The problem is that when I search for jhoomba, Google's search results show a different website.

Please give me a solution.

John Mueller said...

Hi Philipp, in that case I misunderstood you :). If you are showing Googlebot the full content, you should always show that to users clicking through from Google as well. Users should only see a subscription message if they navigate to a different article on the same site (after having received the article in full length because they came in from Google).

Dincho Todorov said...

This is a very good idea, BUT spoofing the referrer is very easy, so I can view such content (FCF) without googling.

How are you planning to prevent this?

Roycer said...

Just implementing referer cloaking would probably not be enough from a publisher's point of view. Since all pages are indexed on Google, people will eventually learn to use Google advanced search to search a domain and get all the content for free.

There is probably also a need for an encrypted cookie, and for sessions which need to be checked.

Stephen said...

Am I reading this right?

Website owners are now expected to reward users for using Google to navigate their site rather than the Website's own navigation.

A search for "site: www.restricted-content-here.com" and then opening all the documents in a new tab would seem to provide all "first click free" content, and when people figure this out, the only way those sites will make any money / signups is by having their valuable content unavailable through this system, and only having teaser content for free - Just like it is now.

Further to that, with this proposal, Google doesn't seem to be providing code for owners to use to filter these users, or treating the search traffic differently in any way. While it's not rocket science to implement, implying that this is a Google service seems dubious, as Google don't seem to be actually doing *anything* here.

I can see the benefit to general web users (In the short term perhaps), but what's the incentive for website owners to actually implement this? Are they going to be penalised in the search rankings if they don't?

I'm just a bit baffled - am I missing something important being offered here? I assume I must be, as Google usually offers well thought through products.

figvam said...

Looks like a new term should be coined for things like this - "content neutrality". Or rather, the lack of it in this case.

Also, it's a pity Google has changed its stance on cloaking. Time to update the webmaster guidelines?

Cloaking refers to the practice of presenting different content or URLs to users and search engines. Serving up different results based on user agent may cause your site to be perceived as deceptive and removed from the Google index.

Some examples of cloaking include:

* Serving a page of HTML text to search engines, while showing a page of images or Flash to users.
* Serving different content to search engines than to users.

Philipp Lenssen said...

I added my thoughts here: http://blogoscoped.com/archive/2008-10-20-n30.html

Stephen said...

There seems to be some confusion here on this issue of cloaking. The distinction here is not between users and search engines, but between users coming from one place, and users coming from another. Google apparently wants its customers to have more access to your site than customers from elsewhere - which shouldn't come as a surprise.

*However*, if you allow googlebot to index the whole page, but only show a limited page to browsers (whether referred from Google or not), then it is cloaking.

Google would possibly (probably?) argue that it's okay as long as the googlebot indexed page has the same content as the page seen by users *when they click through from Google*

I wonder why anybody would want to implement this. Maybe some of the above enthusiasts could explain what good they see here?

Ryan Svoboda said...

This is not something Google is "creating", just endorsing. It is easily implemented, but how many people are actually going to do this? Why would you provide free access only to Google users?

What it boils down to is: will this increase registration conversion rates at sites?

There are two sides to this. First, if showing the first click for free does increase the conversion ratio, then you should do that for everyone who visits your site, not just those coming from Google results.

Second, if you decide to do this only for Google users, people are going to spoof their referer to get to your content for free - not just for the first click, but everywhere this is enabled on your site - which seems like it would reduce registration rates.

I see absolutely no reason a website should choose to do this.

Model said...

:-)

Stephen said...

Here's (I think) a better solution which avoids most of these problems. Implement this, and pay me a commission on revenue (just kidding), or give me a job and I'll implement it for you (kidding a bit less ;) ).

----

1) Update the sitemap spec to support the following true/false flags for each page

* Free View (Is it cost free to view the full page)

* Free Preview (Is it cost free to view a preview of the page)

* Registration View (Is registration required to view the page)

* Registration Preview (Is registration required to view a preview of the page)

* Cache Preview (Should a preview of the page be cached)

* Cache page (Should the full page be cached)

(Depending on the model you require, you could base this on pages or articles - which would require some other modifications to the sitemap spec. This could easily be done without breaking for existing sitemaps.)

For each sitemap, indicate if SSL is supported for the URLs spidered (or alternate URLs, etc)

2) Equip googlebot with a client certificate to identify itself.

When googlebot spiders a site, the site can decide if it wants to let Google index the full article or not. If they are a big enough site to support SSL, they can validate the client is Google with higher certainty.

3) In the search listings, show flags depending on whether the information is free to view/preview and whether registration is required. Allow users to filter search results on these criteria, and setup default preferences.

Penalise sites if they are reported (and confirmed) as lying about the cost and registration flags.

Result: You can index the hidden web in a way allowed by the content authors, but still enable the end users to be in control of the type of sites their search will return.

Webmasters could build their registration and authentication system based on the rules in their sitemap file, so everything would be automatically up-to-date and in agreement between themselves and the search engines.

Just pop the cheque in the post ;)

Eddie said...

this is essentially what webmasterworld and google have been doing since the former erected a pay wall.

adam127 said...

OK, so Google wants me to agree to make all my premium content free to anyone who either 1) uses Google's site-specific search instead of the navigation I provide, OR 2) fakes the referrer.

WOW what a deal!

David

b1 said...

I run a subscription-based website and I also sell a plugin for WordPress that provides subscription services, so I've wrestled with this subject a lot.

Any subscription-based website admin/author will tell you that they are continually trying to find the happy medium of giving enough teaser (public) material to show what the site is about, its quality and depth, etc, but not giving it all away. After all, your aim is to have people pay to read the rest of it.

This balance can and is already achieved by admins selecting some info for public viewing and restricting the rest.

I'm a big fan of Google and the technology it has introduced over the years, but I don't think this suggestion in its current form will be embraced because Google is asking to circumvent this crucial and delicate balance.

I might be more interested in this if Google's search engine would not display the FCF content in its search results (won't this show up in the cached area?), yet still have the search engine match any terms that the bot may have found in the FCF page. Also, I don't think users should be given any FCF privileges - only the Google Bot, and only if it doesn't reveal exactly what it saw. If the user is interested in the content, then let the website decide whether to let them in or not - maybe with a free trial if they sign up, or a discount on a new subscription, or whatever the admin thinks is best.

Many thanks to Google though for at least sharing with us what goes on inside its incubator and giving us an opportunity to comment. How many other titans out there are so open and engaging? Thank you Google, I appreciate that.

- BC.

Stephen said...

B1, Google's cache can be instructed to ignore your page by use of a meta tag in the document:

<meta name="robots" content="noarchive">

For premium content this would seem like a good idea anyway, but it doesn't solve all the issues.

G Ragu said...

great info man...

Beth Ann said...

My feelings on this "service" are mixed, but I've got one simple question for the naysayers: How many *regular* people really know how to fake a referrer? Many of the Joe Searchers I know can't figure out the difference between the address box and the Google search box. A few of them might be able to figure out that an advanced search could let them see more content for free, but most people I know aren't that industrious.

Dominik said...

@beth ann: The real problem is not those few people who know how to fake referrers in their browsers... the real problem arises from people writing their own bots which always supply a Google referrer, trying to steal the content of my site. As long as there is no 100% reliable way to tell whether someone really did a search on search engine XY or is just supplying a google.com?q=asdfasdf referrer, I will not implement this. I can check whether someone claiming to be Googlebot really is Googlebot with DNS lookups, but currently there is no way to check the referrer for authenticity, and that's the big problem.

Stephen said...

Beth Ann,

1) You don't need to spoof a referer to "beat" this; all you need to do is a Google search for "site:www.premium-content-here.com" and then open all the links in new tabs. You then get the referer in a perfectly legitimate way, and get access to all the FCF content. Not difficult.

2) How long do you think it would be before browser versions / plugins / mods appear with a "Via Google" button that reloads the page with a Google referer header? Various plugins and browsers already exist to alter these headers (mainly for debugging purposes), and don't require any specialist knowledge.

Subscription/paid sites need signups to be sustainable, and for a key aspect of their business cannot rely on such weak "security".

IMO, a better approach to this would be for search engines to support meta data about content (via the sitemap), which specifies whether or not it's free / signup, which appear in the search results, and allow the searcher to specify preferences about whether they require free/pay/subscribe sites in their results. Robust authentication of googlebot could allow webmasters to permit indexing of a full article whilst retaining it as premium content for customers.

Cloaking content in this way would be far less damaging, as users would be aware that they were about to view a pay / subscription link, and would have the option to filter their search results to restrict these sites if they wished.

If extra processing & development costs are an issue, Google could potentially charge businesses a small amount per clickthrough on links cloaked like this - in the manner of Adwords. The customer would be well aware that the link was pay / register beforehand, so the conversion rates would be high. Google searchers would of course want to be confident that search rankings were not otherwise affected by this, and would be able to filter those results if desired.

Deep / hidden web indexed, customers happy that they are in control of their search results. Businesses happy that their content can be found and conversion rates are high. Google making more money.

Everyone is happy, the world becomes a better place. Free cake for everyone. Steve hailed as saviour of teh interwebs.

Now I'm off to solve the Middle East. What's all the fuss about?

Philipp Lenssen said...

> Subscription/paid sites need signups to be sustainable, and
> for a key aspect of their business cannot rely on such
> weak "security".

It could be enough for many sites to just show the registration box to, say, 80% of the people, not caring about all the different ways power users would see the content, as it's not really about "securing" the site from users (after all, Google searchers can see it anyway; though the case may be different if you want to "secure" your site from non-Googlebot crawling). It's more about deceiving, uhm, I mean convincing enough people to send around the link they found in Google believing that the content will show for their friend, and then the friend will see the payment request instead of what was intended to be sent.

Stephen said...

I agree it may be enough for some types of site in the short term (before they go bust, or realise their mistake ;) ), but the aim of FCF is to make premium content freely available, and if it becomes widespread (which I can't see happening in this form - for anything other than non-profit sites - or sites that don't really need signups anyway), awareness and tools will develop to automate this for users. I'd have to place little value on signups/payments on the content to use a system like this.

I'm suggesting that the entire model of allowing access based on referer is flawed, and suggesting a model by which IMO a better result could be achieved.

FCF is based on the idea that users want to be able to access stuff fully when they click on it (From Google, and by extension from any search engine or directory - why stop there?). I fundamentally don't think that this model is sustainable or securable.

I'm suggesting that it would be a better outcome for everyone if searchers were simply able to distinguish between these results, and choose to filter based on whether they would tolerate signup or pay-for links.

Someone has to pay for the web, and not all content is freely available. I think people will be much more understanding about this (and hence links will convert better to customers) if it's clear to them that the information will cost or require signup before they click on the link.

Eddie said...

great idea, but unfortunately as is, poor implementation.

the main question is: what will keep people from using the "site:" search modifier, or faking the referrer, to see my entire site's premium content for free? it doesn't have to be a perfect solution, but, as is, it's just too easy to be useful.

one idea is to make FCF only available to users logged in to google. google would then pass an anonymous reference to the google user ID to the content provider's site, and the owner of the site can decide how many FCFs to give each user (1, 3, 5, 10, who knows). google would make a web service available to verify that the user ID being supplied was recently active in a google search. of course this locks the user even more into the googlesphere, and restricts opening up FCF to users from other SEs. now if everyone was on openID that problem too would be solved.

using the referrer alone is certainly not protective enough to be useful.

John said...

John-

Can you tell us how long we have to adhere to these guidelines before being penalized?

If penalized, would it be just the offending pages or the entire site?

Thanks

Peter said...

FCF has been around, though perhaps not well known, for years. I agree that this is practically begging for a simple Firefox StealThisWebPage plugin, which is why I helped kill the suggestion to implement FCF at my old job. Google Search quickly turns up some blogs on which folks have released one-line Javascript "bookmarklets" for accessing Wall Street Journal, and suggestions for using an existing privacy-focused Firefox plugin to take advantage of FCF.

I don't know if it's possible to truly secure this (especially with Google's insisting that there be no per-individual limit on FCF access). Chances are that the reader knows the "title" of the restricted content. Clearly the reader knows the site's URL. All the reader has to do, as eddie said, is search Google for that title with a "site:" phrase. It doesn't matter what fancy-pants crypto, Web Services, etc. Google might build for FCF -- hacking around FCF will always be fairly easy to do. So the value proposition of FCF is inversely proportional to the cost & difficulty of paying for the restricted content. WSJ wants $103 for an annual subscription, so naturally people will try to abuse that more than if they only asked for $10 per year, or were actually willing to sell a seven-day pass for $1.99.

Dave Hawley said...

RE: "we sometimes run into situations where important content is not publicly available"

That would make the content private (by the publisher's CHOICE), correct?

AJJA said...

Hi, I have a new question, although I believe it should probably go under another article... here I go. HELP, can anyone help? The reason for the cry for help is that we look after a site that is no longer showing on Google for the search phrases we would expect, even though it used to rank strongly and highly for them.

I have seen notes from a lot of site owners reporting the same problem. For instance, our site is no longer showing for phrases or words prevalent in its domain name and title, while other, less related (but still important) terms are showing. Meanwhile, another site that only recently went live is now surpassing the site in question for similar search phrases. Can anyone advise? I had assumed that relevance to the domain name, when searching for that name or phrase, should help - and on every other site and account it seems to - but this site is not highly profiled and, if it were not listed on Google Maps, would not be found for the name within its own domain name. The site has had a sitemap submitted, has been verified, and is also in Google Analytics. It shows for many terms, but the puzzle is that it is not ranking for its most expected term. When we type in the full domain name www.thesite.com.au (not with site:), it only shows twice, where any other site would dominate for the better part. Are there any issues or things I should consider, or who should I talk to? That would help us understand more and give us an opportunity to adjust and build a better profile.

Chris Boggs said...

Has anyone implemented this and seen results yet? I am concerned about links being built to pages people have been able to navigate to, but once clicked by someone else it would force registration. Also, I am concerned about how other engines may view this.

admin said...

Well, it says First Click Free but turns out to be every click free.

What webmasters will do is just disallow googlebot accessing their premium content.

Then how will users be able to find premium content on Google?

joequincy said...

Although I understand Google's stance on this, I would recommend not referring to it as a "service" when Google is not doing anything.

I propose a modification of this concept. Google already tracks clicks on its search results, so why not take this a step further? Each time a user clicks a search result, Google POSTs the user's IP address to the page in question, which would then allow that IP to view full pages for the next X minutes (something short, 5-10). After that point, that IP would be shown the usual registration page for Y amount of time (from 1-24 hours). In this fashion, the user could be verified as having visited from Google results (only allow the POST from Google IPs), and it would provide users with full results for a short time (enough to answer a simple question, since sometimes the first page does not have a suitable answer) but not cause a site's premium content to be freely accessible at all times using simple referrer spoofing, which could easily be automated via a small addon in any of the major browsers.

In addition, this would indeed actually be a Google service, since Google would be providing a form of authentication.

Stjepan said...

I wonder if anything has changed in FCF - have there been any new developments recently? It really seems, as someone said, that it's in fact Every Click Free.

loomster said...

There is one more problem with this approach. If a publisher does not allow Google access to restricted content, then there is nothing preventing a malicious person from buying a subscription, stealing the restricted content, and publishing it as their own. Google, having no record of the content, would have no way to verify that this is in fact duplicate content or in violation of copyright. Personally, I think Google should index restricted content but not force publishers to open it to anyone clicking through from Google. I don't think First Click Free makes sense for publishers whose livelihood depends on proprietary research.

Antoine said...

I've been reading this article with interest and it seems the main debate is about the paid subscription websites.

I have a website which is subscription based but free. I want to encourage members to join as the content is of pertinent interest to the diving community. I also would like to increase my Rank status based on the information collected on my site.

I am struggling with the mechanics of implementing the first click to allow viewers to view the content they search for and will also encourage them to register.

Can someone give me advice on how to do this without having to open the site up completely? You can view my site at www.mydivealbum.com

CT said...

Since this hasn't stopped newspapers from whining about Google stealing their ad dollars, I'm guessing that they're just trying to scapegoat Google for the internet.

Mithrandir said...

Can the user remove all sites which take these actions from their Google search?
If Websites can limit users to those who pay, we should equally be able to choose not to search for them.

pixels said...

Will the visitor be able to print out or save the web page under First Click Free? Thanks.

Naeem Temur said...

Keep it up Google, good

Perfume Street said...

So... in other words, Google does nothing. We change the settings on our server to check whether the referer is from Google, allowing only one click, and as an extra piece of securing your content we then implement a cookie or possibly IP tracking. But... if somebody finds all your pages in Google's index and then changes IP and clears their cookies, they can view all your site's password-protected content.

I might be wrong, but has Google done anything here, or are they just telling webmasters a way that the search engines can view their content?

Bartek said...

Good comments folks!
It looked like a good idea at first glance, but reading the comments, which shed some light from different angles, makes me as sceptical as most of you.

Anyhow, if one wanted to implement this without greater technical knowledge, how would one do it? Shouldn't Google provide more than this: http://www.google.com/support/webmasters/bin/answer.py?answer=80553 ?

paul said...

Guys guys, have you read this :
Q: Can I allow Googlebot to access some restricted content pages but not others?
A: Yes.

All those who say "So I'm supposed to make available all my premium content to everyone, what's the point" you're wrong.

Look, this is how I see this option being interesting: you have great premium or subscribed content, and it's frustrating to have to wonder, "OK, should I have this searchable or not? It's an awesome article/discussion/research piece that has great keywords and could bring me incredible-quality traffic, but it's premium and I can't give it away." Now it's gonna be easy to have only the first page of this article/discussion/research available for search. And it becomes a great landing page, with potentially a better conversion rate than those landing pages you were struggling with before (a/b testing, design with gigantic call-to-action buttons, etc).

Google Webmaster Central said...

Hi everyone,

Since over a year has passed since we published this post, we're closing the comments to help us focus on the work ahead. If you still have a question or comment you'd like to discuss, feel free to visit and/or post your topic in our Webmaster Central Help Forum.

Thanks and take care,
The Webmaster Central Team