[Factor table (on-page keyword usage): most labels were lost in extraction, leaving orphaned importance scores and consensus notes. Recoverable entries:]
14. Keyword Use / Number of Repetitions in the HTML Text on the Page – 33%
15. Keyword Use in Image Names Included on the Page (e.g. keyword.jpg) – 33%
17. Keyword Density Formula (# of Keyword Uses ÷ Total # of Terms on the Page) – 25% (a computational sketch of this formula follows the table)
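Factor 17 is the only entry here defined by an explicit formula, so it can be stated precisely in code. A minimal sketch in Python (the function name and the simple tokenizer are illustrative assumptions; the survey itself prescribes only the ratio):

    import re

    def keyword_density(page_text: str, keyword: str) -> float:
        """Keyword Density = (# of keyword uses) / (total # of terms on the page)."""
        terms = re.findall(r"[a-z0-9']+", page_text.lower())
        if not terms:
            return 0.0
        return terms.count(keyword.lower()) / len(terms)

    # 2 uses of "seo" out of 8 terms -> 0.25
    print(keyword_density("SEO guide: learn SEO basics and ranking factors", "seo"))

This handles single-term keywords only; a multi-word phrase would need a sliding-window count instead of terms.count.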
• Andy Beal – Keyword use in external link anchor text is one of the top SEO factors
overall. I’ve seen sites rank for competitive keywords—without even mentioning the
keyword on-page—simply because of external link text.
• Andy Beard – Keyword Use in the Meta Keywords Tag – ignore them unless using a
blogging platform which can use the same keywords as tags. Google ignores them.
• Christine Churchill – Taking the time to create a good title tag has the biggest payoff of
any on-page criteria. Just do it!
• Duncan Morris – It’s worth pointing out that even though having keywords in the meta description doesn’t impact rankings, they can play a significant role in the site’s click-through rate from the SERPs.
• Peter Wailes – Domain name keyword usage gains most of its strength from the anchor text people are then likely to link to you with, not so much from any inherent value, which is lower in my opinion.
[Factor table (page-specific, non-keyword factors): most labels were lost in extraction. Recoverable entries:]
3. Use of Links on the Page that Point to Other URLs on this Domain – 41% (low importance)
4. Historical Content Changes (how often the page content has been updated) – 39%
• Russell Jones – If Google only ranked the “tried and true”, their results would be old and
outdated. Recency is a valuable asset when links are hard to come by.
• Tom Critchlow – Factors like recency (freshness) and content changes are difficult to pin down. A fresh page is a real asset when trying to rank for fresh queries and when QDF kicks in, but at other times an established page can be more of a benefit; sometimes you need one and sometimes you need the other.
• Peter Meyers – Anecdotally, it feels like freshness is more important than ever. I’m
amazed how often a blog post ranks within the first day, holding a top-10 position before
finally settling a few spots (or even pages) lower.
• Carlos Del Rio – HTML validation is not necessary, but running validation is an easy way to catch broken code that can trap spiders. If you are not linking out at all, you are sending a signal that you are not part of the Internet as a whole. Creating topical association is very important to maintaining a strong position.
• Ian Lurie – Ratio of code to text and HTML validation don’t have direct impacts, but by focusing on these factors you create semantically correct markup and fast-loading, content-rich pages, which do have a huge impact (a rough calculation sketch follows these comments). The description tag and static/non-static URLs won’t impact rankings, but they do impact click-through on your listing once you see it. So I’m not suggesting you ignore your description tag or use messy URLs; just expect that changing them brings more clicks for the rankings you have, not better rankings.
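Ian Lurie’s code-to-text ratio is straightforward to approximate. A rough sketch using only Python’s standard library (the class and function names are illustrative; which measurement, if any, engines actually use is unknown):

    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        """Collects visible text, skipping <script> and <style> contents."""
        def __init__(self):
            super().__init__()
            self.chunks, self._skip = [], 0
        def handle_starttag(self, tag, attrs):
            if tag in ("script", "style"):
                self._skip += 1
        def handle_endtag(self, tag):
            if tag in ("script", "style") and self._skip:
                self._skip -= 1
        def handle_data(self, data):
            if not self._skip:
                self.chunks.append(data)

    def text_to_code_ratio(html: str) -> float:
        """Visible-text bytes divided by total HTML bytes (higher = more content-rich)."""
        parser = TextExtractor()
        parser.feed(html)
        text = "".join(parser.chunks).strip()
        return len(text.encode()) / max(len(html.encode()), 1)

    page = "<html><head><style>body{}</style></head><body><p>Hello world</p></body></html>"
    print(round(text_to_code_ratio(page), 2))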
[Factor table (page-specific link popularity factors): most labels were lost in extraction. Recoverable entries:]
4. Page-Specific TrustRank (whether the individual page has earned links from trusted sources) – 65%
6. Topic-Specificity/Focus of External Link Sources (whether external links to this page come from topically relevant pages/sites) – 58%
8. Location in Information Architecture of the Site (where the page sits in relation to the site’s structural hierarchy) – 51%
9. Internal Link Popularity (counting only links from other pages on the root domain) – 51% (a counting sketch follows the table)
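Factor 9 above is a pure counting exercise. A minimal sketch of that bookkeeping over a crawled list of (source, target) link pairs; the naive root-domain helper is an illustrative assumption (it mishandles ccTLDs like .co.uk):

    from urllib.parse import urlsplit
    from collections import Counter

    def root_domain(url: str) -> str:
        """Naive registrable-domain guess: the last two host labels."""
        host = urlsplit(url).hostname or ""
        return ".".join(host.split(".")[-2:])

    def internal_link_popularity(links):
        """Per target URL, count only links whose source shares the target's root domain."""
        counts = Counter()
        for source, target in links:
            if root_domain(source) == root_domain(target):
                counts[target] += 1
        return counts

    links = [
        ("http://example.com/a", "http://example.com/b"),       # internal
        ("http://blog.example.com/x", "http://example.com/b"),  # same root domain
        ("http://other.org/y", "http://example.com/b"),         # external, ignored
    ]
    print(internal_link_popularity(links))  # Counter({'http://example.com/b': 2})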
• Jon Myers – SEO ranking, for me, is won in the external factors today. It is the old 80/20 rule: time needs to be invested in getting your linkage right, as this is where you will win. Make sure you are focusing the keyword anchor text and directing it to the relevant pages. The focus has to be on a mix of quality and quantity, and don’t get all your links from one type of source; make sure you have a blend, as I believe this counts well for you too.
Using PageRank to identify high-value links (while making sure they are relevant) is always a good starting point for refining your links: clean out the bad ones and refocus the anchor text on the good ones. I tend to find that, more often than not, about 85% of external links will have brand keywords as anchors, so you could be missing some great opportunities. Never forget, though: once the bots are there, make sure the internal linkage is good, as it counts for a lot!
[Factor table (domain-level link popularity factors): some labels and scores were lost in extraction. Recoverable entries (sketches of the graph computations behind factors 1 and 2 follow the table):]
1. Trustworthiness of the Domain Based on Link Distance from Trusted Domains (e.g. TrustRank, Domain mozTrust, etc.) – score lost in extraction
2. Global Link Popularity of the Domain Based on an Iterative Link Algorithm (e.g. PageRank on the domain graph, Domain mozRank, etc.) – 64%
3. Link Diversity of the Domain (based on number/variety of unique root domains linking to pages on this domain) – 64%
6. Links from Domains with Restricted Access TLD Extensions (e.g. .edu, .gov, .mil, .ac.uk, etc.) – 47% (13.8% moderate contention)
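Factors 1 and 2 name well-documented graph computations, so both can be sketched compactly on a toy domain graph. The seed set, damping factor, and iteration count below are conventional illustrative choices, not values from this report:

    from collections import deque

    graph = {  # domain -> domains it links to (toy data)
        "a.com": ["b.com", "c.com"],
        "b.com": ["c.com"],
        "c.com": ["a.com"],
        "d.com": ["c.com"],
    }

    def trust_distance(graph, seeds):
        """TrustRank-style metric: BFS hops from hand-picked trusted domains."""
        dist = {s: 0 for s in seeds}
        queue = deque(seeds)
        while queue:
            node = queue.popleft()
            for nxt in graph.get(node, []):
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        return dist  # domains unreachable from any seed are absent

    def pagerank(graph, damping=0.85, iterations=50):
        """Iterative PageRank on the domain graph (uniform teleport; no dangling nodes here)."""
        nodes = list(graph)
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iterations):
            new = {n: (1.0 - damping) / len(nodes) for n in nodes}
            for n in nodes:
                for nxt in graph[n]:
                    new[nxt] += damping * rank[n] / len(graph[n])
            rank = new
        return rank

    print(trust_distance(graph, seeds=["a.com"]))  # {'a.com': 0, 'b.com': 1, 'c.com': 1}
    print({n: round(r, 3) for n, r in pagerank(graph).items()})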
• Carlos Del Rio – There’s likely a tipping point for Nofollowed vs. Followed links to a domain: the mix isn’t a factor until too large a share of the links are Nofollowed, at which point it has a negative impact (a ratio sketch follows this list).
• Will Critchlow – Temporal growth of links above and beyond the value of the links
themselves tends to only have a positive impact on QDF-type queries in my experience.
• Aidan Beanland – Google have stated in the past that .edu, .mil and .ac TLD extensions
do not inherently pass any more value than others, but that alternative factors may make
this seem to be the case.
• Ann Smarty – Domain strength is a highly important factor (still). We keep seeing pages
with 0 strength of their own hosted on reputable domains ranked very high for very
competitive words.
• Lisa D Myers – I do think the link distance between trusted domains and yours could have an impact; the bots are becoming more intelligent in their reading and will carry associations between domains with them as they move on to the next site a page links to. Using LSI (Latent Semantic Indexing) was just the start for the search engines; I believe the algorithm is now so much more sophisticated and has the power to read latent semantics not only within the content on a page but between sites. My mind boggles when I think about the process; it’s a bit like when you were little and tried to imagine the end of the universe! Again it comes down to content: if you generate highly valuable and relevant content, the brilliant links will come to you. I know, I know, it’s such a cliché, but unfortunately true. If links are the currency of the web, content is the bank!
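Carlos Del Rio’s tipping point (first comment in this list) reduces to a simple ratio check. A toy sketch; the 80% threshold is an invented illustration, not a known Google value:

    def nofollow_share(followed: int, nofollowed: int) -> float:
        """Share of a domain's inbound links that carry rel=nofollow."""
        total = followed + nofollowed
        return nofollowed / total if total else 0.0

    TIPPING_POINT = 0.8  # hypothetical threshold; the real behavior, if any, is unknown

    share = nofollow_share(followed=150, nofollowed=850)
    if share > TIPPING_POINT:
        print(f"{share:.0%} of links nofollowed: past the hypothetical tipping point")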
[Factor table (domain-level, keyword-agnostic factors): many labels were lost in extraction. Recoverable entries:]
1. Site Architecture of the Domain (whether intelligent, useful hierarchies are employed) – 52%
4. Domain Registration History (how long it’s been registered to the same party, number of times renewed, etc.) – 36%
5. Server/Hosting Uptime – 32%
6. Hosting Information (what other domains are hosted on the server/c-block of IP addresses) – 31%
7. Domain Registration Ownership Change (whether the domain has changed hands according to registration records) – 31% (11.3% moderate consensus)
10. Domain Ownership (who registered the domain and their history) – 25%
12. Domain “Mentions” (text citations of the domain name/address even in the absence of direct links) – 24%
14. Citations/References of the Domain in the Yahoo! Directory (beyond the value of the link alone) – 24%
15. Citations/References of the Domain in DMOZ.org (beyond the value of the link alone) – 23%
16. Citations/References of the Domain in Wikipedia (beyond the value of the link alone) – 22%
23. Citations/References of the Domain in Google Knol Articles (beyond the value of the link alone) – 13%
29. Use of Google’s Hosted Web Apps (not App Engine) on the Domain – 3%
• Adam Audette – Many of these factors aren’t directly related to how Google will score a domain for ranking, BUT they all have a huge effect on the SEO of the site. For that reason it was slightly difficult to pull them out one by one. I believe DMOZ is still very juicy. Hint: Google still uses the directory. Double hint: search for “clothing” sometime and see what 2 of the top 10 results are. That’s significant, especially because there’s no ability to get a link on the ranking category page at DMOZ (which feeds Google’s directory). Citations/mentions/quality directories are certainly tracked and factored in, along with Google’s domain detective work. XML sitemaps can help with crawl fluidity but aren’t a scoring factor per se.
• Marshall Simmonds – Search engines either don’t care to perform, are unable to perform, or simply aren’t good at comprehensive organic crawls of large sites (those in the millions of pages), due to the size and depth of the content. This means implementing XML sitemaps is critical to the success of enterprise-level sites, whereas smaller sites may not see as much benefit.
• Wil Reynolds – Alexa and Compete rankings would be of very little value given the prevalence of Google Analytics and the Google Toolbar. They can get much more accurate data from their own properties.
• Richard Baxter – Recent changes to Domain Registration Ownership, especially if the
domain has been allowed to expire, impact the results extremely negatively.
• Ian Lurie – Use of AdSense/Google Apps/Google Search or other search-engine-owned tools, though, won’t impact results at all. If your site is hurting so badly, SEO-wise, that you have to point an AdWords ad at it to get crawled, you’ve got bigger problems.
[Factor table (social media / Twitter data signals): all labels were lost in extraction; only orphaned scores remain.]
• Marty Weintraub – Twitter data isn’t a factor yet, but it’s probably coming.
• Hamlet Batista – Matt Cutts explained in a video that Google doesn’t care how many Twitter followers you have. Their algorithms only care about the links.
• Dan Thies – Put me down for “no way, never” on all these.
• Todd Malicoat – Social bookmarking is a quality indicator. Brand mentions are a quality
indicator. If I was a search engine engineer, I would likely rank brand mentions based on
social media conversations from third parties that were easiest to derive valid data from.
• Ian McAnerin – I’m inclined to believe that in this case “sometimes a link is just a link”, to paraphrase Freud.
[Factor table (traffic, usage, and query data signals): all labels were lost in extraction; only orphaned scores remain.]
• Jessica Bowman – While usability factors are likely in the formula, I haven’t seen much to indicate they are impacting rankings, especially for larger, authoritative websites. Companies do need to focus on these, because they will likely have a bigger impact in the next year.
• Andy Beal – While Google may well be experimenting with including these factors in their algorithm, I’ve seen no evidence to support widespread usage.
• Adam Audette – CTR on a search result is a large cumulative factor, and brings in page
load time as well, which is something we're very focused on at present.
• Carlos Del Rio – Brand and domain additives to search terms have become especially
important since the Vince change.
• Ian Lurie – None of these factors have a significant impact YET. But they’re coming. If you think Google’s ignoring all that toolbar data and SearchWiki info, you’re mental.
[Factor table (negative ranking factors): most labels were lost in extraction. Recoverable entries:]
9. Cloaking by IP Address – 46%
10. Hiding Text with CSS by Offsetting the Pixel Display Outside the Visible Page Area – 44% (low importance; a detection sketch follows the table)
12. Excessive Links from Sites Hosted on the Same IP Address C-Block – 41%
28. Use of “Poison” Keywords in Anchor Text of External Links (e.g. student credit cards, buy viagra, porn terms, etc.) – 32%
33. Link Acquisition from Buying Old Domains and Adding Links – 24% (very minimal importance)
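Factor 10 leaves a recognizable CSS fingerprint, which makes a naive detector easy to sketch. The property list and pixel threshold below are illustrative assumptions, not an engine’s actual rules:

    import re

    # CSS properties commonly abused to push text off-screen (illustrative list)
    OFFSET = re.compile(r"(text-indent|left|top)\s*:\s*(-\d+)px", re.IGNORECASE)

    def suspicious_offsets(css: str, threshold_px: int = 2000):
        """Flag declarations that offset content far outside the visible page area."""
        return [(prop, int(px)) for prop, px in OFFSET.findall(css)
                if abs(int(px)) >= threshold_px]

    css = ".kw { position: absolute; left: -9999px; } p { text-indent: -4px; }"
    print(suspicious_offsets(css))  # [('left', -9999)]; the -4px indent is benign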
• Andy Beard – On excessive repetition of the same anchor text in a high percentage/quantity of external links to the site/page: a perfectly optimized link points to content that is a perfect landing page for the keyword, and Google isn’t going to penalize you for something they expect you to do. Tell the truth with your links.
On hidden text and links:
o Is it part of a navigation system that allows the user to eventually display the content?
o If you hide a whole bunch of keywords, or keyword-stuffed links, it could be a significant factor.
o With CSS you could have the header in the footer or the footer in the header; do 100+ links in that part of the visible page make sense for users?
On link acquisition from buying old domains: if you redirect and host the old content on the new domain, this can be achieved successfully.
[Factor table (link-acquisition negative factors, including paid/rented links): all labels were lost in extraction; only orphaned scores remain.]
• Adam Audette – All killers. The last one is a grey area...but a major factor. If a link is
determined to be paid, it will normally be filtered out from the site's link graph. But there
are occasions when a serious penalty will occur from too many paid links.
• Chris Bennet – I don’t know what measures Google has taken to algorithmically spot low-quality paid/rented links, but it would be very easy to build a tool that could spot 80-90% of the crap without breaking a sweat.
• Hamlet Batista – Links from banned sites are pretty much worthless.
• Todd Malicoat – Most links won’t hurt you, but if you put significant effort into
obtaining a link that won’t help you, you’ve negatively impacted your bottom line. Make
sure you are hunting for links that matter.
• Ian McAnerin – Links are not a rankings factor – trust and topic are. Links just represent this. If you can show that a link has little or no trust or is unfocused, then it will not be worth much. If you can show it neither carries trust nor accurately indicates the topic, then there is no reason to count it.
Geo-Targeting Factors:
[Most factor labels were lost in extraction. Recoverable entries:]
1. Country Code TLD of the Root Domain (e.g. .co.uk, .de, .fr, .com.au, etc.) – 69%
10. Geographic Location of Visitors to the Site (the country/region from which many/most visitors arrive) – 30% (10.2% light consensus)
11. Geo-Tagging of Pages via Meta Data (e.g. Dublin Core Meta Data Initiative) – 24%
• Joost de Valk – Ranking in different countries has different requirements. For some countries, for instance, Google cannot reliably determine server location based on IP, and some languages look so similar to Google’s algorithm that weird stuff sometimes happens (Dutch pages ranking in German results, for instance).
• Russell Jones – Any opportunity you have to tell Google explicitly which region your site is designed for, take it. Make their job as easy as possible.
• Wil Reynolds – It wouldn’t make sense for the address associated with the registration of a domain to have too large an impact, as this would severely hurt sites that are registered in one country yet have content for multiple countries.
• Aidan Beanland – In my experience Google still relies mainly on the ccTLD, IP location
of host and Webmaster Tools regional target. Secondary cues are given less importance
than in other search engines.
Language of the site can act as an automatic geo-filter, as only queries in that language
would match content from that country. However, this can (and does) cause confusion
when the same language is spoken in multiple countries, or the same words are used
across multiple languages.
• Kristjan Mar Haukson – On the address associated with the registration of the domain: we have worked with large companies whose address is given in one country but who target another, and this has not played any role that we have seen.
[Question heading lost in extraction; the responses below address whether Google’s results have shifted toward well-known brands (the “Vince” change mentioned by several panelists).]
51%
[Statement lost in extraction.]
36%
Google is now showing a slightly stronger preference towards websites associated with well-known, public brands.
9%
Google is now showing a much stronger preference towards websites associated with well-known, public brands.
4%
No major shift occurred that preferences Google’s results towards well-known, public brands.
[Question heading lost in extraction; the responses below address how Google treats content hosted on subdomains relative to the root domain.]
83%
Content on subdomains inherits some, but not all, of the query-independent ranking metrics of the root domain (or other subdomains) and is judged partially as a separate entity.
10%
Content on subdomains never inherits the query-independent ranking metrics of the root domain (or other subdomains) and is judged largely as a separate entity.
7%
Content on subdomains inherits all or nearly all of the query-independent ranking metrics of the root domain (or other subdomains) and is judged much the same as other content on the shared root domain.
TO WHAT EXTENT DO YOU BELIEVE GOOGLE WEB SEARCH EMPLOYS DATA GATHERED
FROM GOOGLE ANALYTICS TO INFLUENCE THEIR SEARCH RANKINGS?
74%
Google Analytics data is used only in aggregate form to help with pattern identification
and broad user behavior analysis.
16%
Google Analytics data is not used in any form to influence Google’s organic web search rankings.
6%
Google Analytics data is employed on a website by website basis and can positively or
negatively affect a site's rankings.
4%
Google Analytics data is employed on a website by website basis, but can only impact
search rankings consideration positively (no web spam or penalty analysis is conducted).
WHICH OF THE FOLLOWING STATEMENTS MOST ACCURATELY REPRESENTS YOUR
BELIEF/EXPERIENCE ABOUT HOW 301 REDIRECTS ARE HANDLED BY GOOGLE?
70%
301s pass a high percentage (but not 100%) of query-dependent and query-independent ranking factors from one URL to another only when certain content & spam analysis algorithms are satisfactorily met.
23%
301s universally pass a high percentage (but not 100%) of the query-dependent and query-independent ranking factors from one URL to another.
7%
301s universally pass 100% of the query-dependent and query-independent ranking factors from one URL to another.
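Mechanically, a 301 is just an HTTP status code plus a Location header; everything above concerns how Google chooses to credit it. A self-contained sketch using Python’s standard library (the paths and port are illustrative):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    OLD_TO_NEW = {"/old-page": "/new-page"}  # illustrative URL mapping

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path in OLD_TO_NEW:
                self.send_response(301)  # "Moved Permanently"
                self.send_header("Location", OLD_TO_NEW[self.path])
                self.end_headers()
            else:
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"ok")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), RedirectHandler).serve_forever()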
[Question heading lost in extraction; the responses below address whether Google derives value from (nofollowed) Wikipedia links.]
68%
Yes, but these citations are not treated directly as links, merely as indications of potential quality/authority/trustworthiness.
26%
No. Wikipedia links only appear to pass value because many other sites/pages scrape and re-publish the links without nofollows.
6%
Yes, the links are treated as though the nofollow didn’t exist.
[Question heading lost in extraction; the responses below address how the role of links in Google’s ranking algorithm will evolve.]
48%
Links will decline in importance, but remain powerful, as newer signals rise from usage data, social graph data & other sources to replace them.
37%
Links will continue to be a major part of Google’s ranking algorithm, but dramatic fluctuations will occur in how links are counted and which links matter.
15%
Links will continue to be a major part of Google’s ranking algorithm, much as they have been over the past 5 years.
0%
Links will become largely obsolete, much the way keyword stuffing fell by the wayside in the late 1990s.