How To Gauge Keyword Difficulty And Find The Easiest Keywords To Rank For

Coming up with a list of keyword ideas is the easy part. The hard part is figuring out what it takes to rank #1 for each keyword and using that information to prioritise your list and plan your SEO strategy.

Many keyword tools (Ahrefs included) try to solve that problem by showing you a “keyword difficulty” or “keyword competitiveness” metric – but can you rely on their judgement?

Well, the goal of this article is to give you the definitive answer to this question.

No one really knows how Google ranks pages

Basically, the entire SEO industry is nothing but hundreds of thousands of people using trial and error to figure out how Google ranks pages.

In a nutshell, all we know today is that Google uses over 200 different ranking factors, with the 3 most important being Links, Content and RankBrain (not necessarily in that order).

We also know that Google is experimenting a lot with machine learning and artificial intelligence algorithms, which should completely revolutionize “search” in the next few years.

So where am I going with this?

If you want to determine keyword difficulty with 100% accuracy, you need to use exactly the same algorithms that Google uses to rank pages.

So does any third‐party tool have access to Google’s ranking algorithms?

Nope.

Could they develop information‐processing algorithms that boast the same level of sophistication as Google’s?

Very unlikely.

That’s why no keyword difficulty checker is perfect, and each tool can only give you its best estimate.

But even an estimate is better than nothing, right? And besides, certain tools are much more accurate than others (wink).

Important: a lot of people who are new to SEO mistakenly rely on the “Competition” metric that they see in Google Keyword Tool. Please be advised that this metric has nothing to do with ranking difficulty and only shows how many advertisers are bidding to show their ads in the search results for a given keyword.

How to determine the ranking difficulty of a keyword

The only way to learn how difficult it would be to rank on top of Google for a specific keyword is by carefully analysing the pages that already rank there.

Ideally, you’d want to vet these pages for all of Google’s 200+ ranking factors. But since no one (except Google) really knows how much each individual factor contributes to the resulting ranking of a page, it makes sense to focus on the biggest ones: links and content.

Links

Let’s use one of the keywords we’re targeting in the Ahrefs Blog as an example: “anchor text”

The quickest way to see the number of backlinks the Top10 ranking pages for this keyword have is to put it into Ahrefs’ Keywords Explorer tool and scroll down to the “SERP overview” report:

The “Domains” column shows how many unique websites link to a given page. And it’s not that hard to see a general pattern: the more sites link to a page, the higher it ranks in Google.

In fact, we’ve studied the correlation between a page’s number of referring domains and its position in Google across 2 million keywords, and it turned out to be a strong ranking factor:

One other interesting takeaway from the above graph is that the number of referring domains to a page has a better correlation with Google rankings than just a raw number of backlinks. So, as a general rule, it’s better to get one link from 10 different websites than 10 links from a single website.
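
To make that concrete, here’s a minimal sketch of the kind of rank‐correlation check described above. The rows and field layout below are made up for illustration (this is not an Ahrefs export); it simply shows how you could compare referring domains and raw backlinks against SERP position using a Spearman rank correlation:

```python
from scipy.stats import spearmanr

# Hypothetical data: for each ranking page, its SERP position, raw backlink
# count and referring-domain count. In practice you'd collect thousands of
# rows across many keywords.
serp_rows = [
    (1, 1450, 320),
    (2,  980, 210),
    (3,  400, 150),
    (4,  260,  90),
    (5,  120,  60),
]

positions   = [row[0] for row in serp_rows]
backlinks   = [row[1] for row in serp_rows]
ref_domains = [row[2] for row in serp_rows]

# Spearman is a rank correlation, so it suits ordinal data like SERP positions.
# A negative value means "more links go with a better (lower) position".
rho_backlinks, _ = spearmanr(positions, backlinks)
rho_domains, _   = spearmanr(positions, ref_domains)

print(f"backlinks vs position:         rho = {rho_backlinks:.2f}")
print(f"referring domains vs position: rho = {rho_domains:.2f}")
```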

But apart from the sheer quantity, there’s also a quality factor at play: a small number of high‐quality links may trump a larger number of lower‐quality ones.

For that we have a metric called URL Rating (or “UR”).

You can see from the graph above that UR correlates with Google ranking much better than the raw number of linking domains. That’s because Ahrefs’ URL Rating takes into account the quality of backlinks (to a certain extent) and was specifically designed to reflect the ability of a page to rank well in Google (read more about Ahrefs’ metrics here).
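
To illustrate the difference between counting links and weighting them by quality, here’s a toy scoring sketch. This is not Ahrefs’ UR formula (that one is proprietary); it only shows how a handful of strong links can outscore a larger pile of weak ones once each link’s weight depends on the authority of the site it comes from:

```python
import math

# Toy quality-weighted link score: each linking domain contributes
# log(1 + its own authority) instead of a flat "+1". Purely illustrative,
# not Ahrefs' actual UR calculation.

def raw_count(linking_domain_authorities):
    return len(linking_domain_authorities)

def quality_weighted(linking_domain_authorities):
    return sum(math.log1p(a) for a in linking_domain_authorities)

page_a = [2, 3, 2, 1, 2, 3, 1, 2, 2, 1]  # ten weak linking domains
page_b = [80, 65, 90]                    # three strong linking domains

print(raw_count(page_a), raw_count(page_b))                # 10 vs 3
print(quality_weighted(page_a), quality_weighted(page_b))  # page_b scores higher
```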

And yet, even with UR (which is the highest correlated metric in the SEO industry) we’re only scratching the surface of how Google would process backlink factors.

There’s just too much to consider:

  • Where is the link located on the page?
  • Is that link likely to be noticed and/or send traffic?
  • What is the anchor text of that link?
  • What is the surrounding text of that link?
  • How many other backlinks are on the page?
  • At what pace is the page acquiring new backlinks?
  • & so on.

Authority of a domain

A lot of SEOs believe that the so‐called “domain authority” (or “domain rating”) has a big influence on a page’s ability to rank.

But at the same time many SEO professionals are convinced that such a thing as “domain authority” does not exist.

So who’s right and who’s wrong?

Well, here at Ahrefs, we’ve studied the correlation of domain‐level backlink factors across 2 million keyword searches and plotted them alongside some key page‐level factors:

As you can tell from our data above, domain‐level factors show a noticeably weaker correlation with rankings than page‐level factors. And yet that correlation is still quite solid.

Does this mean that Domain Rating helps you rank higher?

I’m afraid we can’t confirm that based only on this correlation. Correlation ≠ causation.

But what our data suggests is that you should be able to outrank high‐DR websites if you have more links coming to your page.

And this wraps up my very brief overview of how to approach keyword difficulty from a backlinks standpoint.

Usually, SEOs won’t go too deep when reviewing a given SERP: they’ll just look at the number of linking domains and the UR/DR of the top‐ranking pages and settle for that information. But for some important keywords, you may want to go as far as reviewing the actual backlinks, where they come from and what it would take to replicate them.
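
If you want to turn that quick SERP review into a number you can sort your keyword list by, here’s one rough, home‐grown heuristic. It is not Ahrefs’ Keyword Difficulty formula; it simply maps the median referring‐domain count of the current Top10 into an arbitrary 0–100 bucket:

```python
import statistics

# Rough difficulty heuristic based only on the referring-domain counts of the
# Top 10 ranking pages. The thresholds are arbitrary and exist purely so you
# can prioritise a keyword list.

def rough_difficulty(top10_referring_domains):
    median_rd = statistics.median(top10_referring_domains)
    if median_rd < 10:
        return 10
    if median_rd < 50:
        return 30
    if median_rd < 200:
        return 60
    return 90

# Hypothetical referring-domain counts for the Top 10 results of two keywords.
print(rough_difficulty([3, 5, 8, 2, 14, 7, 6, 9, 4, 11]))                  # easy-ish
print(rough_difficulty([420, 310, 150, 90, 600, 220, 180, 75, 330, 260]))  # hard
```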

Content

It is true that you can easily outrank pages with vastly more backlinks if they’re lacking relevance to the search query.

Here’s a keyword that perfectly illustrates what I mean: “chocolate lab”

Looks like the pages with only 6–20 referring domains are outranking the pages with 900–1,000 referring domains.

How is that possible?

Well, if you open that Wikipedia page with over a thousand referring domains, you’ll see that “chocolate Labrador” is only a small sub‐section of a very big article:

Meanwhile, the articles ranking above that Wikipedia page are entirely dedicated to this specific breed:

 

This is a perfect illustration of how relevant content can outrank even the strongest backlink profile.

But don’t get too excited about it just yet.

What we see in this example is called “lack of relevant content.” The top‐ranking results are targeting a broader search query (Labrador retriever), rather than a very specific one that people are searching for (chocolate lab).

That is a massive opportunity for relevant content to shine, and that’s how those two articles got to the top without a lot of backlinks.

But you don’t see this kind of thing very often.

Usually what you get in the SERP is “slightly imperfect content” (at best). The top‐ranking results are 100% relevant to a search query, but they could do a slightly better job of giving visitors what they’re looking for.

This kind of SERP won’t give you the same level of competitive advantage as “lack of relevant content,” where you can rank without backlinks.

So how do you know if the search results for your keyword are lacking relevant content, giving you a good chance to beat them without links?

And how do you make your own page 100% relevant to a given keyword in the eyes of Google?

Let me try to address these two things.

“Conventional on‐page SEO” vs topical relevance

Imagine you put your target keyword in Google and see that the top‐ranking pages don’t use that keyword in their Title/URL/Headline.

This indicates that you can easily outrank them if you just use the keyword in your page’s Title/URL/Headline, right?

Wrong!

The best practices of on‐page SEO in 2017 are not as straightforward as they were back in 2010.

Back then, Google didn’t have fancy things like Hummingbird and RankBrain, so it needed some very strong clues to understand what your page was about. Putting your exact‐match keyword in the Title/URL/Headline of your page gave a strong competitive edge over the pages that weren’t doing that.

But this trick doesn’t work anymore. Today, Google is smart enough to understand what your page is about even when a target keyword is never mentioned on the page.

In fact, by studying 2 million keyword searches, we discovered that almost 75% of the pages that rank in Google’s Top10 don’t have a single mention of the exact‐match keyword in their content.
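
The check behind that kind of statistic is simple enough to sketch. Assuming you already have the URLs of the top‐ranking pages (the URLs below are placeholders), you can fetch each one and test whether the exact‐match keyword appears in its title or body text:

```python
import requests
from bs4 import BeautifulSoup

def mentions_exact_keyword(url, keyword):
    """Return (in_title, in_body) for an exact-match keyword check."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    title = (soup.title.string or "") if soup.title else ""
    body_text = soup.get_text(separator=" ")
    kw = keyword.lower()
    return kw in title.lower(), kw in body_text.lower()

# Placeholder URLs standing in for the real Top 10 of the SERP you analyse.
top_pages = [
    "https://example.com/guest-blogging-guide",
    "https://example.org/how-to-pitch-guest-posts",
]

for url in top_pages:
    in_title, in_body = mentions_exact_keyword(url, "guest writing")
    print(url, "title:", in_title, "body:", in_body)
```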

Check out the SERP for the keyword “guest writing” to see what I’m talking about:

Clearly, Google understands that things like “guest writing,” “guest blogging” and “guest posting” are closely related. So if you perfectly optimize your page for the keyword “guest writing” in accordance with these old‐school on‐page SEO best practices, that won’t give you any competitive edge at all.

How to make your page relevant

Or should I re‐phrase that as “how to make your page MORE relevant than the pages that currently rank in the Top10, so Google will rank your page higher even with fewer backlinks”?

Well, I’m afraid there’s no easy and straightforward way to do it.

In order to make your page perfectly relevant to Google, you first need to understand how Google interprets search queries and matches them to topics and entities that it extracts from web pages.

Sounds complicated, right? That’s because it is.

You can try studying things like latent semantic indexing (LSI), latent Dirichlet allocation (LDA) and other topic modeling algorithms, but most people obviously won’t go that deep.
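
If you’re curious what “going that deep” looks like, here’s a bare‐bones LDA example using scikit‐learn. The three short documents stand in for the text of top‐ranking pages, and it only illustrates the general idea of topic modelling, not anything Google actually runs:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Stand-in documents; in practice you'd use the full text of the pages that
# rank for your keyword.
docs = [
    "chocolate labrador puppies coat colour genetics breeders",
    "labrador retriever training temperament exercise diet",
    "chocolate lab health lifespan grooming shedding",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term_matrix = vectorizer.fit_transform(docs)

# LDA groups words that tend to co-occur into "topics".
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term_matrix)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
```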

And why should they?

Since Google is getting so smart that it almost “reads” the pages of your website, why should you even bother adjusting your pages to meet some complex criteria of its algorithms and not just “write for humans”?

Well, the important word here is “almost.” Despite its impressive complexity, Google is still a machine, and if you understand how it works and can adjust your pages accordingly, you’ll be one step ahead of everyone else.

We’re going to properly cover the topic of “new on‐page SEO” in one of our upcoming articles, so for now I’ll leave you with two quick tips:

  1. Use Ahrefs’ Site Explorer tool to analyse top‐ranking pages for your target keyword and see what other keywords they also rank for. This will give you some clues as to what topics Google thinks they’re relevant to (see the sketch after this list).
  2. Open the Top10 pages that rank for your target keyword and use one very sophisticated tool to extract topics from them – your brain. If you Google around and study all the pages related to your topic, you’ll naturally build a good thesaurus of words and topics that will help Google identify your own page as perfectly relevant.
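
Here’s a small sketch of tip #1 in code form. It assumes you’ve exported an “organic keywords” list for each top‐ranking page into a CSV file with a “Keyword” column (the file names and column name are hypothetical); keywords that several of the top pages share are good hints at the wider topic Google associates with your target keyword:

```python
import csv
from collections import Counter

def load_keywords(path):
    """Read one hypothetical per-page keyword export and return its keywords."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["Keyword"].lower() for row in csv.DictReader(f)}

# One hypothetical export per top-ranking page.
exports = ["top_page_1.csv", "top_page_2.csv", "top_page_3.csv"]

counts = Counter()
for path in exports:
    counts.update(load_keywords(path))

# Keywords shared by two or more of the top pages hint at the wider topic.
for keyword, n in counts.most_common():
    if n >= 2:
        print(f"{keyword}  (ranks for {n} of the top pages)")
```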

User Intent

In most cases “relevance” and “user intent” go hand in hand. But sometimes a perfectly relevant search result might not give the user what they’re looking for.

In this case, Google will always favour “user intent” over “relevance.”

Sounds confusing? I have a great example for you.

If you search for “online survey” from the United States, you get search results that look like this:

Nine of the search results suggest tools for creating online surveys, and one offers work‐from‐home “online survey jobs.”

But what happens when we search for exactly the same keyword from the United Kingdom?

This time only 5 of the search results offer tools for online surveys, while the other 5 offer “online survey jobs.”

This example shows that people might be looking for different things when they search for a general keyword that may have multiple meanings.

And Google has somehow identified that most people in the US are looking for an online survey tool, while a lot of people in the UK are also interested in making some money by participating in online surveys.

But how does Google know what people are looking for?

It hasn’t been officially confirmed, but rumour has it that Google might be looking at things like the following (a toy sketch of these signals follows the list):

  • how long people stay on the page after clicking on it in the search results (a metric known as dwell time);
  • whether people click on any other search results or just settle for the one they picked first;
  • whether people get what they were looking for from their first search, or keep refining it and clicking on more search results.
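
Nobody outside Google has this data, but to make the idea less abstract, here’s the toy sketch promised above: how “dwell time” and “pogo‐sticking” could be computed from a purely hypothetical click log for one search session:

```python
from datetime import datetime

# Hypothetical click-log records for a single search session.
clicks = [
    {"url": "https://example.com/a", "clicked": "2017-05-01 10:00:00", "returned": "2017-05-01 10:00:08"},
    {"url": "https://example.com/b", "clicked": "2017-05-01 10:00:15", "returned": None},  # never came back
]

FMT = "%Y-%m-%d %H:%M:%S"
for c in clicks:
    if c["returned"] is None:
        print(c["url"], "-> user stayed on the page (a good sign)")
        continue
    dwell = datetime.strptime(c["returned"], FMT) - datetime.strptime(c["clicked"], FMT)
    pogo = dwell.total_seconds() < 10  # quick bounce back to the SERP
    print(c["url"], f"-> dwell {dwell.total_seconds():.0f}s, pogo-stick: {pogo}")
```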

And these kinds of things can sometimes outweigh the topical relevancy of a page.

I mean, if more people in the UK started clicking on search results related to “online survey jobs,” Google would see that and drop a bunch of “online survey tools” results from the front page – even if they were perfectly relevant to the keyword “online survey” and had tons of backlinks.