Internal Link Pyramid

Internal links are links that go from one page on a domain to a different page on the same domain, whereas link building is the art of getting other websites to link to your website. Internal links are commonly used in main navigation. Internal link building is simply the age-old art of getting pages crawled and indexed by Google. It is the art of spreading real PageRank about a site and naturally emphasising important content on your site in a way that has a positive, contextual SEO benefit for the ranking of specific keyword phrases in Google SERPs (Search Engine Results Pages).

External backlinks to your site are far more powerful than the internal links within it, but internal links have their uses too.

Traditionally, one of the most important things you could do on a website to highlight your important content was to link to important pages often, especially from important pages on your site (like the homepage, for instance).

Highlighting important pages in your site structure has always been important to Google from a CRAWLING, INDEXING and RANKING point of view. It is also important for website users from a USABILITY, USER EXPERIENCE and CONVERSION RATE perspective.

Most modern CMSs in 2018 take the headache out of getting your pages crawled and indexed. Worrying about your internal navigation structure (unless it is REALLY bad) is probably unnecessary and is not going to cause you major problems from an indexation point of view.

There are other considerations though apart from Google finding your pages.

The important thing is to link to important pages often.

Google has said it doesn’t matter where the links are on your page, Googlebot will see them:

 “So position on a page for internal links is pretty much irrelevant from our point of view.  We crawl, we use these mostly for crawling within a website, for understanding the context of individual pages within a website.  So if it is in the header or the footer or within the primary content, it’s totally more up to you than anything SEO wise that I would worry about.” John Mueller, Google 2017

When it comes to internal linking on your website you should know:

  • where you place links on a page is important for users
  • which pages you link to on your website is important for users
  • how you link to internal pages is important for users
  • why you link to internal pages is important for users

Internal linking is important to users, at least, and evidently it is important to Google, too, and it is not a straightforward challenge to deal with optimally.

 

Take note that Google has lots of patents related to links and anchor text, for instance:

Anchor Text Indexing

 “Using anchor text for links to determine the relevance of the pages they point towards.” 12 Google Link Analysis Methods That Might Have Changed – Bill Slawski

Propagation of Relevance between Linked Pages

 “Assigning relevance of one web page to other web pages could be based upon distance of clicks between the pages and/or certain features in the content of anchor text or URLs. For example, if one page links to another with the word “contact” or the word “about”, and the page being linked to includes an address, that address location might be considered relevant to the page doing that linking.”  12 Google Link Analysis Methods That Might Have Changed – Bill Slawski

Ranking based on ‘changes over time in anchor text’

 “In one embodiment of the invention, the time-varying behavior of anchortext (e.g., the text in which a hyperlink is embedded, typically underlined or otherwise highlighted in a document) associated with a document may be used to score the document. For example, in one embodiment, changes over time in anchor text corresponding to inlinks to a document may be used as an indication that there has been update or even change of focus in the document; a relevancy score may take this change(s) into account.” The Original Historical Data Patent Filing and its Children – Bill Slawski

Ranking based on ‘Unique words, bigrams, phrases in anchor text’

QUOTE: “In one embodiment, the link or web graphs and their behavior over time may be monitored and used for scoring, spam detection or other purposes by a search engine. Naturally developed web graphs typically involve independent decisions. Synthetically generated web graphs – usually indicative of an intent to spam a search engine – are based on coordinated decisions; as such, the profile of growth in anchor words/bigrams/phrases is likely to be relatively spiky in this instance. One reason for such spikiness may be the addition of a large number of identical anchors from many places; another possibility may be addition of deliberately different anchors from a lot of places. With this in mind, in one embodiment of the invention, this information could be monitored and factored into scoring a document by capping the impact of suspect anchors associated with links thereto on the associated document score (a binary decision). In another embodiment, a continuous scale for the likelihood of synthetic generation is used, and a multiplicative factor to scale the score for the document is derived.” The Original Historical Data Patent Filing and its Children – Bill Slawski

Rank assigned to ‘a document is calculated from the ranks of documents citing it’ – ‘Google PageRank’

 “DYK that after 18 years we’re still using* PageRank (and 100s of other signals) in ranking?” Gary Illyes from Google – Search Engine Roundtable 2017

We can only presume Google still uses PageRank (or something like it) in its ordering of web pages.

 “A method assigns importance ranks to nodes in a linked database, such as any database of documents containing citations, the world wide web or any other hypermedia database. The rank assigned to a document is calculated from the ranks of documents citing it. In addition, the rank of a document is calculated from a constant representing the probability that a browser through the database will randomly jump to the document. The method is particularly useful in enhancing the performance of search engine results for hypermedia databases, such as the world wide web, whose documents have a large variation in quality.” The Original PageRank Patent Application – Bill Slawski

“A high pagerank (a signal usually calculated for regular web pages) is an indicator of high quality and, thus, can be applied to blog documents as a positive indication of the quality of the blog documents.”  Positive and Negative Quality Ranking Factors from Google’s Blog Search (Patent Application) – Bill Slawski

Evidently, Google does not throw the baby out with the bathwater. If Google still uses PageRank, then perhaps it still uses tons of other legacy methods of ranking websites that are, over time, obfuscated to protect the secret sauce.
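To make the PageRank description above concrete, here is a minimal, hypothetical sketch of the classic calculation in Python. The toy link graph, the 0.85 damping factor (the ‘random jump’ constant from the patent) and the iteration count are illustrative assumptions, not anything Google has published about its current system.

```python
# Minimal PageRank sketch: a page's rank is calculated from the ranks of the
# pages citing it, plus a constant chance that a surfer jumps to it at random.
# The link graph below is a purely illustrative toy example.
links = {
    "home":     ["products", "about", "contact"],
    "products": ["home", "widget-a", "widget-b"],
    "widget-a": ["products"],
    "widget-b": ["products"],
    "about":    ["home", "contact"],
    "contact":  ["home"],
}

damping = 0.85                                  # probability the surfer follows a link
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}     # start every page with equal rank

for _ in range(50):                             # iterate until the scores settle
    new_rank = {}
    for page in pages:
        # Rank received from every page that links to this one,
        # shared equally among that page's outgoing links.
        inbound = sum(
            rank[src] / len(outlinks)
            for src, outlinks in links.items()
            if page in outlinks
        )
        new_rank[page] = (1 - damping) / len(pages) + damping * inbound
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page:10s} {score:.4f}")
```

Run it and the well-linked-to pages (‘home’ and ‘products’ in this toy graph) end up with the highest scores, which is the intuition behind linking to your important pages often.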

A ‘measure of quality’ based on ‘the number’ of links:

 “A system can determine a measure of quality for a particular web resource based on the number of other resources that link to the particular web resource and the amount of traffic the resource receives. For example, a ranking process may rank a first web page that has a large number of other web pages that link to the first web page higher than a web page having a smaller number of linking web pages.” Did the Groundhog Update Just Take Place at Google? Bill Slawski

A ‘measure of quality’ based on ‘traffic received by use of those links’

 “However, a resource may be linked to by a large number of other resources, while receiving little traffic from the links. For example, an entity may attempt to game the ranking process by including a link to the resource on another web page. This large number of links can skew the ranking of the resources. To prevent such skew, the system can evaluate the “mismatch” between the number of linking resources and the traffic generated to the resource from the linking resources. If a resource is linked to by a number of resources that is disproportionate with respect to the traffic received by use of those links, that resource may be demoted in the ranking process.” Did the Groundhog Update Just Take Place at Google? Bill Slawski

A ‘measure of quality’ based on link ‘selection quality score’

 “The selection quality score may be higher for a selection that results in a long dwell time (e.g., greater than a threshold time period) than the selection quality score for a selection that results in a short dwell time (e.g., less than a threshold time period). As automatically generated link selections are often of a short duration, considering the dwell time in determining the seed score can account for these false link selections.” Did the Groundhog Update Just Take Place at Google? Bill Slawski

Google certainly gives some weight to anchor text, including the anchor text it finds on your own site.

Think of Links Like Lasers

The ‘links-are-like-lasers’ analogy gives beginners a simpler understanding of Google PageRank:

  1. Links Are Lasers
  2. Linking To A Page Heats Up A Page
  3. Pages Get Hot Or Cold Depending On Number & Quality Of The Links To It
  4. Cold Pages Don’t Rank For Anything
  5. Hot Pages Rank!

There was a time when you could structure a certain page specifically to rank using nothing but links – and while you can still follow that practice, Google will pick the page on your site that is MOST RELEVANT TO THE QUERY and best meets USER EXPECTATIONS & USER INTENT.

What this means is that you can choose to link all you want to any particular page, but if Google has a problem with the page you are trying to make rank, or thinks there is a better page on your site (with a better user satisfaction score, for instance), it will choose to rank that other page before the ‘well-linked-to’ page.

In the past, Google would flip-flop between pages on your site, when there were multiple pages on the site targeting the same term, and rankings could fluctuate wildly if you cannibalised your keywords in this way.

Google is presently much more interested in the end-user quality of the page ranking, and the trust and quality of the actual website itself, than in the inbound links pointing to a single page or a clever internal keyword-rich architecture that holds content ‘up’.

It is much more important in 2018 for a page to meet the user intent (as Google has defined it) of a specific key phrase, and those intents can be complex and vary from keyword phrase to keyword phrase.

Internal link building works best when it is helping Google identify canonical pages to rank on your site.

As John Mueller points out:

 “we do use internal links to better understand the context of content of your sites” John Mueller, Google 2015

…but if you are putting complicated site-structure strategy before high-quality single-page content that can stand on its own, you are probably going to struggle to rank in Google organic listings in the medium to long-term.

So the point is that a keyword-rich anchor text system on your site IS useful, and is a ranking signal, but don’t keyword stuff it.

What this means is that you should concentrate on introducing as many unique and exactly relevant long-tail keyword phrases into your internal link profile as you can. This will certainly produce better results for you than having one page on your site with only one anchor text phrase in its profile.
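As a rough illustration of that idea, here is a hypothetical Python sketch that tallies the internal anchor text pointing at each page from an exported crawl (for example, a CSV of source URL, destination URL and anchor text from a crawler such as Screaming Frog; the filename and column names here are assumptions). Pages whose inlink profile is dominated by a single repeated phrase stand out immediately.

```python
import csv
from collections import Counter, defaultdict

# Assumed input: a CSV export of internal links with columns
# "source", "destination" and "anchor" (adjust to your crawler's output).
anchors_by_page = defaultdict(Counter)

with open("internal_links.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        anchors_by_page[row["destination"]][row["anchor"].strip().lower()] += 1

for page, anchors in anchors_by_page.items():
    total = sum(anchors.values())
    top_anchor, top_count = anchors.most_common(1)[0]
    # Flag pages where one phrase accounts for almost all internal links.
    if total >= 10 and top_count / total > 0.8:
        print(f"{page}: '{top_anchor}' used in {top_count} of {total} internal links")
```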

How you proceed is going to be very much dictated by the size and complexity of your site, and by how much time you are willing to spend on this ranking signal.

There is no single best way to build internal links on your site, but there are some efficiencies to be had, especially if your site is of good quality in the first place. There are some really bad ways to build your site for search engines. For example, do not build your website with frames.

Internal Links Help Google Discover Other Pages On Your Website

Just because Google can find your pages easier in 2018 doesn’t mean you should neglect to build Googlebot a coherent architecture with which it can crawl and find all the pages on your website.

Pinging Google blog search via RSS (still my favourite way of getting blog posts into Google results fast) and XML sitemaps may help Google discover your pages, find updated content and include them in search results, but they still aren’t the best way of helping Google determine which of your pages to KEEP INDEXED or EMPHASISE or RANK or HELP OTHER PAGES TO RANK (e.g. it will not help Google work out the relative importance of a page compared to other pages on a site, or on the web).

While XML sitemaps go some way to address this, prioritisation in sitemaps does NOT affect how your pages are compared to pages on other sites – it only lets the search engines know which pages you deem most important on your own site. I certainly wouldn’t ever just rely on XML sitemaps like that… the old ways work just as they always have, and often the old advice is still the best, especially for SEO.

XML sitemaps are INCLUSIVE, not EXCLUSIVE, in that Google will spider ANY URL it finds on your website – and your website structure can produce a LOT more URLs than you have actual products or pages in your XML sitemap (something else Google doesn’t like).
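One quick way to see that mismatch for yourself is to compare the URLs in your XML sitemap against the URLs a crawl of your internal links actually turns up. The sketch below assumes you already have a plain-text list of crawled URLs (exported from whichever crawler you use); the filenames are placeholders.

```python
import xml.etree.ElementTree as ET

# Parse the XML sitemap (standard sitemaps.org namespace assumed).
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
sitemap_urls = {
    loc.text.strip()
    for loc in ET.parse("sitemap.xml").findall(".//sm:loc", ns)
}

# One crawled URL per line, exported from your crawler of choice.
with open("crawled_urls.txt", encoding="utf-8") as f:
    crawled_urls = {line.strip() for line in f if line.strip()}

print(f"URLs in sitemap: {len(sitemap_urls)}, URLs found by crawling: {len(crawled_urls)}")
print(f"Crawlable but NOT in the sitemap: {len(crawled_urls - sitemap_urls)}")
print(f"In the sitemap but NOT reachable via internal links: {len(sitemap_urls - crawled_urls)}")
```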

Keeping your pages in Google and getting them to rank has long been assured by simple internal linking practices.

Traditionally, every page needed to be linked to from other pages for PageRank (and other ranking benefits) to flow to it – that is traditional, and I think accepted, theory on the question of link equity.

Common reasons why web pages might not be reachable, and thus, may not be indexed:

Links in Submission-Required Forms

Forms can include elements as basic as a drop-down menu or elements as complex as a full-blown survey. In either case, search spiders will not attempt to “submit” forms and thus, any content or links that would be accessible via a form are invisible to the engines.

Links Only Accessible Through Internal Search Boxes

Spiders will not attempt to perform searches to find content, and thus, it’s estimated that millions of pages are hidden behind completely inaccessible internal search box walls.

Links in Unparseable JavaScript

Links built using JavaScript may either be uncrawlable or devalued in weight depending on their implementation. For this reason, it is recommended that standard HTML links be used instead of JavaScript-based links on any page where search-engine-referred traffic is important.
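The sketch below illustrates the difference: a basic crawler that parses raw HTML only sees standard <a href> elements, so a ‘link’ created purely by JavaScript is invisible to it. Python’s built-in HTML parser stands in for a simple spider here; a real search engine crawler is far more sophisticated, but the principle holds.

```python
from html.parser import HTMLParser

sample_html = """
<a href="/standard-link">A normal HTML link (crawlable)</a>
<span onclick="window.location='/javascript-link'">A JavaScript 'link'</span>
<script>document.write('<a href="/written-by-js">Generated by JS</a>');</script>
"""

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags found in the raw HTML source."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

extractor = LinkExtractor()
extractor.feed(sample_html)
print(extractor.links)   # ['/standard-link'] - the JavaScript is never executed
```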

Links in Flash, Java, or Other Plug-Ins

Any links embedded inside Flash, Java applets, and other plug-ins are usually inaccessible to search engines.

Links Pointing to Pages Blocked by the Meta Robots Tag or Robots.txt

The Meta Robots tag and the robots.txt file both allow a site owner to restrict spider access to a page.

Links on Pages with Excessive Links

The search engines all have a rough crawl limit of 150 links per page before they may stop spidering additional pages linked to from the original page. This limit is somewhat flexible, and particularly important pages may have upwards of 200 or even 250 links followed, but in general practice, it’s wise to limit the number of links on any given page to 150 or risk losing the ability to have additional pages crawled.
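If you want to sanity-check your own templates against that kind of threshold, a hypothetical sketch like the one below counts the anchor elements on a page. The 150 figure is just the rough guideline quoted above, not a documented Google limit, and the URL is a placeholder.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCounter(HTMLParser):
    """Count <a href> elements on a single page."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.count += 1

url = "https://www.example.com/"   # placeholder page to check
html = urlopen(url).read().decode("utf-8", errors="ignore")

counter = LinkCounter()
counter.feed(html)

limit = 150                        # rough guideline quoted above
status = "over" if counter.count > limit else "within"
print(f"{url} has {counter.count} links ({status} the {limit}-link guideline)")
```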

Links in Frames or I-Frames

Technically, links in both frames and I-Frames are crawlable, but both present structural issues for the engines in terms of organization and following. Only advanced users with a good technical understanding of how search engines index and follow links in frames should use these elements in combination with internal linking.

By avoiding these pitfalls, a webmaster can have clean, spiderable HTML links that will allow the spiders easy access to their content pages. Links can have additional attributes applied to them, but the engines ignore nearly all of these, with the important exception of the rel="nofollow" attribute.

Want to get a quick glimpse into your site’s indexation? Use a tool like Moz Pro, Link Explorer, or Screaming Frog to run a site crawl. Then, compare the number of pages the crawl turned up to the number of pages listed when you run a site:search on Google.

Rel=”nofollow” can be used with the following syntax:

<a href="/" rel="nofollow">nofollow this link</a>

By adding the rel="nofollow" attribute to the link tag, the webmaster is telling the search engines that they do not want this link to be interpreted as a normal, juice passing, “editorial vote.” Nofollow came about as a method to help stop automated blog comment, guestbook, and link injection spam, but has morphed over time into a way of telling the engines to discount any link value that would ordinarily be passed. Links tagged with nofollow are interpreted slightly differently by each of the engines.
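If you want to audit which of a page’s links carry the attribute, a small sketch along these lines will separate them (again using Python’s built-in parser; the sample HTML is illustrative).

```python
from html.parser import HTMLParser

class NofollowAudit(HTMLParser):
    """Split a page's links into followed and nofollowed lists."""
    def __init__(self):
        super().__init__()
        self.followed, self.nofollowed = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if href is None:
            return
        rel_values = (attrs.get("rel") or "").lower().split()
        (self.nofollowed if "nofollow" in rel_values else self.followed).append(href)

audit = NofollowAudit()
audit.feed('<a href="/about">About</a> <a href="/" rel="nofollow">nofollow this link</a>')
print("followed:", audit.followed)       # ['/about']
print("nofollowed:", audit.nofollowed)   # ['/']
```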
