How does Google work for SEO? As an SEO consultant in Nice, this is one of the first things I explain to my clients. If you want to improve your website’s search rankings and gain more visibility and traffic, there are heaps of SEO techniques and criteria to take into account.
The thing is, if you don’t know how Google works, none of it will get you anywhere. For me, it is therefore the first question to ask yourself before working on your SEO.
Let’s get straight to the point: for organic search, Google and SEO work in 3 steps:
- 1. Crawling
- 2. Indexing
- 3. Ranking
The 3 steps of Google’s operation
Step 1: crawling
This somewhat intimidating name simply means “visiting” or “scanning”. The first step is therefore an analysis of the website. When a page goes online, Google sends “bots” (also called “spiders”) to visit it. Starting from that page, the bots follow the links they find there to reach the rest of the site’s pages. This is the crawl.
Step 2: indexing
Second step: one by one, the bots check that each page respects the rules of the giant Google. If all goes well, every page is indexed: Google copies the content of each web page and stores it on its servers. Each indexed page is then findable in the search engine. This step is essential to monitor via Google Search Console, because Google has more and more bugs related to page indexing.
Step 3: ranking
Ranking and indexing are very close phases, since Google automatically ranks a website after indexing it. I often separate the two to explain to my clients the importance of the keywords used on their website. Indeed, Google ranks websites according to users’ queries + the site’s SEO level + the competition + a whole bunch of other criteria. The search engine now tests the published page to ultimately assign it a rank.
To summarize how Google works for SEO: it analyzes all of a site’s web pages via its bots, checks that the pages are well suited for the web and respect a whole series of criteria, then ends up ranking each page by relevance. You now know how Google works; as you can see, it’s not very complicated. Let’s now look at what to remember.
How to control and optimize Google’s 3 steps?
Control and optimize crawling
We said that Google’s bots always start from one web page to analyze the entire site. In general, they start from the oldest page, the one Google discovered first. It’s not always the homepage, but it often is. From this starting page, the bots follow the links present on the pages to find the others. Hence the importance of each page receiving at least 1 link from another page (I always recommend a minimum of 3 incoming links); otherwise, how could Google find it? (It can, but not always.)
Google’s crawling is controlled via the robots.txt file. It is a simple text file placed at the root of the website’s folders. This file lets you tell Google which pages you do not want analyzed (and therefore indexed). For this, you use the “Disallow:” directive. For example, if I want to prevent Google from analyzing my quote request page: https://redback-optimisation.fr/en/contact-florian-zorgnotti-seo-freelancer/ I will add the following line to the robots.txt file: Disallow: /en/request-a-quote/ (note the leading slash; without it, the rule matches nothing). Controlling crawling is a technique mainly for large websites that want to save crawl budget.
Beyond robots.txt, a few points help optimize crawling:
- Internal linking: as we have seen, Google uses links to go from page to page, so relevant links must be added to your pages. Above all, these links must not be broken.
- Depth: if it takes too many clicks to reach a page, the bots risk not making it there (users won’t either, for that matter). Each page must be accessible in fewer than 3 clicks.
- Load time: the bots only have a limited time per visit to crawl a site. If a page is too heavy, the bots will take a long time to analyze it and will have less time for the others. They can therefore miss pages, which will then never be indexed.
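To see how such a “Disallow” rule behaves in practice, here is a minimal sketch using Python’s standard-library robots.txt parser. The example.com domain and the exact paths are illustrative assumptions mirroring the article’s example, not real pages:

```python
# Sketch: checking which URLs a robots.txt rule blocks, using only
# Python's standard library. The rule mirrors the article's example;
# the domain and paths are illustrative.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /en/request-a-quote/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The disallowed page is blocked for all crawlers...
print(parser.can_fetch("*", "https://example.com/en/request-a-quote/"))  # False
# ...while the rest of the site remains crawlable.
print(parser.can_fetch("*", "https://example.com/en/blog/"))  # True
```

The same parser can also be pointed at a live file with `set_url()` and `read()`, which is handy for sanity-checking your rules before deploying them.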
Control and optimize indexing
In most cases, a crawled page gets indexed. At this level, you mostly need to ask yourself which pages you wish to keep and which pages you do not want indexed. You might ask: “why shouldn’t some pages be indexed?”
Google gives your website a global score: an average quality score based on the quality of each indexed page. It’s exactly like at school, where the overall average is the result of the averages in each subject. For SEO, each poor-quality page lowers the global score (this is Google’s Panda algorithm). These low-quality pages must then be removed to keep only the best (see the technique). This is the case, for example, with “legal notices” or “T&Cs” pages, which are not relevant to the site. Moreover, no one will type “legal notices + your company” into Google; these pages have no interest for organic traffic. In my SEO audits, I go much further by analyzing each of Google’s quality criteria, which allows me to calculate an average SEO score for each page and either remove these pages from the index or improve them.
Once the page is indexed, it is visible on Google
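The school-average analogy can be made concrete with a toy calculation. The page names and scores below are invented for illustration only; Google’s real quality scoring is not public:

```python
# Toy illustration of the "overall average" analogy: a hypothetical
# site-level quality score as the mean of per-page scores.
# Page names and scores are invented; Google's actual scoring is not public.
pages = {
    "/blog/seo-guide/": 8.5,
    "/services/": 7.0,
    "/legal-notices/": 2.0,   # low-quality page drags the average down
}

site_score = sum(pages.values()) / len(pages)
print(f"With all pages indexed: {site_score:.2f}")      # 5.83

# De-index (noindex) the weak page and recompute the average.
del pages["/legal-notices/"]
site_score = sum(pages.values()) / len(pages)
print(f"After de-indexing the weak page: {site_score:.2f}")  # 7.75
```

Removing one weak page from the index raises the hypothetical site average, which is exactly the reasoning behind de-indexing low-quality pages.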
To control indexing, there are also several techniques:
- The No-index tag: this is the simplest method. You can add it to a page in 30 seconds via Yoast SEO or another SEO plugin if you use WordPress. This tag tells Google not to index the page.
- The canonical tag: across a set of pages, this tag tells Google which page is the original. It avoids duplicate content by keeping duplicate or overly similar pages out of the index. This technique is found mainly on e-commerce sites.
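For illustration, here is roughly what these two tags look like in a page’s head section (the URL is a placeholder, not a real page):

```html
<head>
  <!-- Tells Google not to index this page -->
  <meta name="robots" content="noindex">
  <!-- Points Google to the original version of a duplicated page -->
  <link rel="canonical" href="https://www.example.com/original-page/">
</head>
```

In practice you would use one or the other on a given page: noindex to keep it out of the index entirely, canonical to consolidate duplicates onto the original.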
Control and optimize ranking
It is at this level that the work of an SEO consultant makes complete sense. It is WHERE I spend the most time. To obtain the best ranking, hundreds of SEO criteria, divided into 4 main SEO pillars, must be taken into account.
Frequently Asked Questions
How to index a web page quickly?
Has your site just gone online, or do you want your articles indexed as soon as they are posted? Two solutions are available to you:
- The backlink: if you manage to place a “dofollow” link on a popular, already-indexed site, Google will follow it fairly quickly and index your web page. It is the most complicated and expensive technique.
- Submitting your page to Google: simply go through Google Search Console – URL Inspection. Submit your page to Google, which will come to crawl and index it, generally within the next 3 days.
How to remove a page from Google’s index?
As we have seen, certain low-quality pages should not be indexed. If your pages are already indexed, you can go through Google Search Console – Index – Removals. Then, you must access your “robots.txt” file via FTP and add the URL there, to tell Google’s bots not to come back and crawl this page. Be careful: without this second step, removing the page from the index is useless, because Google will re-index it. The second part is a bit technical, so do not hesitate to contact me so we can discuss it.
How to know the number of indexed pages of my site?
Simply via a Google search: site:domainname. It is the fastest method but not always the most reliable. The best is to consult the “Index” report in Google Search Console and filter the data under “All submitted pages”. 100% of the pages you wanted indexed should be indexed; otherwise, perform an SEO audit.
How does Google work? Conclusion
You will have understood: we can control the 3 steps of Google’s operation:
- Crawling, or Google’s analysis
- Indexing of pages
- Ranking of pages


