At a basic level, search engines work by performing three core functions:
- Crawling the internet to find new web pages and documents
- Putting that content in a giant index, or database
- Ranking that content on various factors
Let's go into each of these in a bit more detail now.
Crawling: How Does A Search Engine Crawl The Web?
Search engines discover new content by sending out search engine spiders, or crawlers, to find it.
Crawlers are computer programs, or robots, that find new content such as web pages, PDF files, videos, and images by following links on web pages.
These crawlers can visit web pages very quickly, which allows them to discover new websites, pages, and other content.
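To make this concrete, here is a minimal sketch of that crawl loop in Python, using only the standard library. The start URL and page limit are hypothetical, and a real crawler adds far more (robots.txt checks, politeness delays, distributed queues), but the core cycle is the same: fetch a page, extract its links, and queue the ones it hasn't seen.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except (OSError, ValueError):
            continue  # skip unreachable pages and non-HTTP links
        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:
            queue.append(urljoin(url, link))  # resolve relative links
    return seen

# crawl("https://example.com")  # hypothetical start URL
```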
When creating new content, linking to it from existing pages on your site or from another site is a good way to make sure it gets discovered by the search engines.
Crawlers also tend to visit popular websites that publish new content more frequently than smaller, lesser-known websites. Getting a link from a popular website could therefore result in your content being discovered more rapidly.
Creating a sitemap also helps search engines crawl your site. A good sitemap will link to every page on your site.
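As an illustration, here is a minimal sketch of generating an XML sitemap in Python. The page URLs are hypothetical; the `<urlset>`/`<url>`/`<loc>` structure follows the standard sitemap protocol (sitemaps.org).

```python
from xml.etree.ElementTree import Element, SubElement, ElementTree

def build_sitemap(urls, path="sitemap.xml"):
    # The namespace below is required by the sitemap protocol.
    urlset = Element("urlset",
                     xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = SubElement(urlset, "url")
        SubElement(entry, "loc").text = url  # one <loc> per page
    ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)

# Hypothetical pages on your site:
build_sitemap([
    "https://example.com/",
    "https://example.com/about",
    "https://example.com/blog/first-post",
])
```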
Signing up for a Google Search Console account is a good step to take if you want to see more data on the pages Google has crawled. You can also see any crawl errors that have occurred.
A few issues that can prevent pages from being crawled include a poor navigation structure, redirect loops, and server errors.
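Redirect loops are worth a quick illustration: if page A redirects to page B and B redirects back to A, a crawler that follows redirects blindly never reaches real content. Here is a toy sketch over a hypothetical redirect map showing why crawlers cap the hops they will follow.

```python
def resolve(redirects, url, max_hops=10):
    """Follow redirects (URL -> target URL) until reaching a real page."""
    seen = set()
    while url in redirects:
        if url in seen or len(seen) >= max_hops:
            return None  # loop or too many hops: the page can't be crawled
        seen.add(url)
        url = redirects[url]
    return url

# Hypothetical loop: /a redirects to /b, which redirects back to /a
print(resolve({"/a": "/b", "/b": "/a"}, "/a"))  # None
print(resolve({"/old": "/new"}, "/old"))        # /new
```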
In the past, it was popular to “submit” your site to search engines, but this is no longer necessary, as they have become much better at detecting new content published on the web!
Indexing: How Does A Search Engine Read and Store Website Information?
When crawlers discover new pages and content, they store the information in an index.
You can think of an index as a very large database containing all the web pages on the Internet that a search engine has found.
When searchers enter a search query, search engines fetch and analyse relevant content from their index, rather than searching the live web.
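A common way to build such an index is an inverted index, which maps each word to the pages containing it, so lookups stay fast even across billions of pages. Here is a minimal Python sketch with hypothetical pages and text; real indexes store far more, such as word positions and freshness signals.

```python
from collections import defaultdict

pages = {  # hypothetical crawled pages
    "https://example.com/apples": "apples are a sweet red fruit",
    "https://example.com/bananas": "bananas are a sweet yellow fruit",
}

# Map every word to the set of pages that contain it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def search(query):
    """Return the pages containing every word in the query."""
    matches = [index[word] for word in query.split()]
    return set.intersection(*matches) if matches else set()

print(search("sweet fruit"))   # both pages
print(search("yellow fruit"))  # just the bananas page
```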
By default, search engines will crawl and try to index every page on your site that they can find.
However, if you have pages you don’t want searchers to be able to find through search engines, like private members-only pages, you can use a robots meta tag to exclude them.
You may also want to exclude pages that aren’t very useful, like tag and category pages in WordPress.
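The tag itself is a single line of HTML in a page's `<head>`: `<meta name="robots" content="noindex">`. Below is a minimal Python sketch of how a well-behaved crawler might detect it; the members-only page is hypothetical.

```python
from html.parser import HTMLParser

class RobotsMetaChecker(HTMLParser):
    """Detects a noindex directive in a page's robots meta tag."""
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            if "noindex" in (attrs.get("content") or "").lower():
                self.noindex = True

# Hypothetical members-only page:
page = '<html><head><meta name="robots" content="noindex"></head></html>'
checker = RobotsMetaChecker()
checker.feed(page)
print(checker.noindex)  # True: the page should be left out of the index
```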
Ranking: How Does A Search Engine Rank Websites?
Search engines use algorithms to analyse websites and decide how to rank them for various search queries.
These algorithms assign scores to various ranking factors and then order web pages from the highest overall score to the lowest.
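As a toy illustration, here is a weighted-scoring sketch in Python. The factors, weights, and pages are all hypothetical, and real algorithms weigh hundreds of undisclosed signals, but the principle is the same: score each page, then sort.

```python
weights = {"relevance": 0.5, "links": 0.3, "speed": 0.2}  # hypothetical

pages = {  # hypothetical factor scores, each on a 0-1 scale
    "https://example.com/a": {"relevance": 0.9, "links": 0.4, "speed": 0.8},
    "https://example.com/b": {"relevance": 0.7, "links": 0.9, "speed": 0.5},
}

def score(factors):
    """Combine a page's factor scores into one weighted total."""
    return sum(weights[name] * value for name, value in factors.items())

ranking = sorted(pages, key=lambda url: score(pages[url]), reverse=True)
for position, url in enumerate(ranking, start=1):
    print(position, url, round(score(pages[url]), 2))
```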
Search engine algorithms also change over time in an effort to improve search results. Keep in mind that the goal of a search engine is to surface quality content so that users are satisfied with the results and keep coming back.
So what factors do search engines use to determine what content ranks at the top?
We discuss the top ranking factors in the next chapter!