Many of us may have imagined that when we search for a word or phrase on Google, tiny virtual workers go off hunting for the answer, sifting through hundreds of millions of results to find the one that best matches our query.
Of course, reality is not quite what we imagine: search engines rely on an index (a database) built in advance by the engine, so that results can be displayed in moments.
The better the results, and the faster they appear on screen, the more satisfied users are and the more likely they are to keep using that search engine, be it Google, Bing, Yahoo or one of the thousands of others that exist. That's right: few people consider (or even know) that there are thousands of different search engines, some of them quite original, by the way. Yet when we want to search for something on the internet, the vast majority of the world's population turns to Google. And that is because it is fast, precise and minimalist.
It must be understood that for a search engine to show us results on screen, it needs its own gigantic archive of information, kept ready so that the crawler robots can pull results from it. Did you think Google surfs the entire internet every time you need to find something? Nah. The engines do not comb the vast virtual universe for each query; instead, they keep their own archive of the world's websites in one place, organized so that the work can be done in fractions of a second per search.
In addition, each search engine, Google, Bing or Yahoo among others, has its own methods and rules for gathering and prioritizing web content. Different engines do not share the same methods or incentives; each implements its own rules so that searches return the most accurate results possible.
You can try searching for the same thing (in this example we searched for "search results") on Google and on Yahoo, and you will notice the difference. But although each search engine has its own rules, the reality is that all of them carry out the famous process called "indexing". Indexing means that each engine systematically and constantly scans the entire online universe, and the database it builds allows results to be displayed automatically the moment a query is entered.
In other words, much like in the Matrix movies, each search engine has "spiders", known as robots or crawlers. These digital robots continually scan the web to find websites and index new or changed content. They also feed on the links found on each web page to discover new sites, and so on, ad infinitum.
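The link-following loop described above can be sketched in a few lines. This is a toy, assuming an in-memory "web" (a dict mapping hypothetical URLs to page text and outgoing links) instead of real HTTP fetching; real crawlers also respect robots.txt, rate limits and much more.

```python
from collections import deque

# Toy "web": URL -> (page text, list of outgoing links).
# The URLs and texts are made up for illustration.
WEB = {
    "a.example": ("bicycle sales and repairs", ["b.example", "c.example"]),
    "b.example": ("used bicycle sales", ["a.example"]),
    "c.example": ("cycling news", ["b.example", "d.example"]),
    "d.example": ("bicycle touring guides", []),
}

def crawl_and_index(start_url):
    """Breadth-first crawl: visit pages, index their words, follow links."""
    index = {}                      # word -> set of URLs containing it
    seen = {start_url}
    frontier = deque([start_url])
    while frontier:
        url = frontier.popleft()
        text, links = WEB[url]
        for word in text.split():
            index.setdefault(word, set()).add(url)
        for link in links:          # newly discovered links feed the frontier
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return index

index = crawl_and_index("a.example")
print(sorted(index["bicycle"]))  # every crawled page that mentions "bicycle"
```

Note that the crawler reaches `d.example` only because `c.example` links to it: this is exactly the "feeding on links to discover new sites" behavior from the paragraph above.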
So, through these robots or crawlers, the major search engines like Google, Bing and Yahoo are constantly, minute by minute, second by second, at this very moment, indexing hundreds of millions of web pages. The search engine, the boss of this robot army, then filters the information. To start with, it considers the different areas of the same site in order to determine the theme of a website and how to prioritize its content.
Keep in mind that search robots scan every page of your website looking for clues about the topics the content covers, and they also perform deep readings of the tags, descriptions and instructions you have written. It is therefore important that every portal has proper tags and completed descriptions (read: meta descriptions).
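To see what "reading the tags and descriptions" amounts to, here is a minimal sketch using Python's standard `html.parser` module to pull out a page's `<title>` and meta description, roughly the way a crawler might. The sample page is invented for illustration.

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collect the <title> text and the meta description of a page."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# A made-up page with the tags a crawler looks for.
page = """<html><head><title>Bicycle Sales</title>
<meta name="description" content="New and used bicycles at fair prices.">
</head><body>...</body></html>"""

extractor = MetaExtractor()
extractor.feed(page)
print(extractor.title)        # Bicycle Sales
print(extractor.description)  # New and used bicycles at fair prices.
```

A page without a title or meta description gives the crawler nothing here, which is exactly why the article insists you fill them in.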
Another key point is to make your articles original rather than based on similar ones. Exclusivity helps us earn external links, that is, links from other sites or networks to ours. The more incoming links we have, the more influence or authority we gain in the eyes of Google and the other search engines. Specifically, each incoming link counts as a vote for a particular piece of content on our site, which benefits the site as a whole, though to a lesser degree.
It is worth noting here that the links we receive carry different weights of relevance: if a prestigious newspaper quotes us, the SEO influence is undoubtedly greater than that of a link from a small blog.
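The "votes with different weights" idea can be shown in a couple of lines. The sites and authority numbers below are entirely made up for illustration; real engines derive these weights from each linking site's own link profile (in the spirit of PageRank), not from a hand-written table.

```python
# Hypothetical inbound links to our site, each tagged with the
# linking site's (invented) authority score.
inbound_links = [
    {"from": "bignewspaper.example", "authority": 0.9},
    {"from": "smallblog.example",    "authority": 0.1},
    {"from": "forum.example",        "authority": 0.3},
]

def link_score(links):
    """Each link is a vote, but votes are weighted by the voter's authority."""
    return sum(link["authority"] for link in links)

score = link_score(inbound_links)
print(score)  # roughly 1.3: the newspaper link dominates the total
```

With these toy numbers, the single newspaper link is worth nine small-blog links, which is the whole point of the paragraph above.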
Way back, before the days of Google, search engines relied solely on the content of the indexed website and on keyword density (how many times our article said, for example, "bicycle sales") to rank sites. This primitive form of search was exploited through SEO tactics, better known as "black hat", where site managers stuffed their web pages with keywords so they would rank at the top of search results for a specific term (again, e.g. "bicycle sales"). Obviously, things have changed since those years, and now you need to post quality, original content to climb the mountain. In fact, overstuffing an article with repetitions of a phrase can be penalized (that is, a search engine de-indexes us, and our site no longer appears in any results).
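Keyword density, the metric those old engines leaned on, is easy to compute: the fraction of a page's words taken up by repetitions of a phrase. The function and the two sample texts below are illustrative, not any engine's actual formula.

```python
def keyword_density(text, phrase):
    """Fraction of the text's words occupied by occurrences of `phrase`."""
    words = text.lower().split()
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    hits = sum(
        1 for i in range(len(words) - n + 1)
        if words[i:i + n] == phrase_words
    )
    return hits * n / len(words) if words else 0.0

natural = "we review bicycles and publish honest buying guides every week"
stuffed = "bicycle sales bicycle sales best bicycle sales cheap bicycle sales"

print(keyword_density(natural, "bicycle sales"))  # 0.0
print(keyword_density(stuffed, "bicycle sales"))  # 0.8
```

A density like the stuffed example's, where most of the page is the same phrase repeated, is precisely the pattern modern engines flag and penalize.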