Search engines normally work through the following stages:
- Crawling
- Deep crawling (for depth)
- Fresh crawling (for breadth)
- Indexing
- Searching
Search engines store information about websites in the form of web pages retrieved from the World Wide Web. These pages are fetched by a program known as a web crawler or spider: an automated browser that follows every link it encounters. Site owners can exclude pages from crawling with a robots.txt file. Once fetched, the pages are analysed to decide how they should be indexed, and the extracted data is stored in an index database for later use.
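The crawl-every-link behaviour described above can be sketched as a breadth-first traversal. This is a minimal illustration over a toy in-memory "web"; the URLs, page data and the `DISALLOWED` set (standing in for a robots.txt rule) are hypothetical.

```python
from collections import deque

# A toy "web": each URL maps to the links found on that page (hypothetical data).
PAGES = {
    "/home": ["/about", "/blog", "/private"],
    "/about": ["/home"],
    "/blog": ["/blog/post1"],
    "/blog/post1": ["/home"],
    "/private": ["/secret"],  # excluded by the toy robots.txt rule below
    "/secret": [],
}

DISALLOWED = {"/private"}  # stands in for a robots.txt Disallow rule

def crawl(start):
    """Breadth-first crawl: follow every link once, skipping disallowed URLs."""
    seen, queue = set(), deque([start])
    while queue:
        url = queue.popleft()
        if url in seen or url in DISALLOWED:
            continue
        seen.add(url)
        for link in PAGES.get(url, []):
            queue.append(link)
    return seen

print(sorted(crawl("/home")))  # ['/about', '/blog', '/blog/post1', '/home']
```

Note how `/private` is never visited, so `/secret` is never even discovered; a real crawler honours robots.txt in the same way, by skipping disallowed URLs before fetching them.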
Some search engines, such as Google, store all or part of the source page (known as a cache) along with other information about it, while others, such as AltaVista, store every word of every page they gather. The cached page always holds the exact text that was indexed, which is useful when the live page has since been revised and the searched words no longer appear on it. This problem is regarded as a mild form of linkrot, and Google's handling of it improves usability by meeting the user's expectation that the searched keywords will appear on the returned page. This reduces surprise on the user's part, since they anticipate finding the search terms on the page they open. The improved relevance to the search terms makes cached pages very helpful, even though the information in them may no longer be available on the live site.
Whenever a user types a query into a search engine, the engine looks up its index and returns a list of the best-matching pages, each showing a title and a brief description related to the searched term. Many search engines support the Boolean operators AND, OR and NOT to refine the query. There is also an advanced feature called proximity search, which lets the user specify how close together the keywords must appear.
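The index lookup and the Boolean operators map naturally onto set operations over an inverted index. This is a minimal sketch with a hypothetical three-document collection; real engines store positions, rankings and far more metadata.

```python
# Toy document collection (hypothetical data for illustration only).
docs = {
    1: "search engines crawl the web",
    2: "the web is indexed by search engines",
    3: "crawlers follow every link",
}

# Inverted index: each word maps to the set of documents containing it.
index = {}
for doc_id, text in docs.items():
    for word in text.split():
        index.setdefault(word, set()).add(doc_id)

# Boolean operators become set operations:
print(index["search"] & index["web"])      # AND -> {1, 2}
print(index["crawl"] | index["crawlers"])  # OR  -> {1, 3}
print(index["web"] - index["indexed"])     # AND NOT -> {1}

def within(text, w1, w2, k):
    """Proximity search: True if w1 and w2 occur within k words of each other."""
    words = text.split()
    p1 = [i for i, w in enumerate(words) if w == w1]
    p2 = [i for i, w in enumerate(words) if w == w2]
    return any(abs(a - b) <= k for a in p1 for b in p2)

print(within(docs[1], "search", "crawl", 2))  # True: 2 words apart
```

Proximity search needs word positions rather than just document membership, which is why the `within` helper re-scans the text; production indexes store those positions alongside each posting.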
A search engine is only useful if it returns results relevant to the user's keywords. Millions of web pages may contain a given keyword or long-tail phrase, but some will be more specific, more popular or more authoritative than others. How an engine decides which pages best match a query, and in what order to show them, varies from one search engine to another. There is no fixed rule, and the ranking process changes frequently in response to how users search and to newer technologies.
Most search engines are run for profit and therefore display advertisements alongside the search results.
Many search engines rely on proprietary algorithms and closely guarded databases; the best known are Google, Yahoo and Bing. Apart from these, several open-source search engines are also available.