The web is largely an open place. Almost all websites can be reached by search engines: if we search for something in Google, a vast number of results are returned. But what if web designers put something on their website that they do not want Google or other search engines to access? This is where robots.txt comes in. Robots.txt is a plain text file placed on your site that tells search robots which pages you would like them not to visit. It is not HTML. Search engines are not obliged to obey robots.txt, but reputable ones generally honor its requests.
Need for a robots.txt file
The most important reason is to keep entire sections of a website private so that robots do not crawl them. It also helps prevent search engines from indexing certain files. In addition, it can specify the location of the sitemap.
Structure of a Robots.txt File
The structure of a robots.txt file is very simple: it is a list of user agents followed by the files and directories they are disallowed from visiting. Basically, the syntax is as follows:
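A minimal sketch of such a file, using hypothetical /temp and /cgi-bin directories and an example sitemap URL:

```
User-agent: *
Disallow: /temp/
Disallow: /cgi-bin/

Sitemap: https://example.com/sitemap.xml
```

The asterisk means the rules apply to all crawlers, and each Disallow line names one path that crawlers are asked to skip.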
The "User-agent" line names a search engine crawler, and the "Disallow" lines list the files and directories to be excluded from indexing. In addition to "User-agent:" and "Disallow:" entries, you can include comment lines by putting the # sign at the beginning of the line:
# All user agents are prevented from seeing the /temp directory.
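To see how a well-behaved crawler interprets these rules, here is a small sketch using Python's standard urllib.robotparser module; the robots.txt content and the example.com URLs are assumptions for illustration:

```python
from urllib import robotparser

# A sample robots.txt asking all crawlers to skip the /temp directory.
rules = """\
# All user agents are prevented from seeing the /temp directory.
User-agent: *
Disallow: /temp/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A crawler checks each URL against the rules before fetching it.
print(rp.can_fetch("*", "https://example.com/temp/cache.html"))  # False
print(rp.can_fetch("*", "https://example.com/index.html"))       # True
```

This is the same check a polite crawler performs before requesting any page from a site.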
Reasons to use a robots.txt file
It keeps selected content out of search engine crawls.
It controls how reputable robots access the site.
It hides a site that is still under development and is not yet ready to appear in search engines.
It can make content available only to specific search engines.
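As a sketch of the last point, a robots.txt can admit one crawler while asking all others to stay away; Googlebot is used here as an example of a specific crawler:

```
# Googlebot may crawl everything (an empty Disallow allows all paths).
User-agent: Googlebot
Disallow:

# All other crawlers are asked to stay out of the entire site.
User-agent: *
Disallow: /
```

Crawlers pick the most specific User-agent group that matches their name, so Googlebot follows the first group and everyone else falls back to the second.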