Robots.txt is a text file located in the site's root directory that tells search engine crawlers and spiders which pages and files on your website you do or don't want them to visit. Usually, site owners strive to be noticed by search engines, but there are cases when that's not desirable: for instance, if you store sensitive data, or if you want to save bandwidth by excluding heavy, image-laden pages from crawling.
When a crawler accesses a site, it first requests a file named '/robots.txt'. If such a file is found, the crawler checks it for instructions on how the website should be crawled.
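For illustration, a minimal robots.txt might look like the sketch below. The folder names are hypothetical examples; replace them with the paths you actually want to keep crawlers out of:

```
# Applies to all crawlers
User-agent: *
# Keep crawlers out of these (example) folders
Disallow: /private/
Disallow: /tmp/
```

An empty `Disallow:` line (or no robots.txt at all) means the whole site may be crawled.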
NOTE: There can be only one robots.txt file per website. A robots.txt file for an addon domain needs to be placed in that domain's document root.
Robots.txt and SEO
The default robots.txt file in some CMS versions is set up to exclude your images folder. This issue doesn’t occur in the newest CMS versions, but the older versions need to be checked.
This exclusion means your images will not be indexed or included in Google Image Search — which is something you usually do want, as image indexing can improve your search visibility.
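To check for this, open your robots.txt and look for a line excluding the images folder. A hedged sketch of the fix, assuming the folder is called /images/ (your CMS may use a different path):

```
User-agent: *
# Remove or override a "Disallow: /images/" line so images can be indexed
Allow: /images/
```

Note that `Allow` is honored by Google and most major crawlers, though it is not part of the original robots.txt convention.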
Robots.txt for WordPress
WordPress creates a virtual robots.txt file once you publish your first post. However, if you already have a real robots.txt file on your server, WordPress won't add a virtual one.
A virtual robots.txt doesn't exist as a file on the server; you can only access it via a link such as: http://www.yoursite.com/robots.txt
By default, it allows Google's Mediabot, disallows a number of known spambots, and disallows some standard WordPress folders and files.
If you haven't created a real robots.txt file yet, create one with any text editor and upload it to the root directory of your server via FTP.
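As a starting point, a simple WordPress robots.txt could look like the sketch below. The sitemap URL is a placeholder; adjust it (or drop the line) to match your site:

```
User-agent: *
# Keep crawlers out of the WordPress admin area
Disallow: /wp-admin/
# But allow the AJAX endpoint many themes and plugins rely on
Allow: /wp-admin/admin-ajax.php

# Hypothetical sitemap location - replace with your own
Sitemap: http://www.yoursite.com/sitemap.xml
```

Once uploaded, you can verify it by visiting http://www.yoursite.com/robots.txt in a browser.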