Duplicates are pages on a site with identical or nearly identical content. Their presence can negatively affect how the site interacts with search engines.
If your site serves duplicate content and targets the Yandex search audience, you may face the following negative consequences as a webmaster:
- Slower indexing of the pages you need. If a site contains many identical pages, the robot crawls each of them separately, so it takes longer for the crawler to reach the pages that actually matter.
- Difficulty interpreting web analytics data. The search engine automatically selects one page from each group of duplicates to show in results, and that selection may change as the search database is updated. As a result, the URL of a page in search results can change, which can affect its performance (for example, whether users recognize the link) and make it harder to collect consistent statistics.
In short: when identical pages exist on a site, they are grouped as duplicates and only one of them is shown for a search query. The URL chosen for display can change due to a large number of factors, and those changes can complicate analytics and affect search results.
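If you want to spot exact duplicates on your own site before the search engine does, one simple approach is to fetch candidate URLs and compare hashes of their content. The sketch below is a minimal illustration in Python; the URL list and the use of the `requests` library are assumptions for the example, not part of any Yandex tooling:

```python
import hashlib

import requests


def content_hash(url: str) -> str:
    """Fetch a page and return a hash of its body for duplicate detection."""
    body = requests.get(url, timeout=10).content
    return hashlib.sha256(body).hexdigest()


# Hypothetical URLs; in practice you would take them from your sitemap.
urls = [
    "https://example.com/page",
    "https://example.com/page/",        # trailing-slash variant
    "https://example.com/page?ref=nav", # tracking-parameter variant
]

seen: dict[str, str] = {}
for url in urls:
    digest = content_hash(url)
    if digest in seen:
        print(f"Duplicate content: {url} matches {seen[digest]}")
    else:
        seen[digest] = url
```

Note that this only catches byte-for-byte duplicates; "almost identical" pages (e.g. differing only in a timestamp) would need fuzzier comparison of the extracted text.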
Duplicates can appear on a site either through automatic page generation or through incorrect site settings.
For example, incorrectly configured relative links can produce URLs that do not physically exist yet return the same content as the site's real pages. Or the site may not be configured to return a 404 HTTP response code for missing pages: visitors see a "stub" page with an error message, but the server responds with a success code, so the page remains available for indexing.
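One quick way to test for the second misconfiguration is to request a URL that cannot exist on your site and inspect the HTTP status code. The following is a hypothetical check using Python's `requests` library; the probe path is made up for illustration:

```python
import requests


def check_soft_404(base_url: str) -> None:
    """Request a URL that should not exist and report the status code.

    A correctly configured site returns 404 here; a 200 response means
    the error "stub" page is indexable and can generate duplicates.
    """
    # Hypothetical probe path; any path that cannot exist on the site works.
    probe = base_url.rstrip("/") + "/definitely-not-a-real-page-12345"
    response = requests.get(probe, allow_redirects=True, timeout=10)
    if response.status_code == 404:
        print(f"OK: {probe} returns 404")
    else:
        print(f"Possible soft 404: {probe} returns {response.status_code}")


if __name__ == "__main__":
    check_soft_404("https://example.com")  # replace with your own domain
```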
You can read Yandex’s official documentation on duplicate pages/content here.