URLs with feeds do not have canonical pages

#810128
Viewing 5 replies - 1 through 5 (of 5 total)
  • Hello,

    Thank you so much for getting in touch.

    Feed pages don’t need a canonical tag since they are XML pages. We suggest not submitting your feed URLs to Google Search Console’s URL Inspection tool, since that tool is intended for pages meant to be shown to visitors, and XML feeds are not.

    Hope that helps, and please do not hesitate to let us know if you need our assistance with anything else.

    Unfortunately, Search Console has already crawled the feeds. What should I do in Search Console? Is a URL Inspection test on the problematic feed paths sufficient?

    Hello,

    If you don’t want Google to crawl those URLs, then you can add the following disallow rule (under the `User-agent: *` group) to your robots.txt file.

    Disallow: */feed/
    

    Here’s a guide on how to edit your robots.txt using Rank Math:
    https://rankmath.com/kb/how-to-edit-robots-txt-with-rank-math/

    Once done, allow Google some time to re-crawl your site so the changes are reflected.
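
    For anyone unsure what `Disallow: */feed/` actually blocks: Google matches robots.txt rules against the URL path, treating `*` as a wildcard and `$` as an end-of-URL anchor. Here is a minimal sketch of that matching logic (an illustration only, not Google's actual implementation — note that Python's built-in `urllib.robotparser` does not support these wildcards):

    ```python
    import re

    def rule_matches(pattern: str, path: str) -> bool:
        """Check whether a Google-style robots.txt pattern covers a URL path."""
        # '$' at the end of a pattern anchors it to the end of the URL.
        anchored = pattern.endswith("$")
        if anchored:
            pattern = pattern[:-1]
        # '*' matches any sequence of characters; everything else is literal.
        regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
        regex = "^" + regex + ("$" if anchored else "")
        return re.search(regex, path) is not None

    print(rule_matches("*/feed/", "/my-post/feed/"))  # True: feed URL is blocked
    print(rule_matches("*/feed/", "/my-post/"))       # False: normal page still crawlable
    ```

    So the rule blocks any path containing `/feed/` (post feeds, comment feeds, category feeds) while leaving regular content pages untouched.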

    Looking forward to helping you.

    What happens if I don’t add this rule? It could make things worse, as I would not be able to validate the error in Search Console because the URL would be blocked by robots.txt. I’m not convinced by this solution.

    Hello,

    Feed URLs are not created for humans but for RSS feed crawlers and readers. They are stripped-down XML versions of your actual content pages.

    They are not meant to be indexed, and you likely don’t want them indexed, as Google would most likely not show them to users anyway.

    The disallow rule will prevent Google from attempting to crawl the feed URLs. This saves crawl budget, and after the next few crawls, the warnings should be removed from GSC.

    Hope that helps, and please do not hesitate to let us know if you need our assistance with anything else.

