SOLVED: Must-Dos to prevent Rankmath SEO contradictions

  • This post is purely for the benefit of all Rankmath users; I put enormous time into it, so I trust it gets posted:

    Originally, this should have been added to this post: https://support.rankmath.com/ticket/rankmath-inconsistency-discovered/
    which unfortunately was closed before the issue at hand was resolved.

    So I then opened the post: https://support.rankmath.com/ticket/todd-kindly-dont-close-tickets-as-resolved-lol/
    but adding this solution there would mean it gets lost, since that unfortunate post title doesn’t suggest users look there…

    So to help Rankmath users and Google: Must-Dos to prevent Rankmath SEO contradictions
    ————————————————————————————–

    Issue: Under “Sitemap Settings” > “Exclude Posts”, RM users can add all the post/page/custom-post IDs that are to be excluded from the sitemap index. Which is GREAT, but don’t rely on that alone.

    The SEO contradiction then originates from the fact that RM sets <meta name="robots" content="index,follow"/>
    for each and every post, page, and custom post type – or, if WordPress sets it(?), at least RM does not correct it after you have excluded posts from the sitemap.

    There exist (at least) 3 places that affect indexing, and obviously for SEO benefits all three (or more) must be consistent:
    – Sitemap
    – individual post/page’s meta tag, e.g. <meta name="robots" content="index,follow"/>
    – robots.txt

    So when you have excluded posts/pages via RM’s “Sitemap Settings” > “Exclude Posts” feature, make sure those posts/pages DON’T get the meta tag: <meta name="robots" content="index,follow"/>
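    A quick way to spot-check a page you excluded is to look at its HTML for that tag. A minimal sketch in Python (the helper name and regex are mine, not part of Rank Math; feed it the page source, e.g. from “View Source” or a crawler):

```python
import re

def robots_meta(html):
    """Return the content of the first <meta name="robots"> tag, or None."""
    m = re.search(
        r'<meta\s+name=["\']robots["\']\s+content=["\']([^"\']+)["\']',
        html, re.IGNORECASE)
    return m.group(1) if m else None

# A page that still carries the contradictory tag:
page = '<head><meta name="robots" content="index,follow"/></head>'
print(robots_meta(page))  # index,follow -> contradicts a sitemap exclusion
```

    If the page is excluded from the sitemap but this prints something without “noindex”, you have the contradiction described above.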

    How?
    – If you have only a couple of posts/pages to be excluded from indexing, the easiest way is to go to the Edit page and, down under “Rankmath SEO”, in the “Advanced” tab, under “Robots Meta”, tick the relevant boxes as desired.
    – If you have thousands though, like Brian Dean or any other authority site?

    Here’s how to solve this with phpMyAdmin, accessing the database directly. Doing it in PHP would be more elegant, but for that you’d need to ask a programmer.
    The meta_key we are looking for is: rank_math_robots
    This key stores a *serialized value* (PHP’s serialize() format) in meta_value. That isn’t a problem in our case, because we will simply write the correct serialized string into the meta_value of every post ID we need to correct.
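    If you want to build or double-check such a string yourself, the format is easy to reproduce. A sketch in Python (the helper name is mine; it only covers the zero-indexed array of ASCII strings that Rank Math stores – PHP’s s:N counts bytes, which equals len() for these values):

```python
def php_serialize_list(values):
    """Serialize a list of ASCII strings the way PHP's serialize()
    does for a zero-indexed array, e.g. the rank_math_robots value."""
    parts = "".join(
        f'i:{i};s:{len(v)}:"{v}";' for i, v in enumerate(values))
    return f"a:{len(values)}:{{{parts}}}"

print(php_serialize_list(["noindex", "noarchive", "nosnippet"]))
# a:3:{i:0;s:7:"noindex";i:1;s:9:"noarchive";i:2;s:9:"nosnippet";}
```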

    First, search whether any such key exists already – simple, using phpMyAdmin’s “search” feature.
    For posts and pages where the meta_key does not yet exist, we INSERT a new row (its meta_id gets numbered up automatically).
    For posts and pages where the meta_key exists and the meta_value just needs to be corrected, we UPDATE the existing row for the relevant post_id.

    The SQL I entered in phpMyAdmin to INSERT new rows:
    INSERT INTO wp_postmeta (post_id, meta_key, meta_value) VALUES (306, 'rank_math_robots', 'a:3:{i:0;s:7:"noindex";i:1;s:9:"noarchive";i:2;s:9:"nosnippet";}');
    (306 is an example post_id)

    Or, where we want all five tickboxes:
    INSERT INTO wp_postmeta (post_id, meta_key, meta_value) VALUES (306, 'rank_math_robots', 'a:5:{i:0;s:7:"noindex";i:1;s:8:"nofollow";i:2;s:9:"noarchive";i:3;s:12:"noimageindex";i:4;s:9:"nosnippet";}');

    I built all these statements programmatically, using Notepad++, for all posts/pages/custom post types to be excluded.
    Again there will be more elegant solutions for mysql commands too, but if I were a programmer I wouldn’t have asked here for a solution in the first place. 😉
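    Instead of hand-building the statements in Notepad++, a few lines of script can generate them – a sketch in Python (the ID list is an example; substitute your own “Exclude Posts” IDs and table prefix):

```python
# Excluded post IDs -- replace with your own list from "Exclude Posts".
excluded_ids = [306, 307, 412]

# The serialized robots value to write (noindex, noarchive, nosnippet):
ROBOTS = 'a:3:{i:0;s:7:"noindex";i:1;s:9:"noarchive";i:2;s:9:"nosnippet";}'

statements = [
    "INSERT INTO wp_postmeta (post_id, meta_key, meta_value) "
    f"VALUES ({pid}, 'rank_math_robots', '{ROBOTS}');"
    for pid in excluded_ids
]
print("\n".join(statements))  # paste the result into phpMyAdmin's SQL tab
```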

    —-
    Of interest not only to Todd: the guru Brian Dean showed here: https://backlinko.com/hub/seo/robots-txt
    that he EXCLUDES 1,490 pages
    and wants indexed only his prime 124 pages, which Google then indexed.
    He explained the rationale behind this strategy in numerous posts on his site.

    You may also want to read: “What Happened When Proven Deleted 10k Dead Weight Pages”(!!), it’s here: https://backlinko.com/seo-checklist

    In short: YES, there is a strong need to exclude vast numbers of pages on almost any website from getting indexed, and YES, any such exclusions must be consistent throughout the site.

    With the solution posted here they will be; with the CURRENT Rankmath version alone they won’t.

    (By the way: Rankmath’s support pages here at https://support.rankmath.com/ticket come up with “About 10,200 results”, and tons of those pages aren’t any good for indexing… so they all water down RM’s overall site ranking and DA… 😉)

Viewing 1 replies (of 1 total)
  • As expected, the software changed some characters in the SQL commands; sorry, I don’t know how to prevent that.

    Either way, the point is that, for example, menu pages and sub-pages and sub-sub-pages (lol, if you have such detailed menus too)
    should NOT be indexed:
    – they do not aim to attract visitors: no lengthy quality content, just a menu
    – meta tags etc. for menu pages are suboptimal for user search queries anyway
    – and random Google searches should find the ultimate gems that are IN THE MENU, not the menu itself.

    Example: Dog Food Menu page: https://mygermanshepherd.org/food/
    which itself (obviously) is a sub-menu of dog care: https://mygermanshepherd.org/care/
    neither should be indexed.
    And I just checked: all such pages indeed now have the new meta tag combination: <meta name="robots" content="noindex,follow,noarchive,nosnippet"/>
    It worked! 🙂

    Now this is consistent with RM’s post exclusions under: “Sitemap Settings” > “Exclude Posts”
    and since our robots.txt doesn’t make exclusions, now all *should be* consistent. (fingers crossed)

    RM is extremely powerful, and I am glad to have found it, which is why I didn’t give up. IMHO RM is way better than Yoast and AIO SEO and SEO Framework and SEOPress, but that’s just my personal opinion after reviewing/using them. (non-affiliated with any)

