Now, recognizing this depth and complexity is necessary if you want to rank well in the long run, but it doesn’t prevent you from suffering at the hands of trivial issues. Whether because you don’t know to check for them, or because you’re busy focusing on broad on-page SEO, it’s perfectly possible to completely miss something that’s working behind the scenes to undermine all the other work you’re doing on your website.
As such, even if you’re very confident about your general SEO abilities, you can’t ever take the basics for granted. The tiniest mistake at the foundational level will cause problems with everything built on top of it.
In this piece, we’re going to take a look at 5 common technical SEO mistakes that you might actually be making right now, because they don’t generally cause alarms or lead to catastrophic failure. They just sap your efforts and leave you with worse rankings than your website deserves. Let’s expose them so you can know what to look for.
1: Overlooking metadata
When you create pages and write content for your website, you can easily forget about metadata entirely. After all, it doesn’t seem all that important — if it mattered so much, why would it be so commonly automatically generated? But metadata does matter, and depending on the field you’re ignoring, you could be losing out on a lot of ranking potential.
The title of a page, for instance, plays a significant role in how it gets ranked. That field is supposed to clearly and distinctly state what the page is about, and that information can be very important for a search crawler if it’s finding it hard to figure things out based on the content alone. It also typically determines the page name that appears in SERPs, affecting click-through rates alongside the meta description.
And then you have alt text for images. If you have an image-heavy website with very minimal text, a lack of appropriate alt text for the images can make it very difficult for a search bot to tell what the site contains — and Google isn’t in the habit of serving mysterious result pages. Following metadata best practices won’t win you great rankings, but it is essential nonetheless.
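Auditing these fields doesn’t require special tooling. As a rough illustration, here’s a minimal Python sketch using only the standard library’s `html.parser` to flag a missing meta description and images without alt text; the page markup is hypothetical.

```python
from html.parser import HTMLParser

class MetadataAudit(HTMLParser):
    """Collects basic metadata signals: <title>, meta description, and img alt text."""
    def __init__(self):
        super().__init__()
        self.title = None
        self.meta_description = None
        self.images_missing_alt = 0
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content")
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

# Hypothetical page markup for illustration
html = """
<html><head><title>Leather Coats</title></head>
<body><img src="coat.jpg"><img src="hat.jpg" alt="Wool hat"></body></html>
"""
audit = MetadataAudit()
audit.feed(html)
print(audit.title)               # Leather Coats
print(audit.meta_description)    # None, so the description is missing
print(audit.images_missing_alt)  # 1 image lacks alt text
```

Running something like this across your pages makes it obvious at a glance which ones are shipping without the basics filled in.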
2: Having missing resources
If you have a very small website with a handful of pages, you’re very likely to notice if one of the pages goes down. But what if you have a larger website with numerous pages beyond the core navigation? Do you check each and every page on a regular basis to confirm that all is well? Or do you tend to assume that pages working right now will continue to work fine indefinitely?
Something that can easily happen to a company with a medium/large website is the steady accumulation of missing resources. Linked-to external resources go down, CMS updates leave certain pages non-functional, technological developments cause features to fail (think about how much of the web ran on Flash before the rise of HTML5). And every hole in the fabric of your content is a black mark against its reputation, evidence for Google’s crawlers that the quality of your website is on a decline.
To avoid suffering a slide in the rankings as a result, be sure to schedule frequent sitewide reviews. Use a site crawler to check every indexed page for errors and identify any other issues that may have come up recently. You can then use that information to fix things, replacing any removed resources with fresh links and updating pages to meet current standards.
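The core of such a review is simple: gather the URLs a page links to, then ask the server whether each one still responds. Here’s a hedged sketch of both halves in standard-library Python; the markup is hypothetical, and a real crawler would add rate limiting, retries, and redirect handling.

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

class LinkExtractor(HTMLParser):
    """Gathers href/src URLs so each can be checked for availability."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        for key in ("href", "src"):
            if attrs.get(key):
                self.links.append(attrs[key])

def check_url(url, timeout=10):
    """Returns the HTTP status code, or None if the resource is unreachable."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return resp.status
    except HTTPError as err:
        return err.code   # e.g. 404 for a missing resource
    except URLError:
        return None       # DNS failure, refused connection, etc.

extractor = LinkExtractor()
extractor.feed('<a href="/coats">Coats</a><img src="/img/coat.jpg">')
print(extractor.links)  # ['/coats', '/img/coat.jpg']
```

Anything that comes back as a 404 (or doesn’t come back at all) is a candidate for fixing or replacing on the next review pass.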
3: Using a confusing URL hierarchy
There’s a decent chance that the average website owner doesn’t even think about URL structures — they probably add pages through their CMS using the suggested structures, and take it as granted that they’re appropriately categorized and positioned. And in some cases, this works out fine. It all depends on how the CMS is configured and what its default settings are.
But in many cases (particularly involving old and/or custom content management systems), URL structures don’t make all that much sense. The structure of a URL for any given page of a site should make it clear to readers and search bots alike where that page stands in relation to the rest of the site. Here’s an example:
clothingsite.com/products/coats/leather/leather-coat-1.html. It’s easy to parse that kind of URL: the page sits in a leather category, which sits inside a coats category, which sits inside a products category.
Now imagine that the URL were actually something like clothingsite.com/pr/5/lc1.html. Neither a crawler nor a person would be inclined to guess that “pr” stands for “products”, or that “lc” stands for “leather coat”. Sloppy URL structuring damages rankings and makes addresses harder to understand for users, lowering their likelihood of returning through direct entry.
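One way to sanity-check your structure is to see whether a readable category trail falls out of the URL path on its own. This small Python sketch, using the article’s two example URLs, shows the contrast:

```python
from urllib.parse import urlparse

def breadcrumb_from_url(url):
    """Derives a human-readable category trail from a URL's path segments."""
    segments = [s for s in urlparse(url).path.split("/") if s]
    # Drop a trailing file name like leather-coat-1.html
    if segments and "." in segments[-1]:
        segments = segments[:-1]
    return " > ".join(s.replace("-", " ").title() for s in segments)

print(breadcrumb_from_url(
    "https://clothingsite.com/products/coats/leather/leather-coat-1.html"))
# Products > Coats > Leather

print(breadcrumb_from_url("https://clothingsite.com/pr/5/lc1.html"))
# Pr > 5 (meaningless to visitors and crawlers alike)
```

If the derived trail reads like a sensible breadcrumb, your hierarchy is doing its job; if it reads like the second output, it isn’t.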
4: Stuffing content with keywords
Keyword optimization is incredibly important for SEO. The meta keywords tag may have lost its ranking significance a long time ago, but on-page keywords are still essential, and it’s difficult to foresee a scenario in which they won’t contribute to rankings. Knowing this, website owners everywhere commit time and effort to researching relevant keywords, making sure they include the main terms that their intended visitors are searching for.
The problem is that there’s a fairly thin line between optimizing and over-optimizing, and it’s easy to unintentionally end up doing the latter because there’s no red warning light that will appear to let you know you’ve gone too far. Unless you get a full-blown penalty from Google, you’ll be unable to tell whether your keywords are coming across as unnatural and thus losing you ranking power.
The best solution for the problem of keyword stuffing is to focus on writing good copy that sounds natural (Joost de Valk of Yoast SEO fame stated as much in his appearance on Marketing Speak earlier this year). Don’t “stuff” keywords at all — use them when they’re justified by the context. You can still sweep your content to add in keywords, but stick to adding them only when you think they’d provide the intended reader with valuable clarity.
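If you want a rough numerical sanity check alongside the “does it read naturally?” test, you can measure what share of a page’s words a keyword accounts for. This is a crude sketch (word splitting and any sensible threshold are assumptions, not a Google-published metric):

```python
import re

def keyword_density(text, keyword):
    """Rough share of words accounted for by a keyword: a sanity check, not a target."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w == keyword.lower())
    return hits / len(words) if words else 0.0

copy = ("Leather coats keep you warm in style. Our leather coats are "
        "hand-stitched, and every leather coat ships with free returns.")
print(f"{keyword_density(copy, 'leather'):.1%}")  # 14.3%, which reads as stuffed
```

A figure that high would be hard to achieve with copy that sounds natural, which is exactly the point: write for the reader first, then glance at numbers like this only to catch accidental over-repetition.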
5: Not implementing canonical tags
Duplicate content doesn’t help users, and is often abused to pad page lengths, so it makes complete sense that Google judges it so harshly. Fill up your site with duplicate content and you’ll inevitably see your rankings suffer massively — you won’t receive a technical penalty, but the perceived quality and usefulness of your site will be vastly reduced.
That said, it isn’t always possible to ensure that two given pages are sufficiently distinct to avoid content duplication (particularly when you have multiple versions of the same page through different addresses), which is why you have access to the rel=canonical tag to indicate the definitive, primary version of a page from which all other versions derive.
If you have several pages with identical (or mostly duplicated) content, you should select the main page and use it as the canonical URL. That way, Google’s crawlers will know that you’re not trying to mislead them into thinking you have more content than you really do. You should still avoid duplicate content wherever possible, naturally, but canonical tags are worth including in all circumstances just so you’ll be protected if there’s any duplication.
There’s a decent chance that your CMS is either already generating canonical tags or is capable of it, so you might not need to make any major changes — but you absolutely need to check so you can know for sure.
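Checking is straightforward: a canonical declaration is just a `<link rel="canonical" href="...">` element in the page’s head. Here’s a minimal Python sketch (standard library only, hypothetical markup) that reports whether a page declares one:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Looks for <link rel="canonical" href="..."> in a page's markup."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

finder = CanonicalFinder()
finder.feed('<head><link rel="canonical" '
            'href="https://clothingsite.com/products/coats/leather/"></head>')
print(finder.canonical)  # the declared canonical URL, or None if absent
```

Run it against the duplicate variants of a page and confirm they all point at the same main URL; if `canonical` comes back as None, your CMS isn’t generating the tags after all.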
For different reasons, these 5 technical SEO mistakes are easy for SEO beginners and experts alike to miss. It ultimately comes down to being vigilant and keeping an eye on the foundation of your website as you develop and expand it. Get the essentials right, and you’ll benefit more extensively from the higher-level SEO work you carry out. If you need help managing your local SEO, contact us for a free consultation.