Search engine optimization is the process of improving your website’s search engine visibility. It encompasses a wide array of ranking factors that include on-page elements, inbound links from other websites and social signals that indicate a site’s popularity. Under each of those three areas is a boatload of other sub-elements that you have to fine-tune for best results. It’s a lot of ground to cover, but hey, you have to start somewhere. The most logical kickoff point is always the one you have direct control over: your own webpages.
A sound website architecture, smooth search engine bot crawls and proper formatting are the hallmarks of good on-page SEO. Search engine spiders have to be able to drill down through your site’s pages easily so they can crawl and index even sections that lie deep within your navigation paths. The thing is, this is easier said than done. Unfamiliarity with on-page SEO best practices and system-generated glitches can create crawl errors that severely impact your site’s search ranking performance. In mild cases, this disrupts the movement of search engine bots and the circulation of PageRank in your website. At worst, it can leave valuable pages unindexed and unsearchable by your target audience, depriving you of precious business leads.
But what exactly bogs down on-page SEO? I’ve listed five of the most common SEO-killing on-page errors that can plague crawl processes and user experiences. For each item, I talk about what it is, how it affects your website and what you can do to detect and address it. Here goes:
- Broken Links – Search engine bots and human users navigate the Web by going from link to link. Bots follow each hyperlink they find, crawl the page it points to, cache and index its contents, then move on to the links within that page. The process repeats itself ad infinitum, allowing search engines to map out the open Web over a long enough period of time.
But aside from crawl facilitation and indexing, links serve another important purpose: they pass relevance and authority signals from page to page so search engines can rank the best pages at the top of their results. As you may guess, crawling, indexing and authority-passing are all disrupted if a page’s link to another page is not working properly. When this happens to your site, you have yourself a broken link that you have to address promptly.
Human error and system-generated glitches can create broken links in your website. It’s up to you to find and correct them. To do this easily, you can use tools like Xenu’s Link Sleuth or the Screaming Frog spidering tool. These are desktop-based apps that you can install and run. They’ll crawl your website and generate reports of all the links they find. From there, you can see what’s broken and what’s working. You can then go to your CMS and fix the links or have your web developer do it for you.
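If you’d rather script a quick spot-check yourself, the idea behind these tools is simple: fetch a page, collect its links and test each one’s HTTP status. Here’s a minimal sketch in Python using the requests and beautifulsoup4 libraries; the start URL is a placeholder you’d swap for a page on your own site, and a real crawler would also follow links recursively.

```python
# Minimal broken-link check: fetch one page, then test every link it contains.
# Assumes requests and beautifulsoup4 are installed; the URL below is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

START_URL = "https://www.example.com/"  # replace with a page from your own site

response = requests.get(START_URL, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

for anchor in soup.find_all("a", href=True):
    link = urljoin(START_URL, anchor["href"])  # resolve relative URLs
    if not link.startswith("http"):
        continue  # skip mailto:, javascript:, anchors, etc.
    try:
        status = requests.head(link, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None  # the request failed entirely
    if status is None or status >= 400:
        print(f"BROKEN ({status}): {link}")
```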
- Illogical Site Navigation Paths – It’s in the best interest of every search engine to rank websites that provide great user experiences at the top of their SERPs. A big part of good user experience in a website is giving visitors the ability to find what they’re looking for quickly and easily. For that, navigation paths should be set up in a clear and logical manner.
Grouping pages together with topical relevance in mind is the first step towards achieving this. If you’re running a site that sells a lot of products, grouping them into categories and subcategories will help users narrow down their product searches easily. For example, if you have an ecommerce site that sells men’s shoes, you can categorize your items by brand, style, material and so on. That means under the Nike category, you’ll have basketball shoes, golf shoes, tennis shoes, cross-trainers, etc. as subcategories. The subcategories can be divided further into more specific segments based on product attributes. This also helps users and search engines form a better idea of which pages are related to each other and which ones are most relevant to the queries users enter.
Just remember that categories and subcategories should link internally to each other in a cascading, pyramid-like pattern. That is, you can think of your home page as the pyramid’s tip, the categories as the next layer underneath, then the subcategories and the product pages as the succeeding layers. This setup helps bots establish a sense of importance and hierarchy within your site’s pages.
The opposite, of course, is what you don’t want on your website. If pages are scattered all over your domain without a sense of order, people will find your content confusing and search engines will struggle to figure out what your site is really about.
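To make the pyramid idea concrete, here’s a hypothetical sketch in Python of how the men’s-shoes example could map onto a category tree and the URL paths that fall out of it. The brand names, subcategories and URL scheme are purely illustrative, not a prescription.

```python
# Hypothetical category tree for the men's-shoes example, with URL paths
# derived from the hierarchy. Names and paths are illustrative only.
site = {
    "nike": ["basketball-shoes", "golf-shoes", "tennis-shoes", "cross-trainers"],
    "adidas": ["running-shoes", "soccer-cleats"],
}

print("/")                            # home page: the tip of the pyramid
for brand, styles in site.items():
    print(f"/{brand}/")               # category page links down to its subcategories
    for style in styles:
        print(f"/{brand}/{style}/")   # subcategory page links down to product pages
```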
- 404 Not Found Pages – If you delete or move a page, you can’t expect search engines to deduce your intentions unless you give them some kind of signal to guide them. When a page goes off the web, the server returns a 404 Not Found response to any user or search engine that tries to access it. Search engines keep track of pages they’ve indexed even after those pages go offline, and when they can’t find them in succeeding routine crawls, they chalk them up as crawl errors.
This is bad news whether you intentionally took the page down or some error is preventing it from displaying properly. Each webpage carries a certain amount of PageRank, and when it goes offline, whatever link juice it passes to other pages is cut off. The page itself also suffers a decline in search engine visibility, depriving you of chances to be found by people looking for the 404 page’s content.
When moving or deleting pages, make sure search engines know what you’re doing by using 301 redirects to show bots where the page’s replacement can be found. If a page was deleted with no replacement, it can be redirected to a higher-level category page. This ensures that whatever link juice the page receives externally or internally is preserved and passed on to other pages. This hastens the ranking process of replacement pages and improves your site’s overall quality in the eyes of human users and search engine algorithms.
Again, you can use XENU to detect 404 pages in your site. Alternatively, you can use Google Webmaster Tools to get a list of crawl errors that the search engine has detected.
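Most CMS platforms and web servers handle redirects through plugins or configuration files, but to show the mechanics of the 301 approach described above, here’s a minimal sketch in Python using the Flask framework. The old and new paths are hypothetical, and this is just one of several ways to issue a permanent redirect.

```python
# Minimal 301 redirect sketch using Flask. The paths are hypothetical; on a real
# site you would normally set this up in your CMS or web server configuration.
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/discontinued-product/")
def old_product_page():
    # Send bots and visitors to the parent category page with a permanent (301)
    # redirect, so the old URL's link equity is passed along instead of hitting a 404.
    return redirect("/mens-shoes/", code=301)

if __name__ == "__main__":
    app.run()
```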
- No XML Sitemap – An XML sitemap is a file on your website that lists all the pages you’ve published within your domain. Each time a new page goes live, a properly configured sitemap is updated automatically and alerts search engines that there’s something new to crawl and index. While it’s not absolutely essential for getting good SEO rankings, you’ll want to have it there just to make sure bots don’t miss out on anything that you want them to see.
Your web developer can help you set up an XML sitemap. Alternatively, most CMS platforms have plugins that will create an XML sitemap automatically for you. Be sure to include the sitemap’s setup in your development to-do list so you don’t deprive your site of its full indexing potential.
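For reference, this is roughly what the document itself looks like. The sketch below builds a tiny two-URL sitemap in Python with the standard library; the URLs and dates are placeholders, and in practice your CMS plugin would generate and update the file for you.

```python
# Build a tiny XML sitemap with the standard library and print it.
# URLs and dates are placeholders; a CMS plugin would normally generate this file.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
urlset = ET.Element("urlset", xmlns=NS)

pages = [
    ("https://www.example.com/", "2015-01-01"),
    ("https://www.example.com/mens-shoes/", "2015-01-15"),
]

for loc, lastmod in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc        # the page's full URL
    ET.SubElement(url, "lastmod").text = lastmod  # when it was last modified

# Saving this output as sitemap.xml at the site root makes it easy for bots to find.
print(ET.tostring(urlset, encoding="unicode"))
```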
- Inappropriate Crawl Restrictions – There are several legitimate reasons why you wouldn’t want some pages on your site to be crawled and indexed by search engines. Your pages could be under construction, you might want to keep them private or you may be holding them back for a time-specific launch. Whatever the case may be, there are several ways to restrict bot access, the most common being the robots.txt file: a text file where you can specify which pages are off limits. Alternatively, you can add meta robots tags such as “noindex” or “noarchive” to a page’s HTML to tell spiders to stay away.
However, this can work against you if restrictions are inadvertently applied to pages that you want to be found in search engines. It’s not uncommon for developers to unwittingly leave restrictions active even after a website’s construction is over. If you notice that some or all of your site’s pages are not being indexed by search engines, examine robots.txt and the <head> section of each affected web page’s source. See if anything is blocking spider access and have your developer undo it if you spot anything odd.
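If you’d rather check this programmatically than read the files by eye, here’s a minimal sketch in Python that tests whether robots.txt blocks a given URL and whether the page carries a noindex meta robots tag. The URL is a placeholder, and the check only covers these two common restriction methods, not server-level blocks or X-Robots-Tag headers.

```python
# Check two common crawl restrictions for one URL: a robots.txt disallow rule
# and a "noindex" meta robots tag. The URL below is a placeholder.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser

PAGE_URL = "https://www.example.com/some-page/"  # replace with a page you expect to rank

# 1. Does robots.txt allow crawling of this URL?
parser = RobotFileParser()
parser.set_url(urljoin(PAGE_URL, "/robots.txt"))
parser.read()
if not parser.can_fetch("*", PAGE_URL):
    print("robots.txt is blocking this URL")

# 2. Does the page's <head> carry a noindex directive?
html = requests.get(PAGE_URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")
meta = soup.find("meta", attrs={"name": "robots"})
if meta and "noindex" in meta.get("content", "").lower():
    print("meta robots tag contains noindex")
```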
Being aware of these five technical errors should help you stay away from most quirks that can put a damper on your search engine optimization campaign. When everything’s a-okay within your turf, you can proceed to the off-page optimization portion to start building up your site’s popularity. Good luck!
About the Author
Glen Dimaandal is a blogger, a search marketer and the Founder/CEO of SEO Company Philippines.