Title tags—technically called title elements—define the title of a document. Title tags are often used on search engine results pages (SERPs) to display preview snippets for a given page, and are important both for SEO and social sharing.
The title element of a web page is meant to be an accurate and concise description of a page’s content. This element is critical to both user experience and search engine optimization. It creates value in three specific areas: relevancy, browsing, and display in search results.
Primary Keyword – Secondary Keyword | Brand Name
Optimal Length for Search Engines
Google typically displays the first 50-60 characters of a title tag, or as many characters as will fit into a 512-pixel display. If you keep your titles under 55 characters, you can expect at least 95% of your titles to display properly. Keep in mind that search engines may choose to display a different title than what you provide in your HTML. Titles in search results may be rewritten to match your brand, the user query, or other considerations.
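As a rough sketch, a helper can flag titles likely to be truncated. This approximates by character count only (Google’s real cutoff is pixel-based), and the function name is invented for the example:

```python
# Approximate SERP truncation by character count. The 55-character limit
# follows the guideline above; Google's actual cutoff is pixel-based.
def title_display_check(title, max_chars=55):
    """Return the title roughly as it might appear in a search snippet."""
    if len(title) <= max_chars:
        return title
    # Cut at the last full word that fits, then append an ellipsis.
    return title[:max_chars].rsplit(" ", 1)[0] + " ..."
```

Running every title on a site through a check like this is a quick way to spot pages that need shorter titles.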
Why Title Tags are Important for SEO
A title tag is the main text that describes an online document. Title elements have long been considered one of the most important on-page SEO elements (the most important being overall content), and appear in three key places: browsers, search engine results pages, and external websites.
1. Browsers
Title tags often show up both at the top of a browser’s chrome and in tabs.
Title Tag in Browser
2. Search Engine Results Pages
When you use keywords in the title tag, search engines will highlight them in the search results if a user has performed a query that includes those keywords. This gives your page greater visibility, and generally means you’ll get a higher click-through rate.
Title tag on search result
3. External Websites
Many external websites—especially social media sites—will use the title tag of a web page as the anchor text for links to it.
Title tag on external site
Optimizing Your Titles
Because title tags are such an important part of search engine optimization, implementing best practices for title tags makes for a terrific low-effort, high-impact SEO task. Here are critical recommendations for optimizing title tags for search engine and usability goals:
Be Mindful of Length
As stated above, search engines will truncate titles in search results that exceed a certain length. For Google, this length is usually between 50 and 60 characters, or 512 pixels wide. If the title is too long, engines will show an ellipsis (“…”) to indicate that a title tag has been cut off. That said, length is not a hard-and-fast rule. Longer titles often work better for social sharing, and many SEOs believe search engines may use the keywords in your title tag for ranking purposes even if those keywords get cut off in search results. In the end, it’s usually better to write a great title that converts and gets clicks than it is to obsess over length.
Place Important Keywords Close to the Front of the Title Tag
According to Moz’s testing and experience, the closer to the start of the title tag a keyword is, the more helpful it will be for ranking—and the more likely a user will be to click it in search results.
Many SEO firms recommend using the brand name at the end of a title tag instead, and there are times when this can be a better approach. The differentiating factor is the strength and awareness of the brand in the target market. If a brand is well-known enough to make a difference in click-through rates in search results, the brand name should be first. If the brand is less known or relevant than the keyword, the keyword should be first.
Consider Readability and Emotional Impact
Creating a compelling title tag will pull in more visits from the search results. It’s vital to think about the entire user experience when you’re creating your title tags, in addition to optimization and keyword usage. The title tag is a new visitor’s first interaction with your brand when they find it in a search result; it should convey the most positive message possible.
Meta descriptions are HTML attributes that provide concise explanations of the contents of web pages. Meta descriptions are commonly used on search engine result pages (SERPs) to display preview snippets for a given page.
<meta name="description" content="This is an example of a meta description. This will often show up in search results.">
Optimal Length for Search Engines
Roughly 155 Characters
What is a Meta Description?
Meta description tags, while not important to search engine rankings, are extremely important in gaining user click-through from SERPs. These short paragraphs are a webmaster’s opportunity to advertise content to searchers and to let them know whether the given page contains the information they’re looking for.
The meta description should employ the keywords intelligently, but also create a compelling description that a searcher will want to click. Direct relevance to the page and uniqueness between each page’s meta description are key. The description should optimally be between 150 and 160 characters.
<meta name="description" content="Here is a description of the applicable page">
SEO Best Practices
Write Compelling Ad Copy
The meta description tag serves the function of advertising copy. It draws readers to a website from the SERP and thus, is an extremely important part of search marketing. Crafting a readable, compelling description using important keywords can improve the click-through rate for a given webpage. To maximize click-through rates on search engine result pages, it’s important to note that Google and other search engines bold keywords in the description when they match search queries.
Meta descriptions can be any length, but search engines generally truncate snippets longer than 160 characters. It is best to keep meta descriptions between 150 and 160 characters.
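As an illustrative sketch, a small helper can emit the tag and warn when the copy falls outside that range. The function name is made up for the example, and it assumes the text needs no HTML escaping:

```python
# Build a description meta tag and flag copy outside the 150-160 character
# range suggested above. Illustrative only; assumes no HTML escaping needed.
def build_meta_description(text):
    warnings = []
    if len(text) > 160:
        warnings.append("over 160 characters - likely truncated in SERPs")
    elif len(text) < 150:
        warnings.append("under 150 characters - consider expanding")
    tag = '<meta name="description" content="%s">' % text
    return tag, warnings
```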
Avoid Duplicate Meta Description Tags
As with title tags, it is important that meta descriptions on each page be unique. One way to combat duplicate meta descriptions is to create a dynamic and programmatic way to make unique meta descriptions for automated pages.
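One hedged sketch of that dynamic approach, using a template with invented product fields:

```python
# Generate a unique description per product page from a template.
# The template wording and field names are invented for illustration.
TEMPLATE = "Buy {name} by {brand}. {summary} Ships free on orders over $50."

def describe(product):
    description = TEMPLATE.format(**product)
    # Trim on a word boundary if the result would be truncated in SERPs.
    if len(description) > 160:
        description = description[:157].rsplit(" ", 1)[0] + "..."
    return description
```

Because each page’s fields differ, every page gets a distinct description without hand-writing thousands of tags.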
Not a Google Ranking Factor
Google announced in September of 2009 that neither meta descriptions nor meta keywords factor into Google’s ranking algorithms for web search. Google uses meta descriptions to return results when searchers use advanced search operators to match meta tag content, as well as to pull preview snippets on search result pages, but it’s important to note that meta descriptions do not influence Google’s ranking algorithms for normal web search.
Quotes Cut Off Descriptions
Any time double quotation marks are used in a meta description, Google cuts the description off at the quote. To prevent truncation, it’s best to remove quotation marks and other non-alphanumeric characters from meta descriptions. If quotation marks are important in your meta description, change them to single quotes rather than double quotes.
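A minimal sketch of that swap (HTML-escaping with `&quot;` is another common option):

```python
# Replace double quotes with single quotes so the description isn't cut
# off at the first quotation mark, per the advice above.
def sanitize_description(text):
    return text.replace('"', "'")
```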
Sometimes it is Okay to Not Write Meta Descriptions
Although conventional logic would hold that it’s universally wiser to write a good meta description rather than let the engines scrape a given web page, this isn’t always the case. As a general rule, if the page is targeting one to three heavily searched terms or phrases, write a meta description that appeals to users performing those searches. If the page is targeting long-tail traffic (three or more keywords)—for example, with hundreds of articles or blog entries, or even a huge product catalog—it can sometimes be wiser to let the engines extract the relevant text themselves. The reason is simple: when engines pull a snippet, they always display the keywords and surrounding phrases that the user has searched for. If a webmaster forces a meta description, it can detract from the relevance the engines build naturally. In some cases the engines will overrule the meta description anyway, but a webmaster cannot always rely on them to use the more relevant text in the SERP.
When choosing whether or not to add a meta description, also consider that social sharing sites like Facebook commonly use a page’s description tag when the page is shared on their sites. Without the meta description tag, social sharing sites may just use the first text they can find. Depending on the first text on your page, this might not create a good user experience for users encountering your content via social sharing.
Meta Keywords: Should we use them or not?
Meta keywords are no longer used by most search engines, especially Google. I suggest removing all meta keyword data, because the only thing it does is allow competitors to see what keywords you are trying to target.
Most popular search engines no longer use meta keywords for ranking, so it is best not to include them.
Actually, Google does not say much about meta keywords, only that it does not use them as a ranking factor. Bing, however, looks at meta keywords as an attempt to manipulate its SERPs. Also, using meta keywords gives your competitors free keyword research on your site. I would not recommend using them.
This is my first YOUmoz post, and I would greatly appreciate your feedback. I will be actively responding to comments, and I know that we will get a great discussion going. Please comment with any critique, questions, or random thoughts that you may have. If you would rather skip the statistics, feel free to jump ahead to the discussion section.
Google’s PageRank is, indeed, slightly correlated with their rankings (as well as with the rankings of other major search engines). However, other page-level metrics are dramatically better, including link counts from Yahoo and Page Authority.
I was intrigued by the study, and vowed to investigate the metric using my own data set. Because all of my data are at the root domain level, I chose to focus on the homepage PageRank of each domain.
I averaged three months of data (November, 2009 – January, 2010), collected on the last day of each month for 1,316 root domains. Using Quantcast Media Planner, I selected websites that had chosen to make their traffic data public. To be included, websites had to have an average of at least 100,000 unique US visitors during this time period.
The domains selected for this study do not approximate a random sample of websites. Because of the way in which they were selected, they will bias in favor of sites with many US visitors, and against sites with very few. There may also be differences between Quantified sites with public traffic data, and non-Quantified websites. For example, Quantified domains are probably more likely to include advertising on their pages than sites without the Quantcast script.
PageRank (PR) can only take eleven values (0-10). It is an ordinal variable, meaning that the difference between PR = 8 and PR = 9 is not the same as the difference between PR = 3 and PR = 4. Like mozRank, it probably exists on a log scale.
The median and mode PageRank among websites in this study were PR = 6, with a minimum of PR = 0, and a maximum of PR = 9. However, only ten websites had PR < 3, and only seven had PR = 9.
Using Spearman’s correlation coefficient, I compared PageRank to several SEOmoz root domain metrics. Domain mozRank (linearized) was strongly correlated with PR (r = 0.62)*. This correlation was somewhat smaller than the 0.71 that SEOmoz reported in May, 2009. The disparity may be due to differences in methodology; SEOmoz used Pearson’s correlation coefficient, and did not linearize mozRank. Additionally, PR data in my study were probably measured over a smaller range of values, potentially weakening the observed dependencies.
*All reported correlations are significant at p < .01.
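For readers unfamiliar with the statistic, here is a minimal illustration of Spearman’s rank correlation. This simple formula assumes no tied ranks; a real analysis (including this study) would use a stats package that handles ties:

```python
# Spearman's rho via the classic formula 1 - 6*sum(d^2) / (n*(n^2 - 1)).
# Valid only when neither variable contains tied values.
def spearman_rho(xs, ys):
    n = len(xs)
    rank = lambda vals: {v: i + 1 for i, v in enumerate(sorted(vals))}
    rx, ry = rank(xs), rank(ys)
    d2 = sum((rx[x] - ry[y]) ** 2 for x, y in zip(xs, ys))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Unlike Pearson’s coefficient, this measures monotonic rather than linear association, which is why it suits an ordinal variable like PR.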
MozTrust was also highly correlated with PageRank (r = .62), with Domain Authority somewhat less so (r = .55). The latter has since undergone some major changes, and this result may not reflect the metric as it exists today.
Search Engine Indexing
I performed [site:example.com] queries using Google, Yahoo, and Bing APIs to approximate the number of pages indexed by each search engine. Much to my surprise, PageRank shared the strongest correlation with the number of pages indexed by Bing (r = .52), instead of Google (r = .30), or Yahoo (r = .24). My first thought was that Google might not have reported accurate counts, a phenomenon often noted by SEO professionals. However, there is some evidence that may indicate otherwise.
If Google’s reported indexation numbers are inaccurate, we would expect the metric to have lower correlations with similar metrics. However, indexation numbers reported by Google and Yahoo share a fairly high Pearson’s correlation coefficient (r = 0.38). Both appear to share smaller correlations with Bing: 0.34 and 0.26, respectively. Even more interesting, SEOmoz metrics seem to have much stronger correlations with Bing’s indexed pages than the numbers reported by Google or Yahoo.
If Google is failing to accurately report the size of its index, we might expect that similar queries would also return inaccurate data. However, PageRank shares a high Spearman’s correlation coefficient with the number of results returned by a Google [link:example.com] query (r = 0.65). The strength of this relationship appears similar to those between SEOmoz metrics and PR mentioned earlier. PR’s correlation with the results of a Yahoo [linkdomain:example.com -site:example.com] query is somewhat smaller (r = 0.53).
If the number of pages Google reports having indexed is a relatively poor metric, we would also expect to find more variation between months than other search engines. However, I did not find this to be the case. In fact, Bing had by far the highest average percent change in the number of pages indexed, a whopping 355% increase per month. Google averaged an increase of 61%, and Yahoo an increase of only 2%.
While it is still possible that the number of pages on each domain that Google reports to have indexed is inaccurate, I see another potential explanation. More so than for Yahoo or Google, the number of pages that Bing will index on any given domain is related to the quantity and quality of links to that domain. Perhaps, at least when it comes to indexation, Bing follows more of a traditional PageRank-like algorithm. After all, Google claims that PR is only one of more than 200 signals used for ranking pages. This theory is supported by the results of SEOmoz’s comparison of Google’s and Bing’s ranking factors.
PageRank even shares fairly strong correlations with social media metrics such as how many of a domain’s pages are saved on Delicious (r = 0.49), how many stories it has on Digg (r = 0.38), and even the number of Tweets linking to one of its pages as measured by Topsy (r = .38).
Last, but certainly not least, PageRank predicts website traffic with somewhat surprising strength. As reported by Quantcast, monthly page views, visits, and unique visitors are all significantly correlated with PR. Google’s little green bar even correlates with visits per unique visitor (r = 0.18), but not page views per visit. However, putting this in context shows the value of a metric like Domain Authority.
So what exactly does all of this mean, and why is it important?
First, despite being a page-level metric, homepage PageRank is actually a fairly good predictor of many important domain-level variables relevant to SEO, social media, and website traffic.
For instance, on average, websites with a PR = 7 homepage had 2.6 times as many unique visitors as those with a PR = 6 homepage, which in turn had 1.5 times as many unique visitors as those with a PR = 5 homepage.
Second, homepage PageRank is sometimes used as a proxy for a hypothetical “domain PageRank.” While technically inaccurate, this study supports the idea that the PR of a website’s homepage provides information about the domain as a whole.
While it may be limited to just eleven possible values, PR is surprisingly good at predicting the relative number of inbound links to a domain reported by Google and Yahoo, as well as the relative number of pages indexed by Bing. The key word here is “relative.” As an ordinal variable, PR cannot be used to predict the actual values of continuous variables.
Finally, this study provides evidence that SEOmoz’s domain-level metrics may be good (and possibly better than PageRank) predictors of variables important to search, social media, and web analytics. This, as well as all of the results of this study should be interpreted within the context of the included domains (high-traffic, US-centric, and publicly Quantified).
8 Reasons Why Your Site Might Not Get Indexed
I’ve recently had to deal with several indexing problems that a few clients were experiencing. After digging deeper into the problems, I figured I’d write a post for SEOmoz to share my experience so others don’t have to spend as much time digging for answers to indexation problems. An indexation problem simply means that your site, or parts of it, is not getting added to the Google index (or one of the other guys’), which means that nobody will ever find your content in the search results.
Let’s move on to see how we keep Googlebot happy, shall we?
Identifying Crawling Problems
Start your investigation by simply typing site:yoursite.com into the Google search bar. Does the number of results returned correspond with the number of pages your site has, give or take? If there’s a large gap between the number of results and the actual number of pages, there might be trouble in paradise. (Note: the number given by Google is a ballpark figure, not an exact amount.) You can use the SEO Quake plugin to extract a list of URLs that Google has indexed. (Kieran Daly made a short how-to list in the Q&A section on this.)
The very first thing you should have a look at is your Google Webmaster Tools dashboard. Forget about all the other tools available for a second. If Google sees issues with your site, then those are the ones you’ll want to address first. If there are issues, the dashboard will show you the error messages. See below for an example. I don’t have any issues with my sites at the moment, so I had to find someone else’s example screenshot. Thanks in advance Neil 🙂
The 404 HTTP status code is most likely the one you’ll see the most. It means that whatever page the link is pointing to cannot be found. Anything other than a status code of 200 (and perhaps a 301) usually means there’s something wrong, and your site might not be working as intended for your visitors. A few great tools to check your server headers are URIvalet.com, the Screaming Frog SEO Spider, and of course the SEOmoz crawl-test tool, although that last one is for Pro members and limited to two crawls per day.
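A toy sketch of that triage logic (the function and messages are invented for illustration):

```python
# Classify a crawl result by HTTP status code, echoing the rule of thumb
# above: anything other than 200 (or an intentional 301) needs a look.
def triage_status(url, status):
    if status == 200:
        return "%s: OK" % url
    if status == 301:
        return "%s: permanent redirect - fine if intentional" % url
    if status == 404:
        return "%s: not found - fix or redirect this link" % url
    return "%s: status %d - investigate" % (url, status)
```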
Fixing Crawling Errors
Typically these kinds of issues are caused by one or more of the following reasons:
- Robots.txt – This text file, which sits in the root of your website’s folder, communicates a certain number of guidelines to search engine crawlers. For instance, if your robots.txt file contains the lines User-agent: * and Disallow: /, it’s basically telling every crawler on the web to take a hike and not index ANY of your site’s content.
- .htaccess – This is an invisible file which also resides in your WWW or public_html folder. You can toggle visibility in most modern text editors and FTP clients. A badly configured htaccess can do nasty stuff like infinite loops, which will never let your site load.
- Meta Tags – Make sure that the page(s) that’s not getting indexed doesn’t have these meta tags in the source code: <meta name="robots" content="noindex, nofollow">
- Sitemaps – Your sitemap isn’t updating for some reason, and you keep feeding the old/broken one to Webmaster Tools. After you have addressed the issues pointed out in the Webmaster Tools dashboard, always generate a fresh sitemap and re-submit it.
- URL Parameters – Within Webmaster Tools there’s a section where you can set URL parameters, which tells Google which dynamic links you do not want indexed. However, this comes with a warning from Google: “Incorrectly configuring parameters can result in pages from your site being dropped from our index, so we don’t recommend you use this tool unless necessary.”
- You don’t have enough PageRank – lolwut? Matt Cutts revealed in an interview with Eric Enge that the number of pages Google crawls is roughly proportional to your PageRank.
- Connectivity or DNS issues – It might happen that, for whatever reason, Google’s spiders cannot reach your server when they try to crawl. Perhaps your host is doing maintenance on their network, or you’ve just moved your site to a new home, in which case the DNS delegation can stuff up the crawlers’ access.
- Inherited issues – You might have registered a domain which had a life before you. I’ve had a client who got a new domain (or so they thought) and did everything by the book. Wrote good content, nailed the on-page stuff, had a few nice incoming links, but Google refused to index them, even though it accepted their sitemap. After some investigating, it turned out that the domain had been used several years before as part of a big linkspam farm. We had to file a reconsideration request with Google.
Some other obvious reasons that your site or pages might not get indexed are that they consist of scraped content, are involved with shady linkfarm tactics, or simply add zero value to the web in Google’s opinion (think thin affiliate landing pages, for example).
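You can sanity-check a robots.txt configuration offline with Python’s standard library; here the disallow-everything rules from the list above shut every crawler out:

```python
from urllib.robotparser import RobotFileParser

# Parse the disallow-everything example from the list above and confirm
# that a crawler would be blocked from every URL on the site.
rules = ["User-agent: *", "Disallow: /"]
parser = RobotFileParser()
parser.parse(rules)
allowed = parser.can_fetch("Googlebot", "http://example.com/any-page")
print(allowed)  # False
```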
XML Sitemaps: Guidelines on Their Use
Over the past couple of days I have been putting together some internal guidelines on various aspects of our jobs. This should ensure that we are giving consistent information to our various clients. Most of these guidelines have been fairly straightforward with nothing in them to write home about. However, one of the hardest guidelines to write has been the one talking about xml sitemaps. So, rather than hoard my thoughts, I’m going to open them up to all of you.
What are xml sitemaps?
Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL… http://www.sitemaps.org
On the surface this seems to be a great addition to any website’s armoury. However, before you rush away and create your sitemap, there are a number of pros and cons you should be aware of.
Benefits of using an xml sitemap
The first set of benefits revolve around being able to pass extra information to the search engines.
- Your sitemap can list all URLs from your site. This could include pages that aren’t otherwise discoverable by the search engines.
- Giving the search engines priority information. There is an optional tag in the sitemap for the priority of each page: an indication of how important a given page is relative to all the others on your site. This allows the search engines to order their crawling of your website based on priority information.
- Passing temporal information. Two other optional tags (lastmod and changefreq) pass more information to the search engines that should help them crawl your site in a more optimal way. “lastmod” tells them when a page last changed, and changefreq indicates how often the page is likely to change.
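The loc, lastmod, changefreq, and priority tags described above can be emitted programmatically; a minimal sketch using Python’s standard library (the URL and values are placeholders):

```python
import xml.etree.ElementTree as ET

# Build a tiny sitemap containing the optional tags discussed above.
def build_sitemap(entries):
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for loc, lastmod, changefreq, priority in entries:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
        ET.SubElement(url, "changefreq").text = changefreq
        ET.SubElement(url, "priority").text = priority
    return ET.tostring(urlset, encoding="unicode")
```

Generating the file from your database like this, rather than from a crawl, keeps the sitemap in step with the site itself.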
Being able to pass extra information to the search engines *should* result in them crawling your site in a more optimal way. Google itself points out that the information you pass is treated as hints, though it would appear to benefit both webmasters and the search engines if they were to use this data to crawl your site according to the pages you think have a high priority. There is a further benefit, which is that you get information back.
- Google Webmaster Central gives some useful information when you have a sitemap. For example, the following graph shows googlebot activity over the last 90 days. This is actually taken from a friend of ours in our building who offers market research reports.
Negative aspects of xml sitemaps
- Rand has already covered one of the major issues with sitemaps, which is that they can hide site architecture issues by getting pages indexed that a normal web crawl can’t find.
- Competitive intelligence. If you are telling the search engines the relative priority of all of your pages, you can bet this information will be of interest to your competitors. I know of no way of protecting your sitemap so only the search engines can access it.
- Generation. This is not actually a problem with sitemaps, but rather a problem with the way a lot of site maps are generated. Any time you generate a sitemap by sending a program to crawl your site, you are asking for trouble. I’d put money on the search engines having a better crawling algorithm than any of the tools out there to generate the sitemaps. The other issue with sitemaps that aren’t dynamically generated from a database is that they will become out of date almost immediately.
XML sitemap guidelines
With all of the above in mind, I would avoid putting a sitemap on a site, especially a new site, or one that has recently changed structure. By not submitting a sitemap, you can use the information gathered from seeing which pages Google indexes, and how quickly they are indexed to validate that your site architecture is correct.
There is a set of circumstances that would lead to me recommending that you use a sitemap. If you have a very large site and have spent the time looking at the crawl stats, and are completely happy with why pages are in and out of the index, then adding a sitemap can lead to an increase in the number of pages in the index. It’s worth saying that these pages are going to be the poorest of the poor in terms of link juice. These pages are the fleas on the runt of a litter. They aren’t going to rank for anything other than the long tail. However, I’m sure you don’t need me to tell you that even the longest of the long tail can drive significant traffic when thousands of extra pages are suddenly added to the index.
One question still in my mind is the impact of removing an xml sitemap from a site that previously had one. Should we recommend all new clients remove their sitemap in order to see issues in the site architecture? I’m a big fan of using the search engines to diagnose site architecture issues. I’m not convinced that removing a sitemap would remove pages that are only indexed due to the xml sitemap. If that is the case, that’s a very nice bit of information. *Wishes he’d kept that tidbit under his oh so very white hat*
So I guess let the discussions start: do you follow amazon.co.uk (which does have a sitemap), or are you more of an ebay.co.uk (which doesn’t)?
Site Speed – Are You Fast? Does it Matter for SEO?
When Google made their “page speed is now a ranking factor” announcement, speed wasn’t a significant new ranking factor in itself, but the announcement is significant because it means Google wants to use usability metrics to help rank pages. Your site speed should be a priority: slow sites decrease customer satisfaction, and research has shown that an improvement in site speed can increase conversions.
To better understand how fast the web is (as of February 2011), I collected site speed data from approximately 100 different sites. This data allowed me to create a very close approximation of the equation that Google currently uses to report (in Webmaster Tools) how fast sites are relative to each other:
The x-axis in this graph shows the page load time (in seconds) and the y-axis represents the percentage of sites that the corresponding time is faster than. So if a page loads in 4.3 seconds, it is faster than 31% of other pages on the web.
- If your site loads in 5 seconds it is faster than approximately 25% of the web
- If your site loads in 2.9 seconds it is faster than approximately 50% of the web
- If your site loads in 1.7 seconds it is faster than approximately 75% of the web
- If your site loads in 0.8 seconds it is faster than approximately 94% of the web
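Since the underlying equation isn’t published, one rough way to approximate the curve is to interpolate linearly between the data points quoted above:

```python
# Piecewise-linear approximation of the "faster than X% of the web" curve,
# built only from the observed points listed above. A rough sketch, not
# Google's actual formula.
POINTS = [(0.8, 94.0), (1.7, 75.0), (2.9, 50.0), (4.3, 31.0), (5.0, 25.0)]

def percent_faster_than(load_time):
    if load_time <= POINTS[0][0]:
        return POINTS[0][1]
    if load_time >= POINTS[-1][0]:
        return POINTS[-1][1]
    for (x0, y0), (x1, y1) in zip(POINTS, POINTS[1:]):
        if x0 <= load_time <= x1:
            return y0 + (y1 - y0) * (load_time - x0) / (x1 - x0)
```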
How Important is Site Speed?
My Unscientific Experiment
How to Improve Your Site Speed
- Minimize HTTP Requests – Your pages will load faster if they have to wait for fewer HTTP requests. This means reducing the number of items that need to be loaded, such as scripts, style sheets, and images.
- Use CSS sprites whenever possible – This combines images used in the background into one image and reduces the number of HTTP requests made.
- Make sure your images are optimized for the web – If you have Photoshop, this can be done by simply clicking “Save for Web” instead of “Save”. By optimizing the formats of the images you are essentially formatting the images in a smarter way so that you end up with a smaller file size. Smashing Magazine has a nice article on optimizing PNG images.
- Use server side caching – This creates an HTML page for a URL so that dynamic sites don’t have to build a page each time that URL is requested.
- Use Gzip – Gzip will significantly compress the size of the page sent to the browser, which then uncompresses the information and displays it for the user. Many sites that use Gzip are able to reduce file size by upwards of 70%. You can see if sites are using Gzip and how much the page has been compressed by using GID Zip Test.
- Use a Content Delivery Network – Using a CDN allows your users to download information in parallel, helping your site load faster. CDNs are becoming increasingly affordable with services like Amazon CloudFront.
- Reduce 301 Redirects – Don’t use 301 redirects if possible, and definitely don’t stack 301s on top of each other. A 301 redirect forces the browser to a new URL and requires the browser to wait for the HTTP request to come back.
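The kind of savings Gzip gives on text is easy to demonstrate. The snippet below compresses deliberately repetitive markup, so it saves more than a typical page would, but large reductions on real HTML are common:

```python
import gzip

# Compress a blob of repetitive markup and report the size reduction.
html = b"<div class='item'>product listing row</div>\n" * 200
compressed = gzip.compress(html)
saving = 1 - len(compressed) / len(html)
print("%.0f%% smaller" % (saving * 100))
```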
If you want to do further research on improving your site speed, Google has a good list of helpful articles on optimizing page speed that are much more in-depth than the above suggestions. To get suggestions specific to your website, tools like YSlow and the HTML suggestions in Google Webmaster Tools are great resources.
Are H1 tags important or influential?
We are in the process of correcting our site in hopes that Google will rank us higher in the SERP. We have many pages that have multiple H1 tags or no H1 tag at all.
How important is the H1 tag?
Will it help us increase our ranking on Google?
Yes, it will help improve rankings if your content is lacking relevance. H1 tags are nowhere near as important an influence on SEO as they used to be, but they are still influential. Think of them as in-content title tags.
Each page should have a unique H1 tag that describes the page’s content and contains the target keyword.
H1 tags are pretty important, as they are meant to be the header for what the entire page is about. Every page should have one and only one H1 tag, and it should definitely contain your most targeted keyword.
It’s OK to have multiple H2, H3, and H4 tags, though. Although they don’t count as much as an H1 does, it’s still important to include your keywords in these tags when applicable. Just make sure that you’re not “keyword stuffing”.
So to answer your question: yes, if you don’t have H1 tags on a page and then add them with the appropriate keywords, they will definitely help your on-page optimization!
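As a quick way to audit pages for the zero-H1 and multiple-H1 cases described in the question, a small parser sketch using Python’s standard library:

```python
from html.parser import HTMLParser

# Count <h1> tags so pages with zero or more than one can be flagged.
class H1Counter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.count += 1

def count_h1(html):
    counter = H1Counter()
    counter.feed(html)
    return counter.count
```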
On-Page factors are the aspects of a given web page that influence search engine ranking.
<body>, <div>, <p>, <span>, no tag
<img src="http://www.example.com/example.png" alt="Keyword">
What are On-Page Factors?
There are several on-page factors that affect search engine rankings. These include:
Content of Page
The content of a page is what makes it worthy of a search result position. It is what the user came to see and is thus extremely important to the search engines. As such, it is important to create good content. So what is good content? From an SEO perspective, all good content has two attributes. Good content must supply a demand and must be linkable.
Good content supplies a demand:
Just like the world’s markets, information is affected by supply and demand. The best content is that which does the best job of supplying the largest demand. It might take the form of an XKCD comic that is supplying nerd jokes to a large group of technologists or it might be a Wikipedia article that explains to the world the definition of Web 2.0. It can be a video, an image, a sound, or text, but it must supply a demand in order to be considered good content.
Good content is linkable:
From an SEO perspective, there is no difference between the best and worst content on the Internet if it is not linkable. If people can’t link to it, search engines will be very unlikely to rank it, and as a result the content won’t drive traffic to the given website. Unfortunately, this happens a lot more often than one might think. A few examples of this include: AJAX-powered image slide shows, content only accessible after logging in, and content that can’t be reproduced or shared. Content that doesn’t supply a demand or is not linkable is bad in the eyes of the search engines—and most likely some people, too.
Title tags are the second most important on-page factor for SEO, after content; see the discussion of title tags above.
Along with smart internal linking, SEOs should make sure that the category hierarchy of the given website is reflected in URLs.
The following is a good example of URL structure:
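A hypothetical URL with this kind of hierarchy (domain and paths are illustrative, not a real site) might look like:

```
http://www.example.com/games/video-games/history
```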
This URL clearly shows the hierarchy of the information on the page (history as it pertains to video games in the context of games in general). This information is used to determine the relevancy of a given web page by the search engines. Due to the hierarchy, the engines can deduce that the page likely doesn’t pertain to history in general but rather to the history of video games. This makes it an ideal candidate for search results related to video game history. All of this information can be inferred without even needing to process the content on the page.
The following is a bad example of URL structure:
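Based on the description that follows, the URL in question has this form:

```
http://www.imdb.com/title/tt0468569/
```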
Unlike the first example, this URL does not reflect the information hierarchy of the website. Search engines can see that the given page relates to titles (/title/) and is on the IMDB domain but cannot determine what the page is about. The reference to “tt0468569” does not directly imply anything that a web surfer is likely to search for. This means that the information provided by the URL is of very little value to search engines.
URL structure is important because it helps the search engines to understand relative importance and adds a helpful relevancy metric to the given page. It is also helpful from an anchor text perspective because people are more likely to link with the relevant word or phrase if the keywords are included in the URL.
SEO Best Practice
Content pages are the meat of websites and are almost always the reason visitors come to a site. Ideal content pages should be very specific to a given topic—usually a product or an object—and be hyper-relevant.
The purpose of the given web page should be directly stated in all of the following areas:
- Title tag
- URL
- Content of page
- Image alt text
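As a sketch, a page targeting the topic “Super Mario World” (used as an example below) would state its subject in each of those areas (all paths and filenames are hypothetical):

```html
<!-- Served at a descriptive URL, e.g. /games/super-mario-world -->
<head>
  <title>Super Mario World – Example Site</title>
</head>
<body>
  <h1>Super Mario World</h1>
  <p>Super Mario World is a platform game…</p>
  <img src="super-mario-world-box-art.png" alt="Super Mario World box art">
</body>
```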
Here is an example of a well-laid-out and search engine–friendly web page. All of its on-page factors are optimized.
The content page in this figure is considered good for several reasons. First, the content itself is unique on the Internet (which makes it worthwhile for search engines to rank well) and covers a specific bit of information in a lot of depth. If a searcher had a question about Super Mario World, there is a good chance that this page would answer their query.
Aside from content, this page is laid out well. The topic of the page is stated in the title tag (Super Mario World – Wikipedia, the free encyclopedia), URL (http://en.wikipedia.org/wiki/Super_Mario_World), the page’s content (the page heading, “Super Mario World”), and within the alt text of every image on the page.
The following example is of a poorly optimized web page. Notice how it differs from the first example.
This figure shows a less search engine–friendly example of a content page targeting the term “Super Mario World.” While the subject of the page is present in some of the important elements of the web page (title tag and images), the content is less robust than the Wikipedia example, and the relevant copy on the page is less helpful to a reader.
Notice that the description of the game is suspiciously similar to copy written by a marketing department. “Mario’s off on his biggest adventure ever, and this time he has brought a friend.” That is not the language that searchers write queries in, and it is not the type of message that is likely to answer a searcher’s query. Compare this to the first sentence of the Wikipedia example: “Super Mario World is a platform game developed and published by Nintendo as a pack-in launch title for the Super Nintendo Entertainment System.” In the poorly optimized example, all that is established by the first sentence is that someone or something called Mario is on an adventure that is bigger than his or her previous adventure (how do you quantify that?) and he or she is accompanied by an unnamed friend.
The Wikipedia example tells the reader that Super Mario World is a game developed and published by Nintendo for the Super Nintendo Entertainment System; the other example does not. Search results in both Bing and Google show the better optimized page ranking higher.
An Ideally Optimized Web Page
An ideal web page should do all of the following:
- Be hyper-relevant to a specific topic (usually a product or single object)
- Include subject in title tag
- Include subject in URL
- Include subject in image alt text
- Specify subject several times throughout text content
- Provide unique content about a given subject
- Link back to its category page
- Link back to its subcategory page (If applicable)
- Link back to its homepage (normally accomplished with an image link showing the website logo on the top left of a page)
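The last three points can be sketched as a minimal linking pattern (all paths and names are hypothetical):

```html
<body>
  <!-- Homepage link via the site logo, top left of the page -->
  <a href="/"><img src="/logo.png" alt="Example Site logo"></a>

  <h1>Super Mario World</h1>
  <p>…unique, topic-specific content…</p>

  <!-- Links back up the category hierarchy -->
  <a href="/games/">Games</a>
  <a href="/games/video-games/">Video Games</a>
</body>
```

This keeps every content page one click from its category, subcategory, and homepage, reinforcing the site hierarchy for both users and crawlers.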