Can we find any definitive proof online that you need to use heading tags (H1, H2, H3, H4, H5, H6), or that they improve rankings in Google? I have seen pages do well in Google without them – but I do use them, especially the H1 tag on the page. For me, it’s another piece of a ‘perfect’ page, in the traditional sense, and I try to build a site for both Google and humans.
I still generally only use one H1 heading tag on my keyword-targeted pages – I believe this is the way the W3C intended it to be used in HTML4 – and I ensure it sits at the top of the page, above the relevant page text, written with my main keyword or related keyword phrases incorporated. I have never experienced any problems using CSS to control the appearance of heading tags, making them larger or smaller.
You can use multiple H1s in HTML5, but most sites I work on still use HTML4. I use as many H2 – H6 tags as necessary depending on the size of the page, though in practice I rarely need more than H1, H2 & H3. You can see here how to use header tags properly (basically, just be consistent, whatever you do, to give your users the best user experience).
How many words in the H1 Tag? As many as I think is sensible – as short and snappy as possible, usually. I also discovered Google will use your header tags as page titles at some level if your title element is malformed. As always, be sure to make your heading tags highly relevant to the content on that page and not too spammy, either.
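A minimal sketch of the kind of single-H1 structure described above (the page topic and headings are made up for illustration):

```html
<!-- One H1 at the top of the page, above the relevant body text -->
<h1>Blue Widget Repair Guide</h1>
<p>Introductory paragraph incorporating the main keyword phrase...</p>

<!-- H2s break the page into sections; H3s subdivide where needed -->
<h2>Common Blue Widget Faults</h2>
<h3>Cracked Casings</h3>
<p>...</p>

<h2>Tools You Will Need</h2>
<p>...</p>
```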
Alt Tags are counted by Google (and Bing), but I would be careful not to over-optimise them. I’ve seen a lot of websites penalised for over-optimising invisible elements on a page. Don’t do it. ALT tags are very important and, I think, a very rewarding area to get right. I always put the main keyword in an ALT once when addressing a page. Don’t optimise your ALT tags (or rather, attributes) JUST for Google! Use ALT tags (or rather, ALT attributes) for descriptive text that helps visitors – and keep them unique where possible, as you do with your titles and meta descriptions.
Don’t obsess. Don’t optimise your ALT tags just for Google – do it for humans, accessibility and usability. If you are interested, I conducted a simple test using ALT attributes to determine how many words I could use in IMAGE ALT text that Google would pick up. And remember – even if, like me most days, you can’t be bothered to write all the image ALT tags on your page, at least use a blank ALT (or NULL value) so people with screen readers can enjoy your page.
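For example (image file names invented for illustration), a meaningful image gets descriptive ALT text, while a purely decorative one gets a blank (NULL) ALT so screen readers skip over it:

```html
<!-- Meaningful image: describe it for visitors and screen readers -->
<img src="blue-pineapple-chair.jpg" alt="Big blue pineapple chair">

<!-- Purely decorative image: blank ALT so screen readers ignore it -->
<img src="divider-flourish.png" alt="">
```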
About Alt Tags:
alt attribute should be used to describe the image. So if you have an image of a big blue pineapple chair you should use the alt tag that best describes it, which is alt=”big blue pineapple chair.” title attribute should be used when the image is a hyperlink to a specific page. The title attribute should contain information about what will happen when you click on the image. For example, if the image will get larger, it should read something like, title=”View a larger version of the big blue pineapple chair image.”
As the Googlebot does not see the images directly, we generally concentrate on the information provided in the “alt” attribute. Feel free to supplement the “alt” attribute with “title” and other attributes if they provide value to your users! So for example, if you have an image of a puppy (these seem popular at the moment) playing with a ball, you could use something like “My puppy Betsy playing with a bowling ball” as the alt attribute for the image. If you also have a link around the image, pointing to a larger version of the same photo, you could use “View this image in high-resolution” as the title attribute for the link.
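Putting the quoted advice together (file names invented for illustration): the ALT text describes the image itself, while the TITLE on the surrounding link describes what clicking will do:

```html
<!-- ALT describes the image; TITLE describes the link's action -->
<a href="puppy-large.jpg"
   title="View a larger version of this photo">
  <img src="puppy-small.jpg"
       alt="My puppy Betsy playing with a bowling ball">
</a>
```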
Link Title Attributes, Acronym & ABBR Tags
Does Google Count Text in The Acronym Tag?
From my tests, no. Observing how my test page ranks, Google appears to ignore keywords in the acronym tag. My observations from that test page include:
- Link Title Attribute – No; no benefit seems to be passed via the link to another page, either
- ABBR (Abbreviation Tags) – No
- Image File Name – No
- Wrapping words (or at least numbers) in SCRIPT – Sometimes. Google is better at understanding what it can render in 2016.
It’s clear many invisible elements of a page are completely ignored by Google (elements that would otherwise interest us as SEOs).
Some invisible items are (still) apparently supported:
- NOFRAMES – Yes
- NOSCRIPT – Yes
- ALT Attribute – Yes
Unless you really have cause to focus on any particular invisible element, I think the P tag is the most important tag to optimise in 2016.
Search Engine Friendly URLs (SEF)
Clean URLs (or search engine friendly URLs) are just that – clean, easy to read, simple. You do not need clean URLs in site architecture for Google to spider a site successfully (confirmed by Google in 2008), although I do use clean URLs as a default these days, and have done so for years.
It’s often more usable. Is there a massive difference in Google when you use clean URLs?
No, in my experience it’s very much a second or third order effect, perhaps even less, if used on its own. However, there is a demonstrable benefit to having keywords in URLs. The thinking is that you might get a boost in Google SERPs if your URLs are clean – because you are using keywords in the actual page name instead of a parameter or session ID number (which Google often struggles with).
I think Google might reward the page some sort of relevance because of the actual file / page name. I optimise as if they do. It is virtually impossible to isolate any ranking factor with a degree of certainty. Where any benefit is slightly detectable is when people (say in forums) link to your site with the URL as the link.
Then it is fair to say you do get a boost because keywords are in the actual anchor text link to your site, and I believe this is the case, but again, that depends on the quality of the page linking to your site. That is, if Google trusts it and it passes Pagerank (!) and anchor text benefit. And of course, you’ll need citable content on that site of yours.
Sometimes I will remove the stop words from a URL and leave the important keywords as the page name, because a lot of forums garble a URL to shorten it. Most forum links will be nofollowed in 2016, to be fair, but some old habits die hard. Sometimes I prefer to see the exact phrase I am targeting as the name of the URL I am asking Google to rank.
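As an illustrative sketch only (the stop-word list and function name below are my own inventions, not anything prescribed by Google), stripping stop words from a page title to make a slug might look like this:

```python
import re

# A small illustrative stop-word list - tune it to taste.
STOP_WORDS = {"a", "an", "and", "the", "of", "in", "for", "to", "on"}

def clean_slug(title: str) -> str:
    """Lowercase a page title, drop stop words, and join the rest with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    keywords = [w for w in words if w not in STOP_WORDS]
    return "-".join(keywords)

print(clean_slug("A Guide to Search Engine Optimisation for Beginners"))
# guide-search-engine-optimisation-beginners
```

The result keeps the exact keyword phrase front and centre in the URL, which is the point of the habit described above.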
It should be remembered that although Googlebot can crawl sites with dynamic URLs, many webmasters assume there is a greater risk it will give up if the URLs are deemed unimportant and contain multiple variables and session IDs (this is theory, not confirmed).
As standard, I use clean URLs where possible on new sites these days, and try to keep the URLs as simple as possible and not obsess about it. That’s my aim at all times when I optimise a website to work better in Google – simplicity. Google does look at keywords in the URL, even at a granular level.
Having a keyword in your URL might be the difference between your site ranking and not – potentially useful to take advantage of long tail search queries – for more see Does Google Count A Keyword In The URI (Filename) When Ranking A Page?
Absolute Or Relative URLs
My advice would be to keep it consistent whatever you decide to use. I prefer absolute URLs. That’s just a preference. Google will crawl either if the local setup is correctly developed.
- What is an absolute URL? Example – http://www.hobo-web.co.uk/search-engine-optimisation/
- What is a relative URL? Example – /search-engine-optimisation.htm
Relative just means relative to the document the link is on. Move that page to another site and it won’t work. With an absolute URL, it would work.
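The difference is easy to see with Python’s standard-library `urljoin`, which resolves links the same way a browser or crawler does (the page and link URLs below are just examples):

```python
from urllib.parse import urljoin

# The page the link happens to sit on.
page = "http://www.hobo-web.co.uk/search-engine-optimisation/"

# Relative link: the result depends entirely on the hosting page's URL.
print(urljoin(page, "/contact.htm"))
# http://www.hobo-web.co.uk/contact.htm

# Absolute link: the hosting page is irrelevant - it resolves to itself.
print(urljoin(page, "http://www.hobo-web.co.uk/contact.htm"))
# http://www.hobo-web.co.uk/contact.htm
```

Move the page to a different domain and the relative link silently points somewhere new, while the absolute one keeps working – which is why consistency matters.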
Subdirectories or Files For URL Structure
Sometimes I use subfolders and sometimes I use files. I have not been able to decide if there is any real benefit (in terms of ranking boost) to using either. A lot of CMS these days use subfolders in their file path, so I am pretty confident Google can deal with either. I used to prefer files like .html when I was building a new site from scratch, as they were the ’end of the line’ for search engines, as I imagined it, and a subfolder (or directory) was a collection of pages.
I used to think it could take more to get a subfolder trusted than, say, an individual file, and I guess this swayed me towards files on most websites I created (back in the day). Once subfolders are trusted, it’s six of one, half a dozen of the other as to the actual difference in terms of ranking in Google – usually, rankings in Google are more determined by how RELEVANT or REPUTABLE a page is to a query.
In the past, subfolders could be treated differently than files.
Subfolders can be trusted less than other subfolders or pages in your site, or ignored entirely. Subfolders *used to seem to me* to take a little longer to get indexed by Google than, for instance, .html pages. People talk about trusted domains, but they don’t mention (or don’t realise) that some parts of a domain can be trusted less. Google used to treat some subfolders differently – and remembering how Google used to handle things has some benefits, even in 2016.
Some say don’t go beyond four levels of folders in your file path. I haven’t experienced too many issues, but you never know.
UPDATED – I think in 2016 it’s even less of something to worry about. There are far more important elements to check.