
WebMaster Hangout – Live from September 10, 2021

Quote Websites

Q. Websites with the same quotes rank separately and don’t get penalised for duplicate quotes

  • (00:40) Pages from different websites that carry the same quotes rank separately, because quotes are usually only one or two lines of text, and there is so much more information on a web page that the quotes alone don’t make those pages duplicates of each other. John also points out that quotes are usually from authors who wrote them long ago and are public information – it doesn’t matter who said it first or which website posted it first. It’s not as if whoever published a quote first will appear first in the search results, and even if that were the case, it probably wouldn’t be an average quote-based website. And since quotes are public information, websites don’t get penalised for sharing them either.

Geotargeting

Q. To target different countries, create subdomains or subdirectories and set geotargeting in Search Console individually

  • (05:08) Search Console supports only one country per property – it’s not possible to make a single property target several countries at the same time. John suggests that when you want to target several countries, you should create subdomains or subdirectories of your website for those countries and add them to Search Console separately. Once they’re added, and provided you’re on a generic top-level domain, you can set the geotargeting for each one individually. For example, yourwebsite.com/fr/ for France can be added to Search Console as its own property and marked as targeting France in the geotargeting section (a sketch follows below). If you have a country code top-level domain like .in or .uk, you can’t set geotargeting – the country is already implied by the domain.
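As a rough illustration – the domain is hypothetical, and the hreflang annotations aren’t discussed in this answer but are the usual markup companion to Search Console geotargeting – a subdirectory set-up might look like this:

```html
<!-- Hypothetical country subdirectories on a generic TLD; each one can be
     added to Search Console as its own property and geotargeted separately:
     https://yourwebsite.com/fr/  -> targeted to France
     https://yourwebsite.com/uk/  -> targeted to the United Kingdom -->

<!-- hreflang annotations placed on every version of a page: -->
<link rel="alternate" hreflang="fr-fr" href="https://yourwebsite.com/fr/" />
<link rel="alternate" hreflang="en-gb" href="https://yourwebsite.com/uk/" />
<link rel="alternate" hreflang="x-default" href="https://yourwebsite.com/" />
```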

New CSS Property

Q. New CSS property might have only an indirect effect on Core Web Vitals and subsequent ranking

  • (06:40) John doubts that the new content-visibility CSS property will play a big role in how Google assesses websites. He sees two points where it might come into play. The first is rendering: Google uses a modern Chrome browser when rendering pages, but since the HTML is what’s taken into account for indexing, and it is already loaded by that point, the new CSS property would mostly shift things around in terms of layout – Google would still index the content normally. The second is the speed at which users see the content, which could matter for Core Web Vitals, because Core Web Vitals is based on field data – the speed real users experience. If users on a modern version of Chrome see pages load faster thanks to the new property, that will be reflected in the field data, and over time that might be taken into account. To find out whether implementing it would actually change anything, John suggests trying the new CSS on a test page and running the lab testing tools before and after to see if it makes a difference (a minimal sketch follows). If the difference is big, it’s probably worth implementing; otherwise, not really.
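For context, a minimal sketch of the property John is describing – the class name and placeholder height are made up for illustration:

```html
<style>
  /* Defer rendering work for off-screen sections; contain-intrinsic-size
     reserves an estimated height so the layout doesn't jump when the
     section is finally rendered. */
  .below-fold {
    content-visibility: auto;
    contain-intrinsic-size: auto 800px; /* rough placeholder height */
  }
</style>

<section class="below-fold">
  <!-- long content the browser can skip rendering until it's near view -->
</section>
```

Testing a page like this with the lab tools (for example Lighthouse) before and after, as John suggests, shows whether the change is worth rolling out.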

Lead Generation Form

Q. Lead Generation forms affect SEO when they’re located above the fold and treated as ads

  • (10:49) John says that, in general, lead generation forms don’t affect the SEO side of a page that much. However, he points out that the algorithms look for ads that appear above the fold and push the main content down, and a lead generation form might be treated as an ad. That’s not always the case: what the lead gen form is for and what the page is trying to rank for also matter. For example, if the page is about car insurance and the form signs people up for car insurance, it probably wouldn’t be treated as an ad; but if the page is about something completely different, like oranges, then a car insurance lead gen form at the top of the page would be seen as an ad.

Images on the Page

Q. Image placement on the page doesn’t really affect anything

  • (12:53) Whether an image sits at the top of the article or somewhere in the middle doesn’t matter that much for the SEO side of the website. John points out that even for Image Search, the placement is not really important.

Q. Google may still discover and index pages that nofollow links point to

  • (13:34) With a nofollow link, Google essentially tries to stop the passing of signals, but it can still discover the page the link points to and crawl and index it separately. Sometimes people use internal nofollow links to point at a specific page within their website and expect that page not to show up in search because it’s only linked with nofollow. However, from Google’s point of view, if it can discover the page, it might still crawl and index it independently (see the sketch below).
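A short sketch of the distinction – the noindex directive isn’t mentioned in this answer, but it is Google’s documented way to actually keep a page out of search:

```html
<!-- A nofollow link: asks Google not to pass signals, but the target can
     still be discovered and indexed through other paths. -->
<a href="/internal-page" rel="nofollow">Internal page</a>

<!-- To reliably keep the target out of search results, the target page
     itself should carry a robots noindex directive: -->
<meta name="robots" content="noindex" />
```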

Q. In addition to creating good content, it’s essential to spread the word about it to show up in search

  • (14:41) Creating good quality content might not always be enough. To appear in search results, it’s important to spread the word about your content: try to find people interested in that type of content and persuade them to write about it. Buying guest posts is common, but John argues it’s not the best strategy and is potentially damaging to the website. If someone from the webspam team looks at a website and sees a wide variety of different kinds of links – even if some of them might have a weird story behind them – but overall lots of reasonable links, they will let the algorithms handle it. But if they see that all the links look a lot like guest posts, and they’re not labelled as such (see the sketch below), that might be something they would take action on. So it’s important to create good content, find people who might be interested in it, and stay away from problematic strategies.
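For reference, Google’s guidance is that links placed through guest posts should be labelled so they don’t pass ranking signals; a minimal sketch, with a placeholder URL and anchor text:

```html
<!-- Labelling a guest-post link; rel="sponsored" is the variant for
     paid placements, and the two values can be combined. -->
<a href="https://example.com/" rel="nofollow sponsored">example anchor text</a>
```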

Promoting a Unique Website

Q. There are several things to remember when trying to promote a website with a new type of analytical service

  • (18:23) The person asking the question described his situation: his website analyses a number of real estate agents, and whenever someone searches for the best realtor in their area, the site’s list of top realtors pops up. He says that 90 percent of their pages are not indexed, and he is not sure what exactly to do to rank in SERPs. John points out that there are several things to consider to make the startup rank successfully.
    First, before things go too far, John suggests checking whether the website’s pages are actually useful for people, and are not just a recompilation produced by back-end data analysis that spits out some metrics for individual locations. For example, if a city has 10 realtors, make sure that a search for its top realtors doesn’t simply return the same 10 realtors that are in the phone book. Basically, it’s essential to make sure the website provides some unique value for users. John advises doing user studies to figure out what the best UX for these kinds of pages is, what kind of information people need, and whether the content comes across as trustworthy. That’s the first thing to do, because low-quality content is a bigger problem than having a lot of good content and trying to get it indexed.
    As for getting pages indexed, that’s something that happens naturally over time.
    Secondly, John says it’s useful to decide which pages are currently the most important ones on the website and which ones you want Google to focus on – through internal linking and by making sure they’re high-quality pages. So even if 90 percent of the pages on the website are not indexed and only 10 percent are, it’s reasonable to make sure those indexed pages are fantastic pages that a lot of people like and recommend. As a result, over time Google will crawl more pages, crawl more frequently, and crawl a bit deeper.
    John points out that it’s always tricky, especially with a website like this, to create an internal linking structure from the beginning that focuses on what’s important, because it’s very easy to just list all the postal codes in numerical order, in which case Google might end up crawling pages that have low value for the website overall. So from the beginning it’s good to create a more funnelled site structure (sketched below) and then expand step by step until Google ends up actively indexing all the content on the website.
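As a purely hypothetical sketch of what such a funnelled structure could look like for this realtor directory (all URLs invented):

```html
<!-- Homepage links only to a handful of strong hub pages... -->
<nav>
  <a href="/realtors/california/">Top realtors in California</a>
  <a href="/realtors/texas/">Top realtors in Texas</a>
</nav>

<!-- ...and each hub links onward to its local pages, instead of the
     homepage listing every postal code flat in numerical order. -->
<nav>
  <a href="/realtors/california/los-angeles/">Los Angeles</a>
  <a href="/realtors/california/san-francisco/">San Francisco</a>
</nav>
```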

Q. It’s enough for Google that most, even if not all, sources that link to the website as affiliates follow the guidelines

  • (26:34) It’s hard to make everyone who links to a website as an affiliate follow the guidelines, and to some website owners that might seem problematic. But John points out that Google understands the situation, so they mainly want to see that the things published or recommended on the website itself match the guidelines. It’s also okay if only a part of those who link to the website follow them.
    Some might suggest that disallowing crawling of the affiliate parameters could be a solution, but John argues that this would result in those pages being indexed without any content. He advises focusing on normal canonicalisation instead (see the sketch below), even though that wouldn’t affect how the value of those links is passed on.
    He also shares that some websites set up a separate affiliate domain – blocked from crawling and indexing, or just from crawling – that redirects to the actual website. That works well for large-scale affiliate sites, but for an average website it might be overkill.
    In short, he says that as long as the website owner is giving the right recommendations and a significant part of the users are following them, there shouldn’t be any need to do anything beyond that.
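A minimal sketch of the canonicalisation John recommends – the parameter name is hypothetical:

```html
<!-- On a page reached as https://example.com/product?aff=1234, the
     canonical points at the clean URL so the parameterised variants
     consolidate onto it: -->
<link rel="canonical" href="https://example.com/product" />
```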

Embedding Videos on Website

Q. It’s not necessary to switch to a different video format on a website just for SEO purposes – there are ways to make everything neat and efficient

  • (29:10) The person asking the question is concerned about embedding YouTube videos on his website, as they slow the page loading speed down – he is thinking about switching to the mp4 format because it doesn’t create such problems. John argues that it might be unreasonable to do that just for SEO purposes – there are different ways of embedding videos and different video formats. In particular, with YouTube there are ways to embed videos using a kind of lazy loading, where a placeholder image is shown and the video is only activated by clicking on the placeholder. This way, the page loads pretty quickly. Also, the YouTube team is working on improving the default embed, so that might get better over time.
    Hosting the videos on the website itself can also be reasonable. The thing to watch out for is that Google needs to be able to recognise the metadata for these videos: with a normal YouTube embed it can pull that out fairly well, but when embedding videos manually or with custom players, it’s important to check that the structured data on the page is appropriate as well (see the sketches below).
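Two sketches of what this can look like in practice. The first is one common “facade” lazy-loading pattern for YouTube embeds – the video ID and markup are placeholders, and this is just one way to do it. The second is VideoObject structured data for a self-hosted video, with invented URLs:

```html
<!-- Facade pattern: only the thumbnail loads with the page; the YouTube
     iframe is injected when the user clicks play. -->
<div class="yt-facade" data-video-id="VIDEO_ID">
  <img src="https://i.ytimg.com/vi/VIDEO_ID/hqdefault.jpg"
       alt="Video thumbnail" loading="lazy" />
  <button type="button" aria-label="Play video">Play</button>
</div>

<script>
  document.querySelectorAll('.yt-facade').forEach(function (facade) {
    facade.addEventListener('click', function () {
      var iframe = document.createElement('iframe');
      iframe.src = 'https://www.youtube.com/embed/' +
        facade.dataset.videoId + '?autoplay=1';
      iframe.allow = 'autoplay; encrypted-media';
      iframe.allowFullscreen = true;
      facade.replaceWith(iframe); // swap the placeholder for the player
    });
  });
</script>

<!-- For a self-hosted video, VideoObject markup supplies the metadata a
     normal YouTube embed would otherwise provide: -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Example video title",
  "description": "A short description of the video.",
  "thumbnailUrl": "https://example.com/thumbnail.jpg",
  "uploadDate": "2021-09-10",
  "contentUrl": "https://example.com/video.mp4"
}
</script>
```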

Hosting and Crawling

Q. By default, the type of hosting one has doesn’t affect the efficiency or amount of crawling that Google can do

  • (31:08) The type of hosting doesn’t affect crawling by itself – hosting can be bad or slow, but that’s not an attribute of a specific kind of hosting. Shared hosting, a VPS or any other kind of hosting can be slow or fast regardless of the type, and the speed is what actually matters.

Crawling

Q. If Google seems to be crawling your website slowly and rarely, make sure the quality of the few important pages is good first, and then grow your website in terms of page quantity

  • (35:31) When lots of pages on a website don’t get crawled, that falls into the category of crawl budget. It’s not necessarily a problem for the website, but there are always two sides to crawling: capacity and demand. Crawl capacity is about how much Google can crawl – for a small website, Google can probably crawl everything. Crawl demand is something a website owner can help Google with: things like internal linking let Google know about the relative importance of pages, and demand also builds up over time as Google recognises that there is a lot of good, important content that warrants more frequent crawling.
    If a lot of pages haven’t been indexed, it might mean that too many pages were created. It’s better to focus on fewer pages and make sure they’re good first; once they start getting indexed quickly, create more pages and grow the website step by step.

Changing URLs

Q. If website URLs are not configured according to the best practices, it’s not reasonable to change them unless there is a bigger revamp to be done

  • (39:17) When a large portion of URLs across a website is changed, it can create fluctuations for a couple of months, and by the end of that period the results will most likely be the same as before – so you get a period when everything is worse than before, and then it goes back to just the way it was. However, if a bigger revamp is coming that will make things worse and confusing for a while anyway, then it is worth cleaning up the URLs at the same time. And if a completely new website is being created, it’s good to make sure the URLs follow best practices from the very beginning.

Homepage Ranks Higher Than Individual Related Pages

Q. When the homepage ranks higher for certain queries than the website pages that fit the query better, it’s good to help Google understand the relative importance of the individual page

  • (41:15) If a website’s homepage ranks higher for certain queries than the pages that actually match those queries better, John argues, it’s not always a problem – the homepage ranking high at least means the SEO side of the website is already quite good. However, it does show one of the limitations of Google’s systems: Google can’t always work out the relative importance of individual pages. What most likely happens is that the homepage is a strong page that includes the keywords from the search query, and while the individual pages get recognition as well, the system doesn’t understand that they’re a better match for the query or that they’re more important. Internal linking, better quality across the pages, and more focused information on those pages can help improve that.

Passage Ranking

Q. Passage ranking is different from the automatic jump to the part of the text relevant to the search query after clicking on a result in SERPs

  • (43:20) With passage ranking, Google tries to understand a longer page that has several unique parts within it and rank the page as a whole (not pull out a sentence and rank or show it differently). A really long – often not super SEO-optimised – page might cover several intents, and if Google recognises that a certain part of the page is relevant to a search query, it ranks the whole page in the SERPs, even if a lot of the page is irrelevant to that query. So passage ranking, which is purely about ranking, should not be confused with pointing the user at a specific part of the page. Separately, Google also tries to understand anchors within a page: a hash fragment at the end of a URL can point to a different part of the page, and Google tries to understand that and include it in the sitelinks when it shows the page. And sometimes Google takes the text it shows in a featured snippet and links directly to it on the page using a newer HTML feature that lets it append something to the URL so the browser jumps to that specific text segment (illustrated below). Passage ranking and jumping to a specific part of a page are different things.
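The “newer HTML feature” John mentions appears to be the Scroll to Text Fragment syntax; together with ordinary fragment anchors, a quick illustration (IDs and URLs invented):

```html
<!-- Ordinary fragment: an in-page anchor that Google can surface as a
     sitelink, linked as https://example.com/guide#pricing -->
<h2 id="pricing">Pricing</h2>

<!-- Text fragment: no anchor id needed; the browser scrolls to and
     highlights the quoted text. This is how a featured snippet can link
     straight to a passage:
     https://example.com/guide#:~:text=the%20exact%20sentence%20shown -->
```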

Q. Dynamic menu and related posts work well for both users and crawlers.

  • (53:35) Creating a dynamic menu that adapts to a user’s actions doesn’t create a problem for Google’s crawlers, as everything is static on Google’s side and it can understand the connections between the links. However, if something within the navigation depends on how the user navigates through the website, and that needs to be reflected in search, it gets trickier: since Google crawls without cookies, it can’t keep that trail.
    Linking related content on a content page is also a good approach, since it works well for users and gives Google more context when it crawls the page (see the sketch below).
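A minimal sketch of such a related-content block – plain, crawlable HTML links with invented URLs:

```html
<!-- Static related-content links: useful for readers and crawlable
     context for Google about what the page covers. -->
<aside>
  <h2>Related posts</h2>
  <ul>
    <li><a href="/blog/core-web-vitals-basics">Core Web Vitals basics</a></li>
    <li><a href="/blog/internal-linking-guide">Internal linking guide</a></li>
  </ul>
</aside>
```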

Flexible Sampling

Q. Flexible Sampling lets Google index the whole page even when some of its content is gated

  • (57:34) Flexible Sampling allows website owners to use structured data markup on gated pages to let Google know which parts of the page are gated and which are not. These pages can then be dynamically served to Googlebot slightly differently than they would be served to an average user: when something like the URL Inspection tool is used, the whole content together with the markup can be seen, and the whole page is indexed with that markup, while a user going to that page sees the gated or limited-access set-up.
    This feature is documented by Google as subscription and paywalled content markup, and the different types of gated content need to be identified with CSS selectors (see the sketch below).
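A sketch following Google’s documented paywalled-content markup – the class name is a placeholder:

```html
<!-- JSON-LD telling Google which part of the page is gated: -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Example gated article",
  "isAccessibleForFree": "False",
  "hasPart": {
    "@type": "WebPageElement",
    "isAccessibleForFree": "False",
    "cssSelector": ".paywalled-section"
  }
}
</script>

<!-- The gated section, served in full to Googlebot but limited for users: -->
<div class="paywalled-section">
  <!-- full article text here -->
</div>
```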

Sign up for our Webmaster Hangouts today!


SEO Glossary – Popular SEO Terms & Definitions

Let’s face it, SEO is full of cryptic, unusual words and terms that might confuse anyone entering the field. But just like in any other fast-developing profession, in SEO it is important to communicate in the language of SEO analysts. You don’t need to memorise an exhaustive list of all the field-related words in one sitting, though – to begin with, it’s enough to know the very basics. Here’s our guide to the most common SEO words and terms to help you get started!

Advanced search operators – additional features and commands that can be typed into the search bar to make a query more specific.

Anchor text – a piece of text that links to another page.

Backlinks (or “inbound links”) – links on a page of one website that point to a page on another website.

Black Hat SEO – SEO practices that violate Google’s guidelines.

Bots (also known as “crawlers” or “spiders”) – programs that crawl the Internet and find content so that it can be assessed and placed in the search results.

Cache – a saved static version of a web page, served so that the website’s database doesn’t have to be accessed for every request, avoiding time-consuming operations.

Channel – different types of media and means to attract attention and traffic, such as organic search, email marketing, social media.

Cloaking – displaying content to search engine bots differently than to real users of the website.

Crawling – the process of search engine bots finding and processing pages on your website. It’s the first step towards your pages showing up on the search engine results page.

Crawl budget – the number of pages a search engine bot will crawl on a website in a certain period of time.

CSS (Cascading Style Sheets) – a stylesheet language that controls a website’s presentation, mainly in terms of fonts, colours and layout.

CTR (clickthrough rate) – the ratio of clicks on your ad to impressions the ad got. 

De-indexing – the removal of a page or group of pages from a search engine’s index.

DA (Domain Authority) – a score, typically seen in the Moz tool, that predicts how well a domain is likely to rank in search results.

Duplicate Content – large pieces of content that are shared across different domains or different pages of a single domain. Duplicate content can hurt a website’s ranking.

Engagement – a measure of searchers’ extent of interaction with a website from search results.

Featured snippets (often referred to as “Zero Position”) – informative answer boxes that appear at the very top of search results for some queries.

Google My Business listing – a listing that appears at the top of Google results when a customer searches for a business’s name or services, giving potential customers all of the company’s contact info.

Google Search Console – a program developed by Google that allows site owners to track how their website is doing in SERPs.

Hreflang – an HTML attribute that tells Google which language (and, optionally, which region) a page targets, so that users find the version of the website in their language.

HTML (HyperText Markup Language) – the set of codes used to communicate how to display texts and images on a website.

Image Carousels – scrollable images that appear at the top of some SERPs.

Indexing – organising and storing content found during crawling.

Index Coverage report – a report that shows the indexation status of all the URLs on a website.

JavaScript – a programming language used to add dynamic, interactive elements to web pages.

KD (Keyword Difficulty) – an estimation (typically out of 100) of how hard it would be to rank high for a certain keyword.

Keyword stuffing – a black hat SEO practice that involves the overuse of important keywords in content and links to try and rank for these words.

Lazy Loading – a way of optimising page load by delaying the loading of certain parts of the page or objects until they’re actually needed.

Local pack – a listing of three local businesses that appear for local-intent searches, like the ones that typically include “near me” in the search query.

Long-tail keywords – keywords that contain more than three words and are more specific than short keywords. For example, “cotton summer dresses UK buy online” as compared to the short-tail “cotton summer dresses”.

Organic – placement in search results obtained without paid advertising.

Private Blog Network – an artificial network of websites and content created to generate fake backlinks and trick Google. This is, unfortunately, a common practice among many agencies and link providers, and it can jeopardise revenue and rankings in the long term.

People Also Ask boxes – an element in some SERPs that shows questions related to the search query.

Pruning – the process of taking down low-quality pages to improve the website’s overall quality.

Scroll depth – a way of measuring how far visitors scroll down the website page.

Search Volume – an estimation of how many times a keyword was searched during the last month.

Sitemap – a list of page URLs that exist on your website.

Personalisation – a search engine feature that customises the results a user gets for a query based on their location and previous search history.

Redirection – a way of sending search engines and users to a URL different from the one originally requested.

Rendering – an automatic process of turning code into a viewable, usable website page. 

SERP features – search results that appear in a non-standard format, for example Zero Position, Image Carousels, People Also Ask boxes, AdWords ads, etc.

SERPs – short for “search engine results page” – the page with relevant info and links that appears as a response to the user’s query.

SSL certificate (SSL – Secure Sockets Layer) – a certificate that enables encrypted connection between a web browser and a web server. It makes online transactions more secure for customers.

User Intent – the kind of results on SERP users want or expect to see when typing their queries.

Webmaster guidelines – information provided by search engines like Google and Bing on how site owners can optimise their websites and create content that is found and indexed appropriately so that the content does well in search results.

White hat SEO – SEO practices that comply with Google’s guidelines.

Puzzled about SEO terms? Why not get in touch?

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH