November 2021 - Premium eCommerce marketing services

WebMaster Hangout – Live from November 05, 2021

Core Updates

Q. Core Updates are more about a website’s relevance than its technical issues

  • (00:43) Core Updates are mostly about figuring out the overall relevance of a site, and less about things like spammy links, 404 pages and other technical issues. Having those wouldn’t really affect Core Updates; it’s more about relevance and overall quality.

Indexing On Page Videos

Q. Google doesn’t always automatically pick up videos that are lazy-loaded behind a facade, but there are other ways to get those videos indexed

  • (05:10) With lazy loading behind a facade – where a user clicks an image or div and the video is then loaded in the background – it can happen that Google doesn’t automatically pick the video up when it views the page. John says he got feedback from the Video Search Team that this method is not advisable. The best approach there is to at least make sure, through structured data, that Google can tell there’s still a video on the page. There is a type of structured data specifically for videos that can be added; a video sitemap is essentially very similar in that regard, in that the website owner tells Google there is a video on the page (a markup sketch follows below). John hopes that over time the YouTube embed will get better and faster, so these kinds of workarounds become less necessary.
    Marking up content that isn’t visible on the page is not an issue here, and Google doesn’t perceive it as misleading as long as there’s actually a video on the page. The point of structured data is to help Google pick up the video when the way it is embedded would otherwise prevent Google from picking it up automatically.
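    To make that advice concrete, here is a minimal, hypothetical sketch of VideoObject structured data for a facade-embedded video – the URLs, title, date and duration are placeholders, and the exact properties should be checked against Google’s video structured data documentation:
      <script type="application/ld+json">
      {
        "@context": "https://schema.org",
        "@type": "VideoObject",
        "name": "Product walkthrough",
        "description": "A short walkthrough of the product shown on this page.",
        "thumbnailUrl": "https://www.example.com/images/walkthrough-thumb.jpg",
        "uploadDate": "2021-11-01",
        "duration": "PT2M30S",
        "embedUrl": "https://www.youtube.com/embed/VIDEO_ID"
      }
      </script>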

Discover

Q. Discover is a very personalised Google feature, and ranking there is different from ranking in SERPs

  • (18:31) John says that there is probably a sense of ranking in Google Discover, but he doesn’t think it’s the same as traditional web ranking, because Discover is very personalised. It’s not something where the traditional notion of ranking and assessment would make sense. Internally within the product there is some attempt to figure out what is most important or most relevant to a user browsing Discover, but John doesn’t think any of that is exposed externally.
    It’s basically a feed, and the way to think about it is that it keeps going, so it’s a kind of personal ranking that only involves a user’s individual interests.
    Lots of things go into even that personalised ranking, and there are also aspects like geo-targeting and different content formats – more or less video, more or fewer images – affecting it. The best way to handle this is to follow the recommendations published by Google. John also suggests searching on Twitter among the handful of people who almost specialise in Discover – they have some really great ideas and write blog posts about what they’ve seen and the kind of content that works well on Discover. However, John still says that, from his point of view, Discover is such a personalised feed that it’s not something one can work to improve rankings in, because there is no keyword that people are searching for.

301 Redirects

Q. Google doesn’t treat 301 redirects the same way browsers do

  • (22:23) The person asking the question wants to use 301 redirects in order to pass PageRank in the best and fastest way possible, but their dev team is reluctant to implement 301s because browsers cache them for a very long time; with a misconfigured redirect, users might never get rid of an incorrect 301. He wonders whether Google treats redirects the same way a browser does. John says that the whole crawling and indexing system is essentially different from browsers, in the sense that the network side of things is optimised for different goals. In a browser it makes sense to cache things for longer, but on the crawling and indexing side Google has different things to optimise for, so it doesn’t treat redirects the same way a browser does. Google renders pages like a browser, but the whole process of getting content into its systems is very different.

Image Landing Page

Q. Having a unique image landing page is useful for image search

  • (25:06) For those who care about image search, it’s useful to have a separate image landing page. A clean landing page – where a user who opens the URL lands on a page that has the image front and centre, perhaps with some additional information about the image on the side – is very useful, because that is something Google’s systems can recognise as a good image landing page. Whether to generate that with JavaScript or with static HTML on the back end is up to the website owner.
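    As a rough, hypothetical sketch (the file names, heading and caption are placeholders, not something John prescribed), such a landing page might look like this:
      <!DOCTYPE html>
      <html lang="en">
        <head>
          <title>Red ceramic vase – product photo</title>
          <meta name="description" content="Studio photo of the red ceramic vase.">
        </head>
        <body>
          <h1>Red ceramic vase</h1>
          <!-- The image is the main, front-and-centre element of the page -->
          <img src="/images/red-ceramic-vase.jpg" alt="Red ceramic vase on a white background" width="1200" height="800">
          <!-- Supporting information about the image sits alongside it -->
          <p>Hand-thrown ceramic vase photographed in natural light. Dimensions: 30 cm x 12 cm.</p>
        </body>
      </html>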

Noindex Pages, Crawlability

Q. The number of noindex pages doesn’t affect the crawlability of the website

  • (32:47) If a website owner chooses to noindex pages, that doesn’t affect how Google crawls the rest of the website. The one exception is that, for Google to see a noindex, it has to crawl that page first. So, for example, if there are millions of pages, 90 percent of them are noindexed and only a hundred are indexable, Google has to crawl the whole website to discover those hundred pages, and it would obviously get bogged down crawling millions of pages. But with a normal ratio of indexable to non-indexable pages, where Google can find the indexable pages quickly and there are some non-indexable pages on the edge, there shouldn’t be an issue.

302 Redirects

Q. There are no negative SEO effects from 302 redirects

  • (34:22) There are no negative SEO effects from 302 redirects. John highlights that the idea of losing PageRank when using 302 redirects is false. The issue comes up every now and then, and he thinks the main reason is that 302 redirects are by definition different: with a 301 redirect the address is changed and the site owner wants Google’s systems to pick up the destination page, whereas with a 302 redirect the address changes but Google is asked to keep the original URL while the content is temporarily somewhere else. So if one is purely tracking rankings of individual URLs, a 301 will cause the destination page to be indexed and ranking, and a 302 will keep the original page indexed and ranking. But there’s no loss of PageRank or of any signals. It’s purely a question of which of the two URLs is actually indexed and shown in search. Sometimes 302 redirects are the right thing to do, sometimes 301 redirects are. If Google sees 302 redirects in place for a longer period of time and thinks the move is probably not temporary, it will treat them as 301 redirects as well. But there are definitely no hidden SEO benefits to using 301 redirects versus 302 redirects – they’re just different things.

Publisher Center and WebP Images

Q. Google image processing systems support WebP format

  • (37:46) Google’s image processing systems support WebP images, and Google essentially uses the same image processing across the different parts of search. If it seems like some kind of image is not being shown in the Publisher Center, John suggests it could be that the preview in Publisher Center is not a representation of what Google actually shows in search. A simple way to double-check would be to see how these pages show up in search directly; if they look okay there, it’s just a bug in Publisher Center.

Unique Products with Slight Variations

Q. When a unique product has slight variations with the same content on every page, it’s better to canonicalise most of those pages

  • (43:10) The person asking the question is worried that canonicalising too many product pages and leaving, for example, only two out of ten indexable would “thin out” the value of the category page. However, John says that the number of products on a category page is not a ranking factor, so from that point of view it’s not problematic. Also, even if only two of the pages linked from a category page are indexable, things like the thumbnails, product descriptions, etc. are still listed on the category page. So having a category page with ten products of which only two are indexable is not a problem.

Changing Canonicals

Q. It’s okay to change the canonical to another product if the original canonical product page is out of stock

  • (45:43) Canonicals can be changed over time. The only thing that could happen is that it takes a while for Google’s systems to recognise the change, because the canonical is being changed and Google generally tries to keep canonicals stable. The situation to avoid in particular is one where Google fluctuates between two URLs as canonicals because the signals are similar, so there will probably be some latency involved in switching over.
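    As a small, hypothetical illustration (the URLs are placeholders, not from the hangout), an out-of-stock variant pointing its canonical at another variant would carry a tag like this in its <head>:
      <!-- On https://www.example.com/product/widget-blue (out of stock) -->
      <link rel="canonical" href="https://www.example.com/product/widget-red">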

Spammy Backlinks

Q. Even if Google Alerts reports spammy backlinks to a website, Google still recognises them as spam and doesn’t index them

  • (49:55) John says that, based on his observations, Google Alerts essentially tries to find content as quickly as possible and alert the website owner of it, and the assumption is that it picks up things before Google does its complete spam filtering for search. So if these spammy links are not being indexed and don’t show up in other tools, John suggests simply ignoring them.

Ads on a Page

Q. Too many ads on a page can affect user experience in such a way that the website doesn’t really surface anywhere

  • (57:50) The person asking the question talks about a news website that looks good but has too many ads on its pages and doesn’t surface anywhere. He wonders if the overabundance of ads might cause such low visibility, even though visibility is usually affected by many different factors at once. John says that while it’s hard to say for sure, it could have an effect, maybe even a visible one. In particular, within the page experience algorithm there is a notion of above-the-fold content, and if all of that is ads, it’s hard for Google’s systems to recognise the useful content. That might matter especially for news content, where topics are very commoditised and different outlets report on the same issue. That could push Google’s systems over the edge, and if it happens across the site at a bigger scale, there might be an effect on the website. Another participant of the hangout adds that heavy ad loads can also hurt loading speed and contribute to poor user experience from that side too.

WebMaster Hangout – Live from October 29, 2021

Landing Page, Hreflang

Q. If you have a doorway page offering different country options, set it up as the x-default (as part of your hreflang tags) so Google uses that URL where there isn’t a geo-targeted landing page

  • (00:40) Some websites that have multiple versions for different regions and languages run into a problem where the landing page, from which Google is supposed to redirect users according to their geographic and language settings, gets picked up as the main landing page. John suggests that hreflang is the right approach in these situations, along with making sure the default landing page is set as the x-default. The idea is that Google then understands it is part of the set of pages; if an x-default isn’t specified – the other pages having real content and this one being more like a doorway – Google will treat it as a separate page. Google in a way sees a situation where it could show either a content page or the global home page, and it might show the latter. So the x-default annotation goes on the country-selector page, the default page that applies if none of the specific country versions apply (a minimal hreflang sketch follows below).
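    A minimal sketch of that annotation set, assuming hypothetical example URLs – the same group of tags would need to appear on every page in the set:
      <link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/">
      <link rel="alternate" hreflang="en-us" href="https://www.example.com/us/">
      <link rel="alternate" hreflang="de-de" href="https://www.example.com/de/">
      <!-- The country-selector / doorway page is the fallback for everyone else -->
      <link rel="alternate" hreflang="x-default" href="https://www.example.com/">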

Noindex Pages, Google’s Evaluation

Q. Google doesn’t take noindexed pages into account when evaluating a website

  • (04:39) Google doesn’t take noindexed pages into account. It really focuses on the content it has indexed for a website, and that is the basis it has with regards to all of its quality updates. Google doesn’t show noindexed pages in search and doesn’t use them to promise anything to users who are searching, so from Google’s point of view it’s up to the website owner what they want to do with those pages. The other point is that if Google doesn’t have these pages indexed and has no data for them, it can’t aggregate any of that data for its systems across the website. From that point of view, if pages are noindexed, Google doesn’t take them into account.

Country Code Top Level Domain

Q. Country Code Top Level Domain does play a role in rankings

  • (09:19) The country code top-level domain is used as a factor in geotargeting: if someone is looking for something local and Google knows the website is focused on that local market, it will try to promote that website in the search results, and it uses the top-level domain for this if it’s a country code top-level domain. If it’s not, Google checks the Search Console settings to see whether a country is specified there for international targeting. So if the website’s top-level domain is generic, John advises focusing on a specific country by setting that up in Search Console. Google uses this for queries where it can tell the user is looking for something local. For example, someone searching for a washing machine repair manual probably isn’t looking for something local, whereas someone searching for washing machine repair probably is. So it makes sense to look at the website and decide whether it’s worth targeting these local queries, or whether to cover a broader range of people searching globally.

Google Update on Titles

Q. Google’s rewriting of titles happens on a per-page basis, is purely algorithmic, and makes it easier to test different title approaches

  • (12:43) One of the big changes regarding titles is that they are no longer tied to the individual query – title selection is now on a per-page basis. On one hand, that means titles don’t adapt dynamically, so it’s a little bit easier to test. On the other hand, it also means it’s easier for website owners to try different things out: they can change things on their pages, submit them through the indexing tool, and then see what it looks like in the Google search results. Because of that, John suggests trying different approaches. When there are strange or messy titles on the pages, try a different approach and see what works for that type of content; based on that, it’s easier to expand the approach to the rest of the website.
    It’s not the case that Google has any manual list deciding how to display a title – it’s all algorithmic.

Title Tags

Q. Although titles play a minor role in ranking, it’s more about what’s on the page

  • (15:37) Google uses titles as a tiny factor in rankings. That’s why John says that, although it’s better not to write titles that are irrelevant to what’s on the page, it’s not a critical issue if the title Google shows in the search results doesn’t match the page. From Google’s perspective that’s perfectly fine, and it uses what is on the page when it comes to search. Other things, like the company name and the choice of separators, are more a matter of personal taste and decoration. The only consideration is that users like to understand the bigger picture of where a page fits, so sometimes it makes sense to show the company or brand name in the title (the titles shown in the search results are now called title links).

Disavow Tool

Q. The disavow tool is purely a technical tool – there isn’t any kind of penalty or black flag associated with using it

  • (17:59) The disavow tool can be used whenever there are links pointing at the website that the owner doesn’t want Google to take into account; using it doesn’t tell Google that the owner created those links. There isn’t any kind of penalty, black flag or mark associated with the disavow tool – it’s just a technical tool that helps manage the external associations of the website.
    With regards to Google Search, in most cases if random links are coming to the website there is no need to use the disavow tool. But if there is something the website owner knows they definitely didn’t do, and they think that someone from Google manually looking at the site might assume they did, then it can make sense to use the disavow tool (see the file-format sketch below). From that point of view, using it doesn’t mean the owner is admitting to playing link games in the past; for Google it’s purely technical.
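    For reference, the disavow file itself is just a plain text file uploaded through the disavow links tool in Search Console; the domains and URLs below are hypothetical placeholders:
      # Lines starting with "#" are comments and are ignored
      # Disavow every link from an entire domain
      domain:spammy-links.example
      # Disavow a single linking page
      https://another-site.example/some-spammy-page.html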

Manual Action

Q. Once a manual action is resolved, the website is back to being treated like any other website; Google doesn’t hold past manual actions against a site

  • (19:57) John explains that, in general, once a manual action on a website is resolved and the issue is cleaned up, Google treats the website as it would treat any other website. There is no memory in the system that remembers a manual action took place at some point and regards the website as shady in the future.
    For some kinds of issues it does take a little longer for things to settle down, simply because Google has to reprocess everything associated with the website, and that takes time. But that doesn’t mean there is some kind of grudge in the algorithms holding everything back.

Same Content in Different Languages

Q. Same content in different languages isn’t perceived as duplicate content by Google, but there are still things to double-check for a website run in different languages

  • (22:12) Anything that is translated is treated as completely different content; it’s definitely not something Google would call duplicate just because it’s a translated version of a piece of content. From Google’s point of view, duplicate content is really about the words and everything matching – in cases like that, it might pick one of the pages to show and not show the other. But if pages are translated, they’re completely different pages. The ideal configuration here is to use hreflang between these pages on a per-page basis, to make sure users searching in one language don’t land on the version in the wrong language (see the sketch below). Whether that’s needed can be checked in the Search Console Performance report by looking at the queries that reach the website, especially the top queries: by estimating which language the queries are in and looking at the pages that were shown in the search results or visited from there, it can be seen whether Google shows the right pages. If Google already shows the right pages, there is no need to set up hreflang; if it shows the wrong pages in the search results, then hreflang annotations would definitely help.
    This is usually an issue when people search for generic queries like a company name, because based on that, Google might not know which language the user is searching in and might show the wrong page.
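    As a brief sketch of that per-page setup (with hypothetical URLs): each language version references itself and every other version, and the annotations must be reciprocal. The English article would carry
      <link rel="alternate" hreflang="en" href="https://www.example.com/en/blue-widgets/">
      <link rel="alternate" hreflang="de" href="https://www.example.com/de/blaue-widgets/">
    and the German article would carry exactly the same pair of tags.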

Copying Content

Q. Different factors come into play when deciding whether and how to act on content copied from another website

  • (28:34) Some websites don’t care about things like copyright; they take content from other people and republish it, and how to handle that is nuanced and involves a lot of considerations.
    The first thing for a site owner who sees their content has been copied is to consider whether this is actually a critical issue for their website right now. If it is, John advises looking at the legal options available, for example a DMCA request.
    There are other things that come into play when content gets copied. Sometimes copies are relevant in the sense that they’re not a pure one-to-one copy: someone takes a section of a page, writes about that content, and might be creating something bigger and newer. That can often be seen with Google’s blog posts, for example – other sites take a blog post and include the whole post or large sections of it, but they also add lots of commentary and try to explain what Google actually means or what is being said between the lines. On one hand they’re copying Google’s content, but on the other hand they’re creating something useful, and they would appear in the search results too, providing a slightly different value than the original.
    The person asking the question wondered whether Google takes into account when the content was indexed and recognises that the original was there earlier. John, however, points to situations in the past where spammers or scrapers managed to get content indexed almost faster than the original source. If Google focused purely on that factor, it could accidentally favour those who are technically better at publishing content and pushing it into Google over those who publish their content naturally.
    Therefore, Google tries to look at the bigger picture for a lot of things when it comes to websites and if it sees that a website is regularly copying content from other sources, then it’s a lot easier for it to understand that the website isn’t providing a lot of unique value on its own and Google will treat it appropriately.
    The last thing to mention is that in the case that another website is copying content and it really causes problems, spam reports can be submitted to Google to let them know about these kinds of issues.

Social Media Presence and SEO

Q. Social media presence doesn’t affect the SEO side of a website, except insofar as the social media page is itself a normal web page

  • (34:54) For the most part, Google doesn’t take social media activity into account when it comes to rankings. The one exception is that Google sometimes sees social media pages as normal web pages: if they have actual content on them, with links to other pages, Google can treat them like any other kind of web page. For example, if someone has a social media profile that links to individual pages of their website, Google can see that profile as a normal web page, and if those links are normal HTML links it can follow, it will treat them as such. If the profile page is a normal HTML page, it can also be indexed and rank in the search results like anything else.
    So it’s not that Google does anything special for social media sites or profiles; rather, in many cases these profiles and pages are normal HTML pages, and Google can process them like any other HTML page. Google wouldn’t look at a profile, see that it has many likes, and therefore rank the pages associated with it higher. It’s about the page being an HTML page with some content, perhaps associated with other HTML pages and linked together; based on that, Google gets a better understanding of the group of pages. Those pages can rank individually, but not based on social media metrics.

Penguin Penalty

Q. For Google to lose trust in a website, it takes a strong pattern of spammy links rather than a few individual links

  • (37:06) For the most part, Google can recognise when something is problematic and when spammy links can’t simply be ignored or isolated. If there is a strong pattern of spammy links across a website, the algorithms can lose trust in that website, and if, based on the bigger picture on the web, Google has to take an almost conservative stance when it comes to understanding the website’s content and ranking it in the search results, there can be a drop in visibility. But the web is pretty messy, and Google recognises that it has to ignore a lot of the links out there.

Zero Good URLs

Q. If Google doesn’t have data on a website’s Core Web Vitals, it can’t take them into account for ranking

  • (45:50) When the “0 good URLs” problem occurs, a couple of things can be at play. Google doesn’t have data for all websites – especially for Core Web Vitals, which rely on field data. Field data is what people actually experience when using the website, reported back through Chrome on mobile and similar sources, so Google needs a certain amount of data before it can understand what the individual metrics mean for a particular website. When there is no data at all in Search Console for the individual Core Web Vitals metrics, it usually means there isn’t enough data at the moment, and from a ranking point of view Google can’t really take Core Web Vitals into account. That could be the reason for the “0 good URLs” issue – Google simply has zero URLs that it is tracking for Core Web Vitals for this particular website at the moment.

Web Stories

Q. For a page to appear in the Web Stories, it has to be integrated within the website as a normal HTML page and have some amount of textual information

  • (47:54) When it comes to appearing as Web Stories, there are two aspects to consider. On the one hand, Web Stories are normal pages – they can appear in the normal search results. From a technical point of view they’re built on AMP, but they’re normal HTML pages. That also means they can be linked normally within the website, which is critical for Google to understand that they are part of the website and perhaps an important part of it. To show they’re important, they need to be linked in an important way, for example from the home page or from other pages that are very important for the website.
    The other aspect is that, since these are normal HTML pages, Google needs to find some text on them that can be used to rank them. That is tricky with Web Stories in particular because they’re very visual in nature, and it’s tempting to show only a video or a large image. When that is done without also providing textual content, there is very little Google can use to rank these pages.
    So the pages have to be integrated within the website like a normal HTML page and also carry some amount of textual content so that they can be ranked for queries (see the sketch below).
    John suggests checking out Google Creators channel and blog – there is a lot of content on Web Stories and guides for optimising Web Stories for SEO.
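    As a rough, hypothetical sketch of a Web Story body (the required AMP boilerplate and the amp-story script in the <head> are omitted for brevity, and all text, image paths and attribute values are placeholders – Google’s Web Stories documentation has the authoritative markup):
      <body>
        <amp-story standalone
                   title="Five quick packing tips"
                   publisher="Example Travel Blog"
                   publisher-logo-src="/images/logo.png"
                   poster-portrait-src="/images/cover-portrait.jpg">
          <amp-story-page id="tip-1">
            <amp-story-grid-layer template="fill">
              <amp-img src="/images/suitcase.jpg" width="720" height="1280" layout="responsive" alt="An open suitcase"></amp-img>
            </amp-story-grid-layer>
            <!-- The textual layer gives Google something to rank the story for -->
            <amp-story-grid-layer template="vertical">
              <h1>Roll, don't fold</h1>
              <p>Rolling clothes saves space and keeps them wrinkle-free.</p>
            </amp-story-grid-layer>
          </amp-story-page>
        </amp-story>
      </body>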
