February 2022

WebMaster Hangout – Live from February 04, 2022

Often changing title tags

Q. The problem with changing title tags very often is that Google may not recrawl the pages often enough to pick up every change.

  • (01:02) The person asking the question describes his website in the mutual fund industry, where the title tag needs to change every day to reflect the current fund price. John says Google wouldn’t give the page any special weight just because the title tag keeps changing. The practical issue is that if the titles are changed daily, Google might not re-crawl that page daily. So it might be the case that the titles are changed every day, but the title Google shows in the search results is a few days old, simply because that’s the last version it picked up from the page. It’s more of a practical effect than a strategic effect, as the hypothetical snippet below illustrates.
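As a purely hypothetical illustration of the situation described above (the fund name and figures are invented), the page’s title element might change like this from one day to the next, while the title shown in search results can lag behind until Google happens to recrawl the URL:

```html
<!-- Day 1 (hypothetical fund and NAV figure) -->
<title>Example Balanced Fund – NAV 101.35 (3 Feb 2022)</title>

<!-- Day 2: the page is updated, but search results may keep showing the
     Day 1 title until Google recrawls this URL -->
<title>Example Balanced Fund – NAV 102.10 (4 Feb 2022)</title>
```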

Reusing a domain

Q. There’s no strict time frame within which Google catches up with a domain being reused for different purposes.

  • (04:04) Google catching up with the fact that a domain is being used for a different purpose is something that happens over time organically. If an existing domain is being reused, and there is new content that is different from the one before, then over time, Google will learn that it’s a new website and treat it accordingly. But there’s no specific time frame for that. 
  • There are two things to watch out for in situations like this. The first one is whether the website was involved in some shady practices before, like, for example, around external links. That might be something that needs to be cleaned up.
  • The other aspect is if there’s any webspam manual action, then, that’s something that needs to be cleaned up, so that the website starts from a clean slate. 
  • It’s never going to be a completely clean slate if something else was already hosted on that domain before, but at least it puts the website in a reasonable state where it doesn’t have to drag that baggage along.

Robots.txt and traffic

Q. A drop in traffic is not necessarily linked to technical failures, such as a robots.txt failure.

  • (10:20) The person asking the question is concerned with lower traffic on his website and links it to several days of robots.txt failure and connectivity issues. John says there are two things to watch out for in situations like this. On the one hand, if there are server connectivity issues, Google wouldn’t see it as a quality problem. So it wouldn’t be that the ranking for the pages would drop. That’s the first step. So if there’s a drop in the ranking of the pages, then that would not be from the technical issue.
  • On the other hand, what does happen with these kinds of server connectivity problems is that if Google can’t reach the robots.txt file for a while, then it will assume that it can’t crawl anything on the website, and that can result in some of the pages from the website being dropped from the index (see the sketch after this list). That’s a simple way to figure out whether it’s a technical problem or not: are the pages gone from the index? If so, it’s probably a technical problem. If pages are gone because of a technical problem, then usually Google will retry those missing pages after a couple of days and try to index them again.
  • If the problem happened a while ago, steps were taken to fix it, and the problem keeps recurring, it is worth double-checking the Crawl Errors section in Search Console to see if there is still a technical issue where Googlebot is sometimes blocked.
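As a rough sketch of the distinction described above, the file below is an ordinary robots.txt whose content blocks nothing important; the issue in this scenario is not what the file says but whether Googlebot can fetch it at all. If requests for it fail for a while because of server connectivity problems, Google assumes it cannot crawl anything on the site, which is what can make pages drop out of the index. (The paths and sitemap URL are invented for illustration.)

```
# Ordinary robots.txt – nothing here blocks important content.
User-agent: *
Disallow: /cart/

Sitemap: https://www.example.com/sitemap.xml

# If /robots.txt itself cannot be fetched for a while (connectivity or
# server errors), Google errs on the side of caution and temporarily
# behaves as if the whole site could not be crawled.
```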

Indexing the comment section

Q. It’s important to make sure that the way the comment section is technically handled on the page makes it easy for Google to index comments.

  • (16:47) John says that it is up to the website owner whether they want the comments to show in SERPs or not, but comments are essentially a technical element on the page. So it’s not that there’s a setting in Search Console to turn them on or off. Basically, there are different ways of integrating comments on web pages, and some of those ways are blocked from indexing while others are easy to index. So if the comments need to be indexed, it’s important to implement them in a way that’s easy to index, as the simplified example below shows. The Inspect URL tool in Search Console shows a little bit of what Google finds on the page, so it can be used to see whether Google can index the comments.
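As a simplified sketch of the difference described above (element names and the widget URL are invented): comments that are rendered into the page’s HTML can be indexed like any other on-page text, whereas comments injected by a third-party script may never be seen by Google, especially if the script or its data source is blocked from crawling.

```html
<!-- Variant 1: comments rendered directly in the HTML – easy to index -->
<section id="comments">
  <article class="comment">
    <p>Great overview, thanks for sharing!</p>
  </article>
</section>

<!-- Variant 2: comments injected client-side by a third-party widget.
     If the widget host (hypothetical URL below) is blocked by robots.txt
     or fails to render, Google may never see this content. -->
<div id="comment-widget"></div>
<script src="https://widget.example-comments.com/embed.js" async></script>
```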

URL not indexed

Q. If Google crawls the URL, it doesn’t automatically mean it will index it.

  • (21:10) The person asking the question is concerned that even though his URLs get crawled, he gets the “URL discovered, not indexed” or “URL crawled, not indexed” messages; he thinks that maybe the content is not good enough for Google to index it. John says it is an easy early assumption to make that “Google looked at it but decided not to index it”. Just because Google crawls something doesn’t necessarily mean it will automatically index it. John says these two categories of not indexed can be treated as a similar thing. It’s tricky because Google doesn’t index everything, so this can happen.

CDN

Q. Whenever a website moves to CDN or changes its current one, it affects crawling but doesn’t really affect rankings.

  • (26:28) From a ranking point of view, moving to a CDN or changing the current one wouldn’t change anything. If the hosting is changed significantly, what will happen on Google’s side is that the crawl rate will first move into a more conservative range, where Google crawls a little bit less because it saw a bigger change in hosting. Then, over time, probably a couple of weeks, maybe a month or so, Google will increase the crawl rate again to see where it settles down. Essentially, that drop in crawl rate for a move to a CDN, or a change of CDN, can be normal.
  • A drop in crawl rate doesn’t necessarily mean that there is a problem, because if Google was crawling two million pages of the website before, it’s unlikely that those two million pages would be changing every day. So it’s not necessarily the case that Google would miss all of the new content on the website. It would just try to prioritise again and figure out which of these pages it actually needs to re-crawl on a day-to-day basis. So just because the crawl rate drops, it’s not necessarily a sign for concern.
  • Other indicators, like a change in average response time, deserve more attention, because the crawl rate Google chooses is based on the average response time and on things like server errors. If the average response time goes up significantly, Google will stick to a lower crawl rate.

Changing the rendering and ranking

Q. There might be a couple of reasons why after changing the client-side rendering to a server-side rendering the website doesn’t recover from a drop in rankings.

  • (36:22) John says there might be two things at play whenever a website sees a drop in rankings. On the one hand, it might be that with the change of infrastructure, the website layout and structure have changed as well. That could include things like internal linking, maybe even the URLs that are findable on the website. Those kinds of things can affect ranking. The other possibility is that there were simply changes in ranking overall that happened to coincide with when the technical changes were made.

HREF tag

Q. The image itself doesn’t matter as much as the alt text, and alt text is treated the same way as anchor text associated directly with the link.

  • (40:33) With regard to the image itself, Google would probably not find a lot of value in it as an anchor. If there is alt text associated with the image, then Google treats that essentially the same as any anchor text associated with the link directly. So from Google’s point of view, the alt text is essentially converted into text on the page and treated the same way. It’s not that one or the other has more value, and the order doesn’t matter much; John says it probably doesn’t matter at all. It’s essentially just both on the page. However, one thing he advises against is removing the visible text purely on the reasoning that it carries no more weight than the alt text, because other search engines might not see it that way, and for accessibility reasons it often makes sense to have visible text as well.
  • So it’s not about stripping things down to a minimum, but rather knowing that there’s no loss in having both of them there, as in the sketch below.
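A minimal sketch of that equivalence (URLs and wording invented): for an image link the alt text plays the same role that anchor text plays for a plain text link, and keeping both on the page loses nothing.

```html
<!-- Image link: the alt text acts like the anchor text of the link -->
<a href="/guides/mutual-funds">
  <img src="/img/funds-guide-cover.jpg" alt="Beginner's guide to mutual funds">
</a>

<!-- Text link: the anchor text is treated essentially the same way -->
<a href="/guides/mutual-funds">Beginner's guide to mutual funds</a>
```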

Moving Domains

Q. There are a couple of things that can be done to ensure that moving from one domain to another takes the value of the old domain with it.

  • (42:04) There are two things related to moving from one domain to another. On the one hand, if there’s a move from one website to another, redirects are used to move things over, and tools such as the Change of Address tool in Search Console are used, then that helps Google understand that everything from the old domain should be forwarded to the new one.
  • The other aspect is on a per-page basis. Google also looks at canonicalisation, and for that it takes a number of different factors into account. On the one hand, redirects play a role, things like internal linking play a role, the rel="canonical" on the pages plays a role, but external links also play a role. So what could happen, probably in more edge cases, is that if Google sees a lot of external links going to the old URL, and maybe even some internal links going to the old URL, it might actually index the old URL instead of the new one. Because from Google’s point of view, it starts to look like the old URL is the right one to show, and the new one is maybe more of a temporary thing. Because of this, what John recommends for a migration from one domain to another is not only to set up the redirects and use the Change of Address tool, but also to go and find the larger websites that were linking to the previous domain and see if they can update those links to the new domain.
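As a hedged illustration of those per-page signals (the domains are invented): each old URL should 301-redirect to its equivalent on the new domain, and the new pages can carry a self-referencing rel="canonical" so that on-page signals also point at the new URLs rather than back at the old ones.

```html
<!-- Page on the new domain; the matching old URL, e.g.
     https://old-domain.example/blue-widgets, should 301-redirect here -->
<head>
  <link rel="canonical" href="https://new-domain.example/blue-widgets">
  <title>Blue Widgets</title>
</head>
```

Even with this in place, the point above still applies: if large external sites keep linking to the old domain, it is worth asking them to update those links so Google doesn’t keep treating the old URLs as the canonical ones.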

Robots.txt and indexing

Q. If pages blocked by robots.txt are still indexed, it is usually not necessary to put a noindex tag on them.

  • (44:25) If a URL is blocked by robots.txt, Google doesn’t see any of the meta tags on the page. It doesn’t see the rel="canonical" on the page because it doesn’t crawl the page at all. So for a rel="canonical" or a noindex on the page to be taken into account, the page needs to be crawlable (see the snippet after this list).
  • The other aspect is that these pages may often get indexed even if they’re blocked by robots.txt, but they’re indexed without any of the content because Google can’t crawl them. Usually, that means these pages don’t show up in the search results anyway. So if someone is searching for some kind of product that is sold on the website, Google is not going to dig around to see if there’s also a page blocked by robots.txt that would be relevant, because there are already good pages from the website that can be crawled and indexed normally that Google can show. On the other hand, if a site: query is done for that specific URL, then the URL might be seen in the search results without any content. So a lot of the time it is more of a theoretical problem than a practical one: theoretically, these URLs can get indexed without content, but in practice, they’re not going to cause any problems in search. And if they are showing up for practical queries on the website, then most of the time that’s more a sign that the rest of the website is really hard to understand.
  • So if someone searches for one of the product types, and Google shows one of these roboted kinds of categories or facet pages, then that would be a sign that the visible content on the website is not sufficient for Google to understand that the normal pages that could have been indexed are actually relevant here. 
  • The first step is to figure out whether normal users see these pages when they search normally. If they don’t see them, then that’s fine and it can be ignored. If they do see these pages when they search normally, then that’s a sign that it may be worth focusing on the rest of the website.
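A minimal sketch of the mechanics behind this: a noindex can only be honoured if Googlebot is allowed to fetch the page, so a page that should stay out of the index via noindex must not also be disallowed in robots.txt.

```html
<!-- This page must NOT be disallowed in robots.txt; if Googlebot cannot
     fetch it, it never sees the noindex and may index the bare URL
     without content. -->
<head>
  <meta name="robots" content="noindex">
</head>
```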

Google Favicon

Q. Google favicon picks up homepage redirects.

  • (47:04) If the homepage is redirected, or if the favicon file is redirected to a different part of the website, Google should be able to pick that up. Practically, what would happen is that Google follows that redirect but probably still indexes it as the homepage anyway. So from a practical point of view, if the name of the website is searched for, Google would probably show the root URL even though it redirects to a lower-level page.
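For context, the favicon itself is referenced with a standard link element in the page head; the point above is that even if this file or the homepage redirects elsewhere, Google should follow the redirect and still associate the icon with the root URL.

```html
<!-- Standard favicon reference; the file it points to may itself redirect -->
<link rel="icon" href="/favicon.ico">
```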

Product Review Images

Q. To stand out in terms of images in product reviews, using original photos is ideal.

  • (52:49) The person asking the question wonders whether, to stand out in terms of product review images, it is okay to use photoshopped versions of images found online, or whether it is better to upload original photos. John says that Google’s guidelines for reviews recommend focusing on unique photos taken of the products rather than artificial review photos. He doesn’t think Google’s systems would automatically recognise that, but it’s something Google would probably look at, at least on a manual basis, from time to time.

Sign up for our Webmaster Hangouts today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH 

WebMaster Hangout – Live from January 28, 2022

Paid Links

Q. Google does not decide whether a link is a paid link based solely on someone reporting it as one.

  • (00:42) Google takes a lot of different things into account when deciding whether the link is a paid link. It does not give every link that it finds full weight. So even if it is not sure, something can be somewhere in between, but it is a number of things that it takes into account there. It’s not just someone reporting this as a paid link, because random people on the internet report lots of things that aren’t necessarily true. At the same time, sometimes it’s useful information. So it’s a lot of things that come together with regards to paid links.

Internal Link Placement

Q. The placement of the internal link doesn’t really matter, but the placement of the content itself matters.

  • (02:15) For internal links, on the one hand, Google uses them to understand the context better, so things like anchor text help it, but another really important part is simply being able to crawl the website. For that, it doesn’t matter where a link is on a page. Sometimes links are in the footer, sometimes in the header, sometimes in a shared menu, in a sidebar or within the body of content. All of those placements are fine from Google’s point of view. Where Google does differentiate more by location on a page is with the content itself, to figure out what is really relevant for that particular page. For that, it sometimes makes sense to focus on the central part of the page, the primary piece of content that changes from page to page, and not so much the headers, sidebars and footers. Those are part of the website itself, but they’re not the primary reason for the page to exist or the primary reason for Google to rank it. That’s the difference in how Google treats different parts of the page. For links, it’s usually more about understanding the context of pages and being able to crawl the website, and for that Google doesn’t really need to differentiate between different parts of the page.

Product Review Websites

Q. There are no strict guidelines or a checklist for websites to be classified as product review websites.

  • (04:34) The person asking the question has a fashion website from which he started linking to products in different stores after his readers asked where they could buy the products featured in his articles; he is now not sure whether his website would be classified as a product review website. John says he doesn’t think Google would differentiate that much between these kinds of websites. It’s not that there is a binary decision on which type of website something is. From his point of view, it sounds like there is some review content and informational content on the person’s website, as well as some affiliate content. All of these things are fine. It’s not the case that website owners have to pick one type of website and say that everything on their website fits some kind of criteria exactly. In most cases on the web, there is a lot of grey area between the different types of websites. From Google’s point of view, that’s fine and expected. John says it isn’t really worth worrying about whether Google would think the website is a product review website. Essentially, it’s good to use the information that Google gives for product review websites, but there is no checklist that has to be fulfilled for a site to be classified exactly as a product review website.

Local Directories

Q. Making sure that the information about the business is correct and matching across different websites and tools is important for Google to not get confused.

  • (07:41) John is not sure how having the exact same business information across the web plays into Google Business Profile, local listings and that side of things. One area where he has seen it matter a little, which might not be perfectly relevant for local businesses, is more generally in Google recognising the entity behind a website or a business. For that, it does sometimes help to make sure that Google has consistent information, so that it can recognise the information is correct because it found it in multiple places on the web. Usually, this plays more into the knowledge graph and knowledge panel side of things, where Google tries to understand the entity behind the website. If there are different mentions of that entity in different places and the information is consistent, then Google can trust that information. Whereas if Google finds conflicting information across the web, it’s harder. For example, if the local business structured data on the website’s pages conflicts with local profiles that list opening hours or phone numbers, Google has to make a judgment call and doesn’t know what is correct. In those kinds of situations, it’s easy for Google’s systems to get confused and use the wrong information. Whereas if website owners find a way to consistently provide the correct information everywhere, it’s a lot easier for Google to determine what the correct information is, as the illustrative markup below shows.
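A hedged sketch of the kind of markup being discussed (all business details are invented): if the LocalBusiness structured data on the site carries the same phone number and opening hours as the Google Business Profile and other listings, Google can corroborate the information; if they conflict, it has to guess which version is right.

```html
<!-- Invented business details, for illustration only -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Bakery",
  "telephone": "+44 20 7946 0000",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "London",
    "postalCode": "SW1A 1AA",
    "addressCountry": "GB"
  },
  "openingHours": "Mo-Sa 08:00-18:00"
}
</script>
```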

Links

Q. Linking back to a website that has linked to you is not viewed as a link scheme, as long as all the rules are followed.

  • (11:24) The person asking the question is in a situation where a few websites are linking to his website. He doesn’t know whether he is getting any value from that, but assuming he is, he wonders whether linking back to those websites, following all the rules and not doing any improper link exchanges, would result in him losing some value. John says that it is perfectly fine and natural, especially if this is a local business linking to its neighbours, or if the website is mentioned in the news somewhere and the person mentions that on his website. Essentially, he is linking back and forth. It’s a reciprocal link, which is a natural kind of link; it’s not there because of some crazy link scheme. If it is done naturally and there isn’t any weird deal behind the scenes, it should be fine.

Cleaning Up Website After Malware Attacks

Q. There are a few things that can be done to make the unwanted links drop out of the index after a hacker attack on a website.

  • (24:10) The person asking the question has experienced a malware attack on his website that resulted in lots of pages that he doesn’t want to be indexed being indexed. He has cleaned up after the attack, but the results of the malware are still being shown and he can’t use a temporary removal tool, as there are too many links. John suggests that first of all he needs to double-check that the pages he removed were actually removed. Some types of website hacks are done in a way that if these are checked manually, it looks like the pages are removed. But actually, for Google, they are still there. It can be checked with the Inspect URL tool. Then for the rest, there are two approaches. On the one hand, the best approach is to make sure that the more visible pages are manually removed, that means searching for the company name, for the website name, primary products etc., seeing the pages that show up in the search results and making sure that anything that shouldn’t be shown is not shown. Usually, that results in maybe up to 100 URLs, where the website owner can say that these are hacked and need to be removed as quickly as possible. The removal tool takes those URLs out within about a day.
  • The other part is the URLs that remain – they will be recrawled over time, but when it comes to lots of URLs on a website, that usually takes a couple of months. So on the one hand, those could just be left as they are, since they are not visible to people unless they explicitly search for the hacked content or do a site: query on the website; they will drop out over time, within about half a year, and can be double-checked afterwards to see if they’re actually completely cleaned up. If it needs to be resolved as quickly as possible, the removal tool with the prefix setting can be used. That essentially means finding common prefixes for these pages, which might be a folder name or a filename or something at the beginning of the URLs, and filtering those out. The removal tool doesn’t take them out of Google’s index, so it doesn’t change anything for ranking, but it stops showing them in the search results.

Emojis

Q. Using emojis in title tags and meta descriptions doesn’t really affect anything.

  • (33:04) One can definitely use emojis in titles and descriptions of pages. Google doesn’t show all of these in the search results, especially if it thinks that it kind of disrupts the search results in terms of looking misleading perhaps and these kinds of things. But it’s not that emojis cause any problems, so it’s okay to keep them there. John doesn’t think they would give any significant advantage, because at most what Google tries to figure out is what is the equivalent of that emoji. Maybe Google will use that word as well associated with the page, but it’s not like the website will get an advantage from that. It doesn’t harm or help SEO in any way.

API and Search Console

Q. API and Search Console take their data from the same source but present it a little bit differently.

  • (34:15) The data in the API and the data in the UI are built from the exact same database tables, so it’s not that there is any more in-depth or more accurate data in the API than in the UI. The main difference between the API and the UI is that the API can return more rows of examples when downloading things, which is sometimes useful. The other thing that is perhaps a little bit confusing about the API and the data in Search Console is that when looking at a report in Search Console, there are numbers on top giving the overall number of clicks and impressions. The data provided by the API is essentially the individual rows that are visible in the table below that overall data in Search Console. For privacy and various other reasons, Google filters out queries that have very few impressions. So the number at the top of the Search Console UI includes the aggregate full count, but the rows shown below don’t include the filtered queries. What can happen is that the overall total in Search Console will be a different number than a total computed by adding up all of the rows from the API. That’s a little bit confusing at first, but essentially it’s the same data presented in a slightly different way in the API.

FAQ In Rich Results

Q. There are three criteria that FAQ schema needs to meet to have a chance of being featured in rich results.

  • (36:15) FAQ rich results are essentially similar to other types of rich results in that there are several levels Google takes into account before showing them in the search results. On the one hand, they need to be technically correct. On the other hand, they need to be compliant with Google’s policies; John doesn’t think there are any significant policies around FAQ rich results other than that the content should be visible on the page. The third issue that sometimes comes into play is that Google needs to be able to understand that the website is trustworthy, in the sense that it can trust the data to be correct. From a quality point of view, Google may sometimes not be convinced about a website, and then it wouldn’t show the rich results. Those are the three things to look at for FAQ rich results, illustrated with the hypothetical markup below.
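As an illustration of the “technically correct” and “content visible on the page” criteria (the question and answer text are invented): the FAQPage markup needs to be valid, and the same question-and-answer text needs to appear in the visible content, not only in the structured data.

```html
<!-- Visible on the page -->
<h2>Do you ship internationally?</h2>
<p>Yes, we ship to most countries within 5–7 working days.</p>

<!-- Matching FAQPage structured data -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Do you ship internationally?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes, we ship to most countries within 5–7 working days."
    }
  }]
}
</script>
```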

Seasonal Pages

Q. It is fine either to remove seasonal pages when they are no longer relevant or to leave them in place. The thing to remember is to use the same URL every year.

  • (37:38) On Google’s side, it’s totally up to the website owner how to deal with seasonal pages. Keeping the pages there is fine, and removing them after a while is fine if they’re no longer relevant. Essentially, traffic to these pages will go down outside the season, but the site isn’t missing out on any impressions there. If the pages are made noindex or 404 for a while and then brought back later, that’s perfectly fine too. The one thing to watch out for is to reuse the same URL year after year. So instead of having a page called Black Friday 2021 and then Black Friday 2022, it’s better to just have Black Friday. That way, if the page is reused, all of the signals associated with that page over the years will continue to work in the website’s favour. That’s the main recommendation. It’s okay to delete these pages when they’re not needed and recreate the same URL later, and it’s okay to keep the pages for a longer period of time.

CLS Scores

Q. There are no fixed rankings for CLS scores.

  • (40:19) There’s no fixed number for how strongly CLS scores count for websites. From Google’s point of view, these metrics are taken into account as part of the Core Web Vitals and the page experience ranking factor, and Google tries to look at them overall. Google focuses especially on whether the website is in a reasonable range with regard to these scores. So if a website is not in the, let’s call it “poor” or “bad”, section for these scores, then Google considers it to be in that reasonable range and takes that into account. Google doesn’t have any fixed weighting or algorithmic function where it would take ½ of FCP and ½ of CLS and factor ⅓ of this in.
  • It’s really something where Google needs to look at the bigger picture.

Geo-redirects

Q. Geo-redirects make it hard for Google to find and index content.

  • (53:26) Geo-redirects have a negative impact on content being indexed, and that applies to all kinds of websites. From Google’s point of view, geo-redirects are usually more a matter of making it technically hard for Google to crawl the content. Googlebot usually crawls from one location, so if users from the US are being redirected to a different version of a website, Googlebot will just follow that redirect too. It’s less a matter of quality signals or anything like that; it’s more that if Google can’t see the web pages, it can’t index them. That’s essentially the primary reason John says they don’t recommend doing this. Maybe some big websites redirect some users and not others, and maybe Googlebot is not being redirected – that’s possible. From Google’s point of view, it doesn’t do them any favours, because it usually ends up in a situation where there are multiple URLs with exactly the same content in the search results and the website is competing with itself. Google sees it as a website duplicating its content in multiple locations; it doesn’t know which version to rank best and makes a guess.

Sign up for our Webmaster Hangouts today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH