
Why is SEO documentation so confusing?

Intro

MARTIN SPLITT: Why do SEOs give strange recommendations to us developers sometimes?

MICHAEL KING: Why can’t the Google documentation be up to date?

MARTIN SPLITT: Why won’t all SEOs use their tools and documentation properly?

MICHAEL KING: Why is the Google documentation written so strangely?

MARTIN SPLITT: Hello, and welcome back to another episode of SEOs and Developers. Today, my guest is Michael King, who not only runs an SEO agency and has a technical background, but is also a rapper. So I'm really looking forward to seeing what we'll be getting into today.

MICHAEL KING: And I’m here with Googler Martin Splitt, who’s a diver, magician, and an individual who is as dynamic as his hair. Very excited to speak with him today.

Checklists, beginner SEOs, and tools

MARTIN SPLITT: (01:00) So I’m really, really excited to be joined in this episode by Mike King. And, Mike, I actually have a really good question for you. So you’re running your SEO agency. And I know that you have a technical background, so you maybe understand where I’m coming from when I say, as a developer, I have met so many SEOs who are basically barging into the development room and going to the team, like standing in front of the entire team, going, oh my god, stop everything that you’re doing. We have a massive issue. And then you’re like, so what’s the issue? We need to fix our canonicals. And you’re like, but why? Oh, you know, it’s going to break everything, and everything is going to be like we’re going to get penalties, and everything is going to shit, and we have to really get everything back up to speed. And, oh my god, this has to happen now, now, now. And I’m like, why is it that a bunch of people are operating like this? Where does that come from?

MICHAEL KING: (01:55) Well, it comes from the fact that everyone in SEO comes from a different background. Like, not too many people are as technical as someone like me or some of the other great people in the space. And so a lot of it is like, OK, I read this checklist. It tells me, OK, I need to do these things. I have a tool that tells me that everything is on fire. So I'm going to focus on the thing it says is most on fire. So what it really comes down to is differing levels of education. I think that there's some difficulty with people understanding what priorities are, or how they align with the priorities of an actual business, and also what the impact of doing any of these things is going to be. So it's very easy for an SEO who is inexperienced to put together some sort of PDF report from a tool that they're using that says, OK, these are the 10 things that must happen right now. But it doesn't necessarily reflect what the impact is going to be of those things.

MARTIN SPLITT: (02:59) Right. Yeah, I've seen these PDF reports, and that has me wondering, why can't tools just do the right things? Like, why are these tools generating these 30-page reports with all this stuff in them? How did we end up here?

MICHAEL KING: (03:18) Yeah, I mean, that's the thing, like, when you build a generic tool around a series of SEO best practices, it's not going to take into account the context of the actual website, right? So in the example that you gave with canonical tags, there may be a reason that you have so many duplicates. There may be a reason that the site needs that, right? Like, if you think about a site that has a bunch of franchises, and that content isn't any different per franchise, it makes sense that you're not canonicalizing those to one version of the page. Like, the business wants to have these different permutations of pages for different service areas. And there are any number of reasons why this may be of value to the actual business. So a tool is just going to say, well, the best practice is for every URL to be canonicalized to the version of it that's very similar or is an exact duplicate. But it doesn't know that that actually makes sense for that business. So I think that there's an opportunity here. I think it's generally true that technology for SEO is very much behind, of course, what Google is doing. But it's also behind what it could actually do, right? I think that there needs to be some sort of layer that's added to these SEO tools, where it's like, I'm this type of business. We have this type of concern. These are our priorities. All right, now spit out something that is prioritized or takes into account what makes sense for this business. And when you don't have that, you need an expert that's able to interpret it from a business-use-case perspective, from a perspective of, what can we technically do? And again, because you don't have enough people in SEO who are able to interpret things that way, you get these reports straight out of the tool that are like, again, everything is on fire. And so that's what our job is, to interpret that to the frame of what the business actually needs to do.
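
To make the canonical example concrete, here is a minimal sketch of the markup in question; the URLs and business are hypothetical. A generic tool following the textbook best practice would expect near-duplicate location pages to point at one consolidated version, while the business case described above is for each service-area page to reference itself:

```html
<!-- What a generic tool tends to suggest: consolidate near-duplicates to one version -->
<!-- In the <head> of https://example.com/plumbing/melbourne/ -->
<link rel="canonical" href="https://example.com/plumbing/" />

<!-- What the franchise business may actually want: each service-area page stands on its own -->
<!-- In the <head> of https://example.com/plumbing/melbourne/ -->
<link rel="canonical" href="https://example.com/plumbing/melbourne/" />
```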

Why does context matter for automation? 

MARTIN SPLITT: (05:18) All right, OK, I get that. So one takeaway from me, don’t be mad at me here is that from a developer’s perspective, I always thought this stuff can probably be automated away, right? We don’t really need that many actual people doing it. But it sounds like that’s not the case, right? There’s a bunch of stuff that depends on the context, and the tools can’t capture this, right?

MICHAEL KING: (05:43) Well, I'll put that back on you. Like, at this point, we've also got tools that can automatically write code. We don't need developers, right? It's the same sort of thing. You know what I mean? Like, of course, we need developers. Like, there still needs to be that interpretation of, what are we trying to do here, and how do we account for the nuances of what we're trying to do? So to your point, yes, I agree that a lot of SEO can be automated, but there are things that, let's say, for instance, we're talking about an internal linking structure. That could be entirely automated. But if you don't have the right rules in place, it could go crazy really quickly, right? Like, let's say you even got it to the point where you've identified all the pages that own individual keywords. So let's say you've got your whole keyword list, and you're like, OK, there's a mapping of keyword to URL. And then, you have something that crawls the site and looks for the instances of those keywords so that you can automatically build a keyword-relevant internal linking structure. But that could easily go crazy, where every word on the page has an internal link on it now. And now it's a completely bad user experience, and there's any number of filters that could be tripped as a result of that. And so you still always need that human interpretation so that we're doing things right, and it just doesn't go haywire.

MARTIN SPLITT: (07:08) Yeah, yeah, I see that. No, that makes perfect sense. 

Do tools give outdated recommendations? 

MARTIN SPLITT: (07:11) And another thing that I'm surprised by, let's put it that way, is that, as you said, a lot of the guidelines and best practices are out there, and the tools are going along with them. But a bunch of the tools seem to be making proposals or suggestions that I think are either not quite there or actually completely wrong, outdated, and no longer a thing. How does that happen?

MICHAEL KING: (07:46) Yeah, you've got things like text-to-code ratio, or W3C compliance, that are still, I mean, I'm embarrassed to see that type of stuff still, because it's like, was that ever really a concern? Or was it just something that some SEO at some point was like, hey, I think this is a thing, and then every tool just has it as a result? But, yeah, I think no one's really gone back and taken a critical look at things and said, hey, what do we actually need to be looking at? What do we actually need to have rules around? I think it's largely been like, you have to have feature parity with other tools. And so they're just copying what they saw Moz do or what they saw SEMrush do, or whoever, and this just continued to persist. But I think that there needs to be a change. I think SEO as an industry probably just needs a product manager to stand up and be like, yo, let's stop doing these dumb things.

MARTIN SPLITT: (08:48) Oh, man. I mean, I understand that that kind of cruft accumulates over time, but we have so much in terms of updates and documentation and reading material and guidance out there that we are trying to update whenever something changes. And yet, the tools are still spouting things. And for instance, the W3C thing has been a tricky one because, obviously, writing compliant, semantic, and correct HTML is great because that makes it easier for us to understand what you're trying to accomplish there in terms of the semantics of the content. But if you make mistakes, it's not that we stop understanding the page and go, oh, we don't know what that is. We are out of here. We are still trying to make sense of it. We just might need to, like, we might lose confidence on the way, right? It's like, this might be a headline, but we're not sure.
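
As a rough illustration of the "this might be a headline, but we're not sure" point, with hypothetical markup: a semantic heading element is an unambiguous signal, while a styled div only looks like a headline to people and leaves a crawler guessing.

```html
<!-- Clear signal: a semantic heading element -->
<h1>Winter Boots Buying Guide</h1>

<!-- Weaker signal: looks like a headline to users, but only because of the styling -->
<div class="big bold title-ish">Winter Boots Buying Guide</div>
```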

MICHAEL KING: (09:40) Right, but realistically, if that was actually a requirement, I’m going to guess that over 90% of the web just wouldn’t load, you know? Because what is truly compliant across the web? And so to that end, obviously, you guys, the crawling capability is fantastic. And you’re rendering the full page anyway, so if my browser works, it’s likely that your crawler will work. And so just the fact that we’re still, like, even considering that is difficult. But at the same time, there are things that you do that achieve compliance that do help. So I agree with what you’re saying, but it’s not the metric that we should be looking at to determine it.

Documentation drift and doing your own research 

MICHAEL KING: (10:27) I think that there’s a lot of instances where, if we’re talking about documentation, where the documentation may be out of phase with where you are as an organisation. And I think you can say that not just from what’s public facing, I’m sure that’s happening internally as well. And so the reality of it is that it’s very difficult to look at that documentation as the single source of truth because things are changing so quickly. And so even if all the SEO tools were like, OK, let’s follow Google’s documentation perfectly, it still wouldn’t necessarily be the ideal state for how these tools would tell you things.

MARTIN SPLITT: (11:08) OK, I'm guessing this also aims a little bit in the direction of previous/next links, where we had this thing. OK, yeah. So what happened there was unfortunate. And you're right; the docs are not always in phase. We are doing our best to work with the teams and help them keep their documentation updated, but it does happen every now and then. In this case, a bunch of engineers in Search quality figured out, hey, hold on. We actually don't really need the rel="next" and rel="prev" links anymore to figure out that there is pagination going on. We can figure that out from other things on the page by themselves. So they just removed the code. And then we were in this situation. Now, put yourself in our position, come to our side of the force: what do you do? Do you update the docs and just quietly remove that part because it is no longer relevant? Or do you go like, hey, by the way, this is no longer necessary? And, truthfully speaking, it hasn't been necessary in the last six months, knowing very well that people are reading the documentation, making recommendations based on it to other people, and these people then invest work and time and money into making that happen. And the alternative would just be to let it live there in the documentation. Even though it's wrong, it doesn't hurt. So we went with the full frontal way of going like, OK, here's the thing, this has been removed a while ago. We are sorry about that, but now our docs are updated. And I think none of the choices are easy or necessarily perfectly good, but it's just what happened. So I think we are trying to keep the docs updated as much as possible.
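
For anyone who never used it, this is roughly what the now-retired pagination markup looked like; the URLs are hypothetical. It lived in the head of each page in a paginated series, and it is the markup Google announced it no longer uses:

```html
<!-- On https://example.com/category?page=2 (no longer needed for Google's indexing) -->
<link rel="prev" href="https://example.com/category?page=1" />
<link rel="next" href="https://example.com/category?page=3" />
```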

MICHAEL KING: (12:50) Yeah, and I get that it's hard. Again, you have a large team, which is like an offshoot of another even larger team. You're both moving quite quickly. I get it. It's just very difficult from this side, where you're making recommendations to people. And then you've got an engineer who's second-guessing you. And then they find something in the documentation that's out of date, and they're like, no, you're wrong. You don't know what you're talking about. Google says this right here. So it just makes our job a lot more difficult. So I get what you're saying, but you've also got to come to our side and see what we're dealing with, because I'm just Mike King. You guys are Google, right?

MARTIN SPLITT: (13:31) Yeah, no, no worries. That’s exactly why we are so transparent and say like, hey, by the way, this has been removed. We’re sorry, but the docs are now updated, because we understand that we have to make sure, to our best knowledge and to our best ability, that the docs are a source of truth. 

Who is the documentation written for?

MARTIN SPLITT: (13:37) Nonetheless, it is tricky because of what you just said, like, oh, the engineer finds this piece of documentation and argues their way around it. It's so hard to write these docs for the right audience.

MICHAEL KING: (14:03) Right. Yeah, and that's the thing, it seems like, from what I've read, and I've read most of it, the writer's aiming for the middle, but that doesn't really support the use case, right? Like, it doesn't necessarily align with the people that are actually going to use this. At the same time, I get that there's a very wide audience of "webmasters", but how many small businesspeople are really digging into documentation like this? So why are we necessarily writing for them? Or should it be the type of thing where we have a section that's more for the advanced user or more for the enterprise user, where you're able to speak to these use cases in more specific ways? There are a lot of situations in the documentation where there are just too many edge cases for you to say the things that are being said. Like, there's something that came out more recently where it's like, hey, if you see a traffic trend or a click trend that looks like this, that means this is what happened. I've seen plenty of trends that looked like all four of those things shown in the documentation and weren't the thing that the documentation says they are. So now, I'll have a client that'll come to me and say, well, we saw this, and the documentation showed us a screenshot of this, so this must be why. And so they may not be so receptive to what's actually going to need to happen in order to recover. So that's the thing, it just doesn't really solve the problem in the way that we would hope it does.

MARTIN SPLITT: (15:38) And I understand that, and I understand that it's tricky. And we learned that, and that's why we relaunched our documentation at some point in the past. I think it was in November 2020 that we relaunched, or February 2021. I can't remember when exactly we launched the new Dev site. But we are trying to group it differently because we realised that, even with the new grouping, we're still seeing feedback on very technical things, like robots.txt, coming from small business owners being like, I don't even know what any of this means. Ahh! And we're like, OK, but this is a very technical thing. Like, how did you end up here? Why did you end up here? And what are you trying to accomplish? So it's really, really tricky, and we have to try to write for the broad use case and then the edge cases. That's a tricky place. Where do they live?

We read documentation feedback. Give us feedback! 

MARTIN SPLITT: (16:27) I did that for JavaScript. I have this extra "fix JavaScript issues" page with all the edge cases where things might go wrong. But it's tricky. It's not easy to catch all these things. And that's why giving us feedback on the docs matters, and in the docs, there is an opportunity to give us feedback right there. We read this. It's really weird because we can't give a response back as easily. Like, you don't know that we read it, but we do read it. And it's quite interesting. Like, a lot of it is really useful, constructive, helpful feedback that allows us to improve our documentation. A bunch of it is people saying like, aw, you guys suck. And I guess that's the reality of the internet: wherever you give people the opportunity to say things, they might say things that you might not want to hear, but that's OK. If they think we suck, that's, you know, I'm sorry.

MICHAEL KING: (17:15) Well, I do want to give some props because, especially around the JavaScript thing, I really appreciate what you did with that because that was very much something that we were very early on discovering at my agency. Like, we wrote a blog post a number of years ago, probably, like, 10 years at this point, called “Googlebot is Chrome,” where we introduced the SEO world to the idea of headless browsing. And I also wrote some subsequent stuff around that about how this could be done. And I appreciated that you especially came out and were like, no, here’s how we do it. Here are some specific things to know because it was a lot of speculation on our part and a lot of testing. But I appreciate that you were able to say like, no, no, here’s the way that we do it, and then we can refine it across the industry. So that’s what I mean. Like, there are definitely instances where the documentation has been incredibly valuable in shedding light on how we need to be thinking about things because, for a long time, we may have been going in the wrong direction entirely. So yeah, there’s definitely some value to it. I just think that there are instances where things are very vague or don’t really speak to the problem and create more problems for SEOs.

Why documentation causes misunderstanding

MARTIN SPLITT: (18:35) So with the "create more problems" bit, that's exactly what we want to improve on and what we want to constantly get better at. And also, thank you very much for the positive feedback. And for the other bit, like, it's very generic or very strangely written, that one is a tricky one because it is what you said. You said it yourself: SEOs come from a wide variety of backgrounds. And they have a wide variety of different focuses, and they look at the same thing from very different angles, like the W3C validator thing. If you ask me, so does our HTML need to be written well? My answer would be, yes, it has to be written well. But I don't mean specifically compliant with W3C specs, which is what some people might hear who are coming from that angle. Someone else might be like, oh, so it doesn't have to be compliant, but it needs to be well done? OK, fair enough. And it's not just documentation where this is hard. I find that with the tooling we provide, it is also incredibly hard. PageSpeed Insights, for instance, or Lighthouse, gives you a score. That's so dangerous. I know, but some people need that score.

MICHAEL KING: (19:45) But let's dig into that a little bit. So one of the common things that I hear, and I heard it at a conference the other day, they're like, oh, I ran it three times. Why is it different? People don't understand that network conditions impact all of these scores. And so if there was some sort of callout, maybe there is. Maybe it's in a tooltip that no one clicks on, but I think there's some value in helping them understand that, because you'll see your score is, like, 45 right now. Now, suddenly, it's 52. And you're like, these tools don't work. I don't trust these tools. And then also, let's talk a little bit about the difference between a click in GSC versus a session in GA. Most people don't know. Like, it was very widely misunderstood that those are not the same things. And so I ended up writing a blog post, going into very great detail. I did some patent diving and looked at some of your documentation and showed people, no, here's why they are measured differently. One of these comes from logs. One of these comes from clickstream, and so on. And so that information could be surfaced a bit better, and again, I'm not saying you don't have it. There was a section in there that talks about the differences to some degree, like, what is average position versus what is a ranking? Things like that. These are things that are just not obvious to people that I think could be surfaced a bit better, so these questions don't come up as much.

Challenges with trust and supporting each other

MARTIN SPLITT: (21:09) That's very, very good feedback. That's exactly what we're trying to do. And I know that especially the Lighthouse team, for instance, tries to be ridiculously transparent. Like, you can figure out how the score is being calculated and evaluated, as well as how that changes over time, because the score might actually change even if everything else stays the same. Over time, you might actually see different scores because the way that the different metrics are weighted is changing. It's challenging, though.

MICHAEL KING: (21:39) Of course, of course. I think the bigger challenge, though, is that, again, sometimes a developer will come into the project. They'll look into the documentation. And they're like, this doesn't match up with what the SEO has told me. And then they just don't trust them. And then there's also some messaging that I recall from a year or two ago, where there was a video talking about how you should choose an SEO. Obviously, that created a bit of a firestorm in our space because it felt like Google saying, this is the way that we need to be. Here are the expectations you should have. I wish there was one where y'all were like, hey, here are the expectations you should have of our documentation.

MARTIN SPLITT: (22:25) Yeah, I understand. I understand. Yeah, see, and this is exactly what happens, because I'd like to pick up on two things in particular from what you just said. Number one, this video created a firestorm amongst SEOs. It was not even meant for them. It was not even meant for their customers, necessarily. It was meant for small businesses that are like, I am running a kid's clothing store in a back street in Zurich. And I have zero budget. I'm basically running it out of my basement, and people WhatsApp me or Facebook message me or FaceTime me or Hangout me or write me a text message or whatever to pick up their order or something like that. But I want to be taking part in the online world of e-commerce. How can I do that? And it was meant for this audience. Like, if you get an SEO, look for these very obvious red flags and things that might not be up for what you are trying to accomplish. And because of the wide variety of people, that is what happened. Like, it was misunderstood and misrepresented, and it wasn't necessarily presenting itself very well. And the other thing was trust. You said, like, a developer comes in and doesn't trust what the SEO says based on what is in the documentation. And that seems to be the core of all this. I just realised, thanks to you, that SEOs, and probably developers as well, come from so many different backgrounds. And it's unfortunate that we choose to use this, like, the trajectory that we come from, like, I'm coming from here. You're coming from here. Instead of looking at the entire spectrum in between and learning from each other's perspective, it seems to be more like, I come from this angle, and I see this. You come from this angle. You see this. You must be wrong.

MICHAEL KING: (24:33) Right. Yeah, I don't think it should be that us-versus-them dynamic that we currently have. And I think that there's so much beauty in the fact that SEOs come from so many different backgrounds. Like we said in our introduction, I rap. That's what I did before I did SEO full-time. And there are so many people that you meet with so many interesting perspectives on so many things, and I wish, when we work with engineers, we were able to come to the table better and be like, look. I want to know everything about what it is that you're doing. And what can I learn here? How can we work together more effectively? And that's very much my approach. I don't try to come into it like, this is the way we must do it because it's the way I think. I more try to come to it from the perspective of, hey, I know a little bit about what it is that you do, probably more than you might expect. But I want to learn this from your perspective, and I want to help support what you're trying to do. One of the things that we do at the beginning of any of our audits, there's this thing, it's like the prime directive from some agile thing that we read, where it's something like, we're just assuming that everyone did their best with what they knew, and the whole thing. So that we're coming from the perspective of a level playing field, where it's like, I'm not just hating on your work. I'm just trying to say, hey, from the perspective where I sit, these are the things that can be improved so that we can make things perform better. And at the same time, I need to understand your limitations. I need to understand what you need in a user story for this to work for you. I need to know what sort of supporting data and documentation you need to see in order for you to get this on your prioritisation schedule. And I think a lot of SEOs aren't really thinking that way, because a lot of how things are presented, both in our space and from that us-versus-them dynamic, is like, all right, well, you're going to bring all this great stuff to the table. And no one's going to do anything with it because they don't trust you. And it's all just humans working together, so what do we need to do so that we can all come to a space of trust and make this work and make it so everyone gets bonuses at the end of the year?

MARTIN SPLITT: (26:54) That's a nice way of looking at it. From a developer's perspective, what I can say, and what I have observed myself as well, is that, as developers, we are used to knowing that we don't know something, and then we have to get better at figuring it out. There are still a lot of developers who are like, I know everything, which is unfortunate. But usually, developers are curious creatures, and we are like, oh, I don't know how that works or how that is supposed to be. And then we research, and then we prototype, and then we have something that solves the problem. We are problem solvers. So when someone comes in, and often, SEOs come in, and I think it's also because there is such a wide variety of people's backgrounds in SEO, they might feel inclined to say, oh, it's like this, even though they are not sure about it, or they don't know it. So they cling on to their PDF reports. They're like, this report says this is a problem, without necessarily thinking about it. And I would love to see more SEOs admitting, hm, I actually don't know. Let's find out together, or let's test this together. Let's do some research together.

Knowing, not knowing, and doing research

MICHAEL KING: (28:01) Well, that is easier to do when you're in-house. It's harder to do when you're a consultant because they don't expect you to know everything when you're in-house. They expect you to know enough and work through it. Whereas when you're a consultant, they want you to have the answer, even if there isn't a definitive answer. But as far as developers, I like to think of them on a spectrum. And I think I've mentioned this to you via email before. I think of it as the Anderson-Alderson spectrum, where Anderson is Thomas Anderson, like Neo from "The Matrix," who hated his job and didn't want anything to do with anybody. And then you've got Elliot Alderson, "Mr Robot," who was the guy that was working all hours of the night, just kind of doing work without anyone telling him, being super proactive. And so there are those developers, like you're saying, on the Anderson side of the scale, where it's like, I know everything. You don't know anything. And they present that way very much, even when they are typically more intellectually curious amongst their peers. And obviously, those people are very difficult to work with, and you've got to have documentation for everything. And again, that's the person that's going to find the Google documentation that says that you're wrong, and that's that. Yeah, exactly. Whereas on the Alderson side, I had a guy who's a developer working with a client at one point. We were presenting the site audit on-site with them, walking through everything. And he was committing code, fixing things, as we were talking and asking questions. And probably, that's not the best way to do development work. Of course, you need to run it through your whole process. But it was really good to see how receptive he was to what we were doing, and he was very collaborative. That's the ideal situation: someone who's going to ask questions, someone who's going to help you clarify things so you can make the recommendation far more pointed and actually make it happen. And obviously, those are the extremes. But obviously, something in the middle is where things work best, where it's like, hey, SEO, you understand that you don't know everything about this website because you didn't build it. And, hey, engineer, you understand you don't know everything about SEO because those requirements are not in your day-to-day. So let's all get together and figure out how to make this work and trust each other, and that's it.

MARTIN SPLITT: (30:37) Sounds easier said than done, but I think, yeah, to sum it up, SEOs should value the difference of perspective from different people and be receptive to doing their own research. And also, developers need to be more trusting towards SEOs and take them on the journey with them and work together to figure things out together. And I think that would probably make things easier for everyone.

MICHAEL KING: (31:05) Yeah, I'll tell you that no SEO is trying to ruin your website. Like, they're not actively trying to mess things up for you. Their remit is helping the website improve its visibility, drive traffic, and ultimately drive conversions. So the reality of the situation is that it's an antagonistic relationship, because the whole job, or a lot of the job, is telling an engineer that they've done something wrong. And again, we need to reframe that so it's a better working relationship.

MARTIN SPLITT: (31:39) Yeah, interesting. So let's hope that people out there watching this conversation are finding their way of reframing it and actually getting around a table and collaborating with each other rather than trying to prove each other wrong. Because I think if we just try to prove each other wrong, we're not getting anywhere, and we have the same goal. We want better websites.

Conclusion

MARTIN SPLITT: (32:02) Awesome. Mike, thank you so much for being here with me and taking the time to talk this out. And I think I am full of new ideas, and I will definitely try to make our documentation better together with our lovely team and our fantastic tech writer, Lizzie Harvey, and all the others. And yeah, thanks a lot.

MICHAEL KING: (32:20) Thanks for having me, Martin. This has been great. And I, again, just want to really thank you guys for all the things you’ve been doing. I’ve been doing SEO for 15 years, and it’s been a dramatic improvement in how you’ve engaged with the community, the documentation you’re doing, the tools, and so on. So I definitely appreciate the progress and look forward to where this continues to go.

MARTIN SPLITT: (32:43) Thank you. Thanks a lot for the nice feedback. And thanks to everyone watching this. I hope that you had a good time. Stay safe and healthy, and bye-bye.



Webmaster Hangout – Live from December 29, 2022


Introduction

Lizzi: (00:00) Hello, hello, and welcome to the December edition of the Google SEO Office Hours, a monthly audio recording coming to you from the Google Search team, answering questions about search submitted by you. Today, you'll be hearing from Alan, Gary, John, Duy, and me, Lizzi. All right, let's get started.

How to reduce my site from 30,000 products to 2,500?

Alan: (00:22) Vertical web asks: My old site is going from 30,000 products down to 2,500, and I will generate 400,000 301 redirects. Is it better to start on a clean URL and redirect what needs to be redirected to the new site, or do it on the old URL?

  • A: (00:44) We generally recommend keeping your existing domain name where possible. We support redirecting to a new domain name, as Google will recognise the 301 permanent redirects and so understand that your content has moved. However, there's a greater risk of losing traffic if a mistake is made in the migration project. It is fine to clean up old pages and either have them return a 404 or redirect to new versions, even if this affects lots of pages on your site.

Does Google ignore links to a page that was a 404?

Gary: (01:09) Sina is asking: it's been formally asserted that Google ignores links to a 404 page. I want to know whether links to that page will still be ignored when it is no longer a 404.

  • A: (01:22) Well, as soon as a page comes back online, links to that page will be counted again, after the linking pages have been recrawled and the links have been deemed still relevant by our systems.

Do speed metrics other than Core Web Vitals affect my site’s rankings?

John: (01:37) If my website is failing on the Core Web Vitals but performs excellently on the GTMetrix speed test, does that affect my search rankings?

  • A: (01:47) Well, maybe. There are different ways to test speed and different metrics, and there’s testing either on the user side or in a lab. My recommendation is to read up on the different approaches and work out which one is appropriate for you and your website.

Why doesn't Google remove all spam?

Duy: (02:06) Somebody asked, why does Google not remove spam webpages? 

  • A: (02:11) Well, over the years, we've blogged about several spam-specific algorithms that either demote or remove spam results completely. One such example is SpamBrain, our artificial intelligence system that's very good at catching spam. Sometimes, for some queries where we don't have any good results to show, you might still see low-quality results. If you see spam sites that are still ranking, please continue to send them to us using the spam report form. We don't take immediate manual action on user spam reports, but we do actually use the spam reports to monitor and improve our coverage in future spam updates. Thank you so much.

Do too many 301 redirects have a negative effect?

John: (02:55) Lisa asked: I create 301 redirects for every 404 error that gets discovered on my website. Do too many 301 redirects have a negative effect on search ranking for a website? And if so, how many are too many?

  • A: (03:13) You can have as many redirecting pages as you want. Millions are fine if that’s what you need or want. That said, focus on what’s actually a problem so that you don’t create more unnecessary work for yourself. It’s fine to have 404 pages and to let them drop out of the search. You don’t need to redirect. Having 404 errors listed in Search Console is not an issue if you know that those pages should be returning 404.

How does Google determine what a product review is?

Alan: (03:42) John asks, how does Google determine what a product review is for the purposes of product review updates? If it’s affecting non-product pages, how can site owners prevent that?

  • A: (03:54) Check out our Search Central documentation on best practices for product reviews for examples of what we recommend including in product reviews. It is unlikely that a non-product page would be mischaracterized as a product review, and it is unlikely that this would have a significant effect on ranking even if it were. It's more likely that other ranking factors or algorithm changes have impacted the ranking of your page.

Should I delete my old website when I make a new one?

John: (04:23) I bought a Google domain that came with a free webpage. I have now decided to self-host my domain, and I wanted to know if I should delete my free Google page. I don't want to have two web pages.

  • A: (04:37) If you set up a domain name for your business and have since moved on to a new domain, you should ideally redirect the old one to the new domain, or at least delete the old domain. Keeping an old website online when you know that it’s obsolete is a bad practice and can confuse both search engines and users.

Should paginated pages be included in an XML sitemap?

Alan: (04:59) Should paginated pages such as /category?page=2 be included in an XML sitemap? It makes sense to me, but I almost never see it.

  • A: (05:12) You can include them, but assuming each category page has a link to the next category page, there may not be much benefit; we will discover the subsequent pages automatically. Also, since subsequent pages are for the same category, we may decide to only index the first category page, on the assumption that the subsequent pages are not different enough to return separately in search results.

My site used to be hacked, do I have to do something with the hacked pages?

John: (05:37) Nacho asks, we were hacked early in 2022 and still see Search Console 404 error pages from spammy pages created by the hacker. These pages were deleted from our database. Is there anything else that I should do?

  • A: (05:55) Well, if the hack is removed, if the security issue is resolved, and if the pages are removed, then you’re essentially all set. These things can take a while to disappear completely from all reports, but if they’re returning 404, that’s fine. 

Does Google care about fast sites?

Alan: (06:11) Tarek asks, does Google care about fast sites?

  • A: (06:15) Yes. Google measures Core Web Vitals for most sites, which includes factors such as site speed, and Core Web Vitals is used as part of the page experience ranking factor. While it's not something that overrides other factors like relevance, it is something that Google cares about and, equally important, users care about it too.

Can Google follow links inside a menu that shows on mouseover?

Lizzi: (06:38) Abraham asks, can Google follow links inside a menu that appears after a mouseover on an item?

  • A: (06:45) Hey, Abraham. Great question. And yes, Google can do this. The menu still needs to be visible in the HTML, and the links need to be crawlable, which means they need to be proper A tags with an href attribute. You can use the URL inspection tool in Google Search Console to see how Google sees the HTML on your site, and check to see if the menu links are there. Hope that helps. 
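
A minimal sketch of the difference, with hypothetical URLs: the first menu item is a proper anchor that Google can follow even though the submenu only becomes visible on mouseover, while the second relies entirely on JavaScript and has no href to crawl.

```html
<!-- Crawlable: real <a> elements with href attributes, even if hidden until hover -->
<nav>
  <ul>
    <li>
      <a href="/products/">Products</a>
      <ul class="submenu"> <!-- revealed on mouseover via CSS or JS -->
        <li><a href="/products/boots/">Boots</a></li>
        <li><a href="/products/sandals/">Sandals</a></li>
      </ul>
    </li>
  </ul>
</nav>

<!-- Not crawlable: no href, navigation happens only in JavaScript -->
<span class="menu-item" onclick="window.location = '/products/boots/'">Boots</span>
```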

Why did the reporting shift between my mobile and desktop URLs?

John: (07:10) Luki asked, we use sub-domains for desktop and mobile users. We found a strange report in Search Console in early August where the desktop performance has changed inversely with the mobile performance. And the result is that our traffic has decreased. 

  • A: (07:30) The technical aspect of the indexing and reporting shifting to the mobile version of a site is normal and expected. This happens with mobile-first indexing and can be visible in reports if you look at the hostnames individually. However, assuming you have the same content on mobile and desktop, that wouldn't affect ranking noticeably. If you see ranking or traffic changes, they would be due to other reasons.

Does having many redirects affect crawling or ranking?

Gary: (07:56) Marc is asking, do many redirects, let's say twice as many as actual URLs, affect crawling or ranking in any way?

  • A: (08:05) Well, you can have as many redirects as you like on your site overall; there shouldn’t be any problem there. Just make sure that individual URLs don’t have too many hops in the redirect chains if you are chaining redirects, otherwise, you should be fine.

Can I use an organization name instead of an author’s name?

Lizzi: (08:21) Anonymous is asking, when an article has no author, should you just use an organization instead of a person in the author markup? Will this have a lesser impact on results?

  • A: (08:36) It's perfectly fine to list an organization as the author of an article. We say this in our Article structured data documentation. You can specify an organization or a person as an author; both are fine. You can add whichever one is accurate for your content.
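
For illustration, a minimal JSON-LD sketch of an article attributed to an organization rather than a person; the headline, organization name, and URL are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example article without an individual byline",
  "author": {
    "@type": "Organization",
    "name": "Example Publishing",
    "url": "https://www.example.com/"
  }
}
</script>
```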

What can we do if someone copies our content?

Duy: (08:53) Somebody asked, a competitor is copying all of our articles with small changes. In time, it ranks higher than us. DMCA doesn't stop them or seem to lower their ranking. What else can we do if their site has more authority?

  • A: (09:09) If the site simply scrapes content without creating anything of original value, that's clearly a violation of our spam policies, and you can report them to us using our spam report form so that we can improve our algorithms to catch similar sites. Otherwise, you can start a thread in our Search Central Help Community, so product experts can advise on what some of the possible solutions would be. They would also be able to escalate to us for further assessment.

Do URL, page title, and H1 tag have to be the same?

Lizzi: (09:35) Anonymous is asking: the URL, page title, and H1 tag, do they have to be the same?

  • A: (09:44) Great question, and no, they don’t need to be exactly the same. There’s probably going to be some overlap in the words you’re using. For example, if you have a page that’s titled “How to Knit a Scarf”, then it probably makes sense to use some of those words in the URL too, like /how-to-knit-a-scarf or /scarf-knitting-pattern, but it doesn’t need to be a word for word match. Use descriptive words that make sense for your readers and for you when you’re maintaining your site structure and organization. And that’ll work out for search engines as well.
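
Sticking with the knitting example, here is a hypothetical page where the URL, title, and H1 overlap without being word-for-word identical:

```html
<!-- URL: https://www.example.com/how-to-knit-a-scarf -->
<head>
  <title>How to Knit a Scarf: A Beginner's Guide</title>
</head>
<body>
  <h1>Knitting Your First Scarf, Step by Step</h1>
</body>
```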

Is redirecting through a page blocked by robots.txt a valid way to prevent passing PageRank?

John: (10:17) Sha asks, is redirecting through a page blocked by robots.txt still a valid way of preventing links from passing PageRank?

  • A: (10:28) Yes, if the goal is to prevent signals from passing through a link, it’s fine to use a redirecting page that’s blocked by robots.txt.

Why is my site flagged as having a virus?

Alan: (10:37) Some pages on my website collect customer information, but my site is always reported by Google as being infected by a virus or deceptive. How can I avoid this happening again without removing those pages?

  • A: (10:53) Your site might have been infected by a virus without you knowing it. Check out https://web.dev/request-a-review for instructions on how to register your site in Search Console, check for security alerts, and then request Google to review your site again after removing any malicious files. Some break-ins hide themselves from the site owner, so they can be hard to track down.

Is there any way to get sitelinks on search results?

Lizzi: (11:20) Rajath is asking, is there any way to get sitelinks on SERPs?

  • A: (11:25) Good question. One thing to keep in mind is that there's not really a guarantee that sitelinks or any search feature will show up. Sitelinks specifically only appear if they're relevant to what the user was looking for and if it'll be useful to the user to have those links. There are some things that you can do to make it easier for Google to show sitelinks, however, like making sure you have a logical site structure and that your titles, headings, and link text are descriptive and relevant. There's more on that in our documentation on sitelinks, so I recommend checking that out.

Does having two hyphens in a domain name have a negative effect?

John: (11:59) My site’s domain name has two hyphens. Does that have any negative effect on its rankings? 

  • A: (12:06) There’s no negative effect from having multiple dashes in a domain.

How important are titles for e-commerce category page pagination?

Alan: (12:12) Bill asks, how important are unique page titles for e-commerce category product listing page pagination? Would it be helpful to include the page number in the title?

  • A: (12:25) There is a good chance that including the page number in your information about a page will have little effect. I would include the page number if you think it’s gonna help users understand the context of a page. I would not include it on the assumption it’ll help with ranking or increasing the likelihood of the page being indexed. 
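
If you do decide the page number helps users understand where they are, a hypothetical title for a paginated listing might look like this:

```html
<title>Women's Running Shoes - Page 3 | Example Store</title>
```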

Is it better to post one article a day or many a day?

John: (12:44) Is it better for domain ranking to regularly post one article every day or to post many articles every day?

  • A: (12:53) So here's my chance to give the SEO answer: it depends. You can decide how you want to engage with your users. On the downside, that means there's no absolute answer for how often you should publish; on the upside, it means that you can decide for yourself.

What is the main reason for de-indexing a site after a spam update?

Gary: (13:12) Faiz Ul Ameen is asking, what is the main reason for de-indexing of sites after the Google spam update?

  • A: (13:19) Well, glad you asked. If you believe you were affected by the Google spam update, you have to take a really, really deep look at your content and considerably improve it. Check out our spam policies, and read more about the Google spam update on Search Central.

Can Google read infographic images?

John: (13:38) Zaid asks, can Google read infographic images? What’s the best recommendation there?

  • A: (13:45) While it's theoretically possible to scan images for text, I wouldn't count on it when it comes to web search. If there's text that you want your pages to be recognized for, then place that as text on your pages. For infographics, that can be in the form of captions and alt text, or just, generally, text on the page.
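
A small hypothetical sketch of what "captions and alt text" can look like around an infographic, so the key message also exists as plain text on the page; the file name, alt text, and caption are placeholders:

```html
<figure>
  <img src="/images/keyword-research-infographic.png"
       alt="Infographic summarising a four-step keyword research process" />
  <figcaption>
    Our four-step keyword research process: gather seed terms, check search volume,
    group by intent, and map each group to a page.
  </figcaption>
</figure>
```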

Is it possible to remove my site completely if it was hacked?

Gary: (14:08) Anonymous is asking whether it’s possible to completely remove a site from Google Search because it has been hacked and leads to thousands of invalid links.

  • A: (14:20) Well, first and foremost, sorry to hear that your site was hacked. Our friends at Web.dev have great documentation about how to prevent this from happening in the future, but they also have documentation about how to clean up after a hack. To answer your specific question, you can remove your site from search by serving a 404 or similar status code, or by adding noindex rules to your pages. We will need to recrawl your site to see the status codes and noindex rules. But that’s really the best way to do it.

Why does my Search Console miss a period of data?

John: (14:54) I'm missing months of data from my domain property in Search Console; from April 2022, it connects directly to August 2022. What happened?

  • A: (15:07) This can happen if a website loses verification in Search Console for a longer period of time. Unfortunately, there is no way to get this data back. One thing you could try, however, is to verify a different part of your website and see if it shows some of the data there. 

How can I deindex some bogus URLs?

Gary: (15:25) Anonymous is asking, I want to deindex some bogus URLs. 

  • A: (15:30) There are really only a handful of ways to deindex URLs: removing the page and serving a 404, 410, or similar status code, or adding a noindex rule to the pages and allowing Googlebot to crawl them. These are all things you can do on your own site; you don't need any specific tool. But Googlebot will need to recrawl those pages to see the new statuses and rules. If we're talking about only a couple of pages, then you can request indexing of those pages in Search Console.
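
As a concrete sketch of the second option, a noindex rule is just a robots meta tag in the page's head; note that Googlebot must still be allowed to crawl the page to see it:

```html
<head>
  <!-- Tells search engines not to index this page once it has been recrawled -->
  <meta name="robots" content="noindex" />
</head>
```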

Why is some structured data detected only in the schema validator?

Lizzi: (16:04) Frank asks, why is some structured data markup detected by the schema validator, but not by Google's Rich Results Test?

  • A: (16:14) Hey, Frank. This is a really common question. These tools are actually measuring different things. I think you're referencing the Schema.org markup validator, which checks if your syntax, in general, is correct, whereas the Rich Results Test checks if you have markup that may enable you to get a rich result in Google Search. It doesn't actually check every type that's on Schema.org; it only checks those that are listed in the list of structured data markup that Google supports, which is about 25 to 30 features, so it's not fully comprehensive of everything that you'd see on Schema.org, for example.

Do you have people who can make a website for me?

John: (16:47) Do you have people that I can work with to create a functioning site?

  • A: (16:52) Unfortunately, no. We don't have a team that can create a website for you. If you need technical help, my recommendation would be to use a hosted platform that handles all of the technical details for you. There are many fantastic platforms out there now, everything from Blogger (from Google) to Wix, Squarespace, Shopify, and many more. They can all work very well with search, and usually, they can help you get your site off the ground.

Why are some sites crawled and indexed faster?

Gary: (17:21) Ibrahim is asking why are some websites crawled and indexed faster than others?

  • A: (17:25) This is a great question. Much of how fast a site is crawled and indexed depends on how the site is perceived on the internet. For example, if there are many people talking about the site, it’s likely the site’s gonna be crawled and indexed faster. However, the quality of the content also matters a great deal. A site that’s consistently publishing high-quality content is going to be crawled and indexed faster. 

Why do Google crawlers get stuck with a pop-up store selector?

Alan: (17:51) Why do Google crawlers get stuck with a pop-up store selector? 

  • A: (17:56) It can depend on how the store selector is implemented in HTML. Google follows <a href> links on a page. If the selector is implemented in JavaScript, Google might not see that the other stores exist, and so it won't find the product pages for those stores.
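
A rough sketch of the difference, with placeholder store URLs: the first selector gives Google real links to follow, while the second only works through JavaScript, so the per-store pages may never be discovered from it.

```html
<!-- Crawlable store selector: plain links -->
<ul class="store-selector">
  <li><a href="/stores/sydney/">Sydney</a></li>
  <li><a href="/stores/melbourne/">Melbourne</a></li>
</ul>

<!-- Hard to crawl: navigation only happens via JavaScript on change -->
<select onchange="window.location = this.value;">
  <option value="/stores/sydney/">Sydney</option>
  <option value="/stores/melbourne/">Melbourne</option>
</select>
```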

How can I verify my staging site in Search Console?

Gary: (18:13) Anonymous is asking if we have a staging site that is allow-listing only specific developers’ IP addresses, if we upload a Search Console HTML file, which I suppose is the verification file, will Search Console be able to verify that site?

  • A: (18:30) Well, the short answer is no. To remove your staging site from Search using the removal tool for site owners, first you need to ensure that Googlebot can actually access the site so you can verify it in Search Console. We publish our list of IP addresses on Search Central, so you can use that list to allow-list the IPs that belong to Googlebot so it can access the verification file. Then you can use the removal tool to remove the staging site. Just make sure that the staging site, in general, is serving a status code that suggests it cannot be indexed, such as 404 or 410.

How can I get a desktop URL indexed?

John: (19:08) How can we get a desktop URL indexed? The message in Search Console says the page is not indexed because it's a page with a redirect. We have two separate URLs for our brand, desktop and mobile.

  • A: (19:21) With mobile-first indexing, that's normal. Google will focus on the mobile version of a page. There's nothing special that you need to do about that, and there's no specific trick to index just the desktop version…

Is it possible to report sites for stolen content?

Lizzi: (19:36) Christian is asking, is it possible to report sites for stolen content, such as text, original images, that kind of thing?

  • A: (19:46) Yes, you can report a site. Do a search for “DMCA request Google”, and use the “report content on Google” troubleshooter to file a report. 

Is adding Wikipedia links a bad practice?

John: (19:57) Is adding Wikipedia links to justify the content bad practice?

  • A: (20:03) Well, I’d recommend adding links to things that add value to your pages. Blindly adding Wikipedia links to your pages doesn’t add value.

Is there any difference if an internal link is under the word “here”?

Lizzi: (20:14) Gabriel is asking, is there any difference if an internal link is under the word “here” or if it is linked in a keyword?

  • A: (20:23) Hey Gabriel, good question. It doesn’t matter if it’s an internal link to something on your site or if it’s an external link pointing to something else, “here” is still bad link text. It could be pointing to any page, and it doesn’t tell us what the page is about. It’s much better to use words that are related to that topic so that users and search engines know what to expect from that link.
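
For example, on a hypothetical page, here is the same link with vague and with descriptive anchor text:

```html
<!-- Vague: "here" says nothing about the destination -->
<p>To learn about our delivery times, click <a href="/shipping/">here</a>.</p>

<!-- Descriptive: users and search engines know what to expect from the link -->
<p>Check our <a href="/shipping/">shipping and delivery times</a> before ordering.</p>
```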

Why does my news site’s traffic go up and down?

Gary: (20:46) Niraj is asking, I follow the same pattern of optimization, but my news website traffic is up and down.

  • A: (20:53) Well, for most sites, it's actually normal to have periodic traffic fluctuations. For example, seasonality affects e-commerce sites quite a bit. For news sites specifically, user interest in the topics you cover can cause fluctuations. But all in all, it is normal and not something that you usually have to worry about.

Is changing the URL often impacting my SEO performance?

John: (21:16) Is changing the URL often impacting my SEO performance? For example, a grocery site might change a URL from /christmas/turkey-meat to /easter/turkey-meat. The page is the same, and the URL is just changed with a redirect. 

  • A: (21:35) I wouldn’t recommend constantly changing URLs. At the same time, if you must change your URLs, then definitely make sure to redirect appropriately. 

How does freshness play a role in ranking seasonal queries like Black Friday deals?

Alan: (21:45) How does freshness play a role in the ranking? For seasonal queries like Black Friday deals, it makes sense to update frequently as news or deals are released, but what about something less seasonal?

  • A: (21:58) You may decide to update a Black Friday deals page frequently to reflect the latest offers as they come out. Remember, however, that Google does not guarantee how frequently a page will be reindexed, so not all of the updates are guaranteed to be indexed. Also, a good quality page that does not change much may still be returned in search results if we think its content is still relevant. I would recommend focusing on creating useful content and not spending too much time thinking about how to make static pages more dynamic.

Is there a way to appeal Safe Search results?

John: (22:33) Adam asks, is there a way to appeal Safe Search results? I work with a client that has been blocked from their own brand term while resellers and affiliates are still appearing. 

  • A: (22:44) So first off, I think it’s important to realize that Safe Search is not just about adult content. There’s a bit of nuance involved there, so it’s good to review the documentation. Should you feel that your website is ultimately incorrectly classified, there’s a review request link in an article called “SafeSearch and your website” in the Search developer documentation. 

How can I update my site’s brand name?

Lizzi: (23:08) Danny is asking: My site name in search reflects the old domain's brand name, even with structured data and meta tags. What else can I do to update this information?

  • A: (23:22) Hello, Danny. The site name documentation has a troubleshooting section with a list of things to check that’s more detailed than what I can cover here. You want to make sure that your site name is consistent across the entire site, not just in the markup. And also, check any other versions of your site and make sure that those are updated too. For example, HTTP and HTTPS. If you’re still not having any luck, go to the Search Console help forum and make posts there. The folks there can help.
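
For reference, the site name markup in question is WebSite structured data on the home page; a minimal sketch with placeholder values looks like this, and the name needs to match what the site itself uses consistently:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "New Brand Name",
  "alternateName": "NBN",
  "url": "https://www.example.com/"
}
</script>
```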

When migrating platforms, do URLs need to remain the same?

John: (23:51) Aamir asks, while migrating a website from Blogger to WordPress, do the URLs need to be the same, or can I do a bulk 301 redirect?

  • A: (24:02) You don’t need to keep the URLs the same. With many platform migrations, that’s almost impossible to do. The important part is that all old URLs redirect to whatever specific new URLs are relevant. Don’t just redirect everything from one domain to a single page on the other; instead, redirect on a per-URL basis.

How much content do I have to update to lift an algorithmic penalty?

Duy: (24:24) Johan asked if a website gets algorithmically penalized for thin content, how much of the website’s content do you have to update before the penalty is lifted? 

  • A: (24:34) Well, it’s generally a good idea to clean up low-quality or spammy content that you may have created in the past. For algorithmic actions, it can take us several months to reevaluate your site and determine that it’s no longer spammy. 

How can I fix long indexing lead times for my Google-owned site?

John: (24:49) Vinay asks, we’ve set up Google Search Console for a Google-owned website where the pages are dynamically generated. We’d like to get insights into what we should do to fix long indexing lead times.

  • A: (24:05) Well, it’s interesting to see someone from Google posting here. As you listeners might know, my team is not able to give any Google sites SEO advice internally, so they have to pop in here like anyone else. First off, as with any bigger website, I’d recommend finding an SEO agency to help with this holistically. Within Google, in the marketing organization, there are folks that work with external SEO companies, for example. Offhand, one big issue I noticed was that the website doesn’t use normal HTML links, which basically makes crawling it a matter of chance. For JavaScript sites, I’d recommend checking out the guidance in our documentation and our videos. 

How does the helpful content system determine that visitors are satisfied?

Duy: (25:49) Joshua asked, how exactly does the helpful content system determine whether visitors feel they’ve had a satisfying experience?

  • A: (25:58) We published a pretty comprehensive article called “What creators should know about Google’s August 2022 helpful content update”, where we outline the type of questions you can ask yourself to determine whether or not you’re creating helpful content for users. For example: are you focusing enough on people-first content? Are you creating content to attract search users using lots of automation tools? Did you become an expert on a topic overnight and create many articles seemingly out of nowhere? Personally, I think not just SEOs but also digital marketers, content writers, and site owners should be familiar with these concepts in order to create the best content and experience for users. 

Should we have 404 or noindex pages created by bots on our website?

John: (26:40) Ryan asks, bots have swarmed our website and caused millions of real URLs with code tacked on to be indexed on our website through a vulnerability in our platform. Should we 404 these pages or noindex them?

  • A: (26:56) Either using a 404 HTTP result code or a noindex robots meta tag is fine; having these on millions of pages doesn’t cause problems. Depending on your setup, you could also use robots.txt to disallow crawling of those URLs. The effects will linger in Search Console’s reporting for a longer time, but if you’re sure that it’s fixed, you should be all set.
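As a minimal illustration of the noindex option John mentions (the 404 alternative is simply returning a 404 status code for those URLs), a robots meta tag served on each unwanted page could look like this:

    <!-- In the <head> of each bot-generated URL: ask search engines not to index it -->
    <meta name="robots" content="noindex">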

Will adding a single post in Spanish to my English site affect my search rankings?

Lizzi: (27:20) Bryan asks if my site is all in English and I add a single post in Spanish, will that affect search rankings? 

  • A: (27:29) Hey, Bryan. Sure. That’s totally fine. It’s not going to harm your search rankings. I also recommend checking out our guide to managing multilingual websites, as there’s a lot more to cover when you’re thinking about publishing content in multiple languages.

Do all penalties show up in Search Console?

Duy: (27:44) Stepan asked, in Google Search Console there is a section called Manual Actions. Does Google show all penalties there, and does it always notify domain owners when a domain is hit with a penalty?

  • A: (27:58) We have manual actions, which are issued by human reviewers, and algorithmic actions, which are driven entirely by our spam algorithms, such as SpamBrain. We only communicate manual actions to site owners through Search Console. You can search for the manual actions report; there’s a page there that lists a lot of information to help you understand more about our different types of manual actions, as well as how to file a reconsideration request once you’ve received and addressed a manual action.

Will SEO decline? Should I study something different?

John: (28:33) Caroline asks, will SEO decline in favour of SEA and SMA? I’m starting my internship and need to know if I better redirect my path or continue on my way and specialise myself in accessibility.

  • A: (28:49) I’m not quite sure what SMA is, but regardless, there are many critical parts that lead to a website’s and a business’s success. I definitely wouldn’t say that you shouldn’t focus on SEO, but at the same time, it’s not, well, the answer to everything. My recommendation would be to try things out. Find where your passions and your talents lie, and then try more of that. Over the years things will definitely change, as will your interests. In my opinion, it’s better to try and evolve than to wait for the ultimate answer. 

Does the number of outgoing links affect my rankings?

Duy: (29:24) Jemmy asked, does the number of outgoing links, both internal and external, dilute PageRank, or is PageRank distributed differently for each type of link?

  • A: (29:35) I think you might be overthinking several things. First of all, focusing too much on PageRank by building unnatural links, whether it violates a policy or not, takes time and effort away from other more important factors on your site, such as helpful content and a great user experience. Second of all, internal links allow us not only to discover new pages but also to understand your site better. Limiting them explicitly would likely do more harm than good.

Conclusion

John: (30:07) And that was it for this episode. I hope you found the questions and answers useful. If there’s anything you submitted which didn’t get covered here, I’d recommend posting in the Search Central Help community. There are lots of passionate experts there who can help you to narrow things down. And of course, if there’s more on your mind, please submit those questions with the form linked below. Your questions here are useful to us and to those who catch up on these recordings, so please keep them coming. If you have general feedback about these episodes, let us know in the comments or ping us on social media. I hope the year has gone well. For us, things have certainly evolved over the course of the year with, well, ups and downs and a bunch of new launches. I’m looking forward to catching up with you again next year, perhaps in another episode of these office hours. In the meantime, may your site’s traffic go up and your crawl errors go down.
Have a great new year and see you soon. Bye!

Sign up for our Webmaster Hangouts today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH 

Guide To Current & Retired Google Ranking Systems

The new Google Ranking Systems guide describes the systems Google currently uses to rank search results, as well as the retired systems that are no longer in use.

What is the difference between systems and updates?

Systems are always running in the background. Updates, however, refer to one-time changes to the ranking systems. For example, helpful content systems run in the background whenever Google provides search results but may receive updates to improve performance. Other examples of one-time changes to ranking systems are core algorithm updates and spam updates.

Let’s look at the highlights from the Google Ranking System guide.

Current Google Ranking Systems

Here is a list of the Google ranking systems currently in operation.

  • BERT
    Stands for Bidirectional Encoder Representations from Transformers and allows Google to understand how word combinations can express different meanings and intentions.
  • Crisis information system
    Google has procedures to provide specific sets of information in times of crisis. For instance, SOS alerts when searching for natural disasters.
  • Deduplication system
    The Google search engine tries to avoid duplicate or near-duplicate web pages.
  • The exact match domain system
    This system prevents Google from overly trusting websites with domain names that match search queries.
  • Freshness system
    Designed to display up-to-date content where it’s needed and where it’s expected.
  • Helpful content system
    Makes it easy for people to see original, useful content rather than content created primarily to drive traffic from search engines.
  • Link analysis systems and PageRank
    Determines which pages are the most useful in response to a query based on how the pages are linked.
  • Local news systems
    Displays local news sources relevant to your search query.
  • MUM or Multitask Unified Model
    An artificial intelligence system capable of both understanding and generating language. It is used for specific features, such as featured snippet callouts, and is not used for overall ranking.
  • Neural matching
    Helps Google understand and match conceptual expressions for queries and pages.
  • Original content systems
    Helps Google display original content, including original reporting, in search results.
  • Removal-based demotion systems
    Downgrades websites that are subject to a high volume of content removal requests.
  • Page experience system
    Evaluates various criteria to determine if a website provides a good user experience.
  • Passage ranking system
    An artificial intelligence system that Google uses to identify individual sections or “passages” of a web page to better understand how relevant the page is to searchers.
  • Product reviews system
    Rewards quality product reviews by experienced writers with insightful analysis and original research.
  • RankBrain
    An artificial intelligence system that helps Google understand the relationship between words and concepts. Allows Google to return results that contain different terms than the exact words used in the query.
  • Reliable information systems
    Google has several techniques for displaying reliable information, such as promoting authoritative pages, demoting low-quality content, and rewarding high-quality journalism.
  • Site diversity system
    Prevents Google from showing more than two listings from the same website in the top search results.
  • Spam detection system
    Processes content or activity that violates Google’s spam policy.

Outdated Google Ranking Systems

The following systems are listed for historical purposes. They have been incorporated into other systems or into Google’s core ranking system.

  • Hummingbird
    A significant overhaul of Google’s ranking system, introduced in 2013.
  • Mobile-friendly ranking system
    Prioritised content that performs better on mobile devices. It has since been incorporated into the page experience system.
  • Page speed system
    Prioritised content that loads quickly on mobile devices, introduced in 2018. It has since been incorporated into the page experience system.
  • Panda system
    Prioritised quality and original content, introduced in 2011. In 2015, it became part of Google’s primary ranking system.
  • Penguin system
    Downgraded websites that use spam link building, introduced in 2012.
  • Secure site system
    Prioritised HTTPS-protected websites, introduced in 2014. It has since become part of the page experience system.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

Leonidas Comino – Founder & CEO

Leo is a Deloitte award-winning and Forbes-published digital business builder with over a decade of success in the industry, working with market-leading brands.

Like what we do? Come work with us

How to optimise images for your eCommerce website

Introduction

ALAN KENT: (00:07) They say a picture is worth a thousand words. And there is no field where that is more true than eCommerce. My name is Alan Kent, and I’m a developer advocate at Google. In this episode, I’ll explore six tips to optimise images on your eCommerce website. It is not uncommon for an eCommerce page to reference hundreds of images. These images are everything from full-sized product images to smaller product thumbnails, category images, banners, page decorations, and button icons. Given their abundance, how can you make sure that they are fast and efficient?

Tip #1: Eliminate image Cumulative Layout Shift

(00:46) The first tip for optimising the images used on your site is to eliminate cumulative layout shift. Cumulative Layout Shift, or CLS for short, is where the contents of a page visibly move around on the screen. You know those sites where you stop reading, or you try to click on a link, and suddenly the page content moves? It’s really annoying. Images can contribute to this problem if used incorrectly, and CLS is highly impactful to a user’s experience.

Google has defined CLS as one of three Core Web Vitals. These are factors that Google considers important for user experience on all web pages. So why can images cause CLS? To load a page, your web browser starts to download the HTML markup of the page. Most browsers will start displaying the top of the page before the whole page has been downloaded. To reduce your wait time, any references to images encountered are added to a queue of resources to download. JavaScript and CSS files are also added to the queue. These files are then downloaded in parallel to the main page, a few at a time. The problem is when the browser does not know the image dimensions before rendering the page content. Layout shift occurs if the browser discovers it did not leave the right amount of space for an image. CLS is often easy to spot on a page manually by watching it load. But there are also automated tools that can measure it. But let’s first take a slight detour and talk about lab versus field data.

Lab data is collected by testing tools you point to your web page, such as Google’s Lighthouse. You can perform a lab test at any time and have complete control over the process. Field data is collected by measuring what happens to real users on your site. In production, field data can be collected using JavaScript you embed in your own web pages or via anonymized data collected by Chrome. Chrome makes data for popular sites available in the Chrome User Experience Report, or CrUX, for short. Lab data can be easier for developers to collect and analyze, but it has some limitations. For example, data can miss shifts that occur after a page finishes loading. Ultimately, it is field data that demonstrates whether you’ve really solved a problem for your users. PageSpeed Insights is a useful tool, as it presents both lab and field data in one report. For CLS, look for warnings such as avoiding large layout shifts and images that do not have explicit width and height. Just be aware that layout shifts in the report can be caused by things other than images, such as JavaScript. Fixing image CLS issues can be as simple as including image dimensions in the HTML markup. That way, the browser immediately knows exactly how much space to reserve for the image. There are other CSS tricks that can be used as well if the CSS is loaded properly.
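As a rough sketch of the fix described above (the file name, alt text, and dimensions are illustrative), reserving space with explicit width and height attributes looks like this:

    <!-- Explicit dimensions let the browser reserve the right amount of space
         before the image downloads, so surrounding content does not shift -->
    <img src="/images/product-hero.jpg" alt="Product hero image"
         width="800" height="600">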

Tip #2: Correctly size your images

(3:56) The second tip is to pick the right width and height for your images. Larger files take longer to download, particularly on mobile phones with slower network connections. Larger files also require more processing time, especially on mobile phones with less powerful CPUs. Sizing images correctly can be complicated by the range of device sizes and resolutions that access your site. If the browser shrinks or crops the image, the downloaded file is larger than needed, which is wasteful. One easy way to detect incorrectly sized images is the Properly Size Images section under Opportunities in the PageSpeed Insights report. PageSpeed Insights identifies images on a page that have larger dimensions than needed, listing their URLs. Once you have detected there is a problem, how do you fix it? Responsive images refer to techniques for making images behave well on different-sized devices. For example, in HTML there is a srcset attribute that allows you to list URLs for different sizes and formats of an image so the browser can pick the best one to download. This requires you to resize the images in advance or perform image resizing on demand. If resizing images is too much work for your own site, consider using a Content Delivery Network, or CDN. Many such services can resize images and convert them to more efficient formats on your behalf.
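A minimal sketch of the srcset approach mentioned above, assuming you have pre-generated the image at several widths (the file names and breakpoint are illustrative):

    <!-- The browser picks the smallest candidate that is still sharp for the
         current viewport width and device pixel ratio -->
    <img src="/images/teapot-800.jpg"
         srcset="/images/teapot-400.jpg 400w,
                 /images/teapot-800.jpg 800w,
                 /images/teapot-1600.jpg 1600w"
         sizes="(max-width: 600px) 100vw, 50vw"
         alt="Red ceramic teapot"
         width="800" height="600">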

Tip #3: Use the best image file format

(05:28) The next tip is to think about the file format of your images, such as whether to use PNG, JPEG, or WebP files. The file format affects the file size. Care should be taken, however, as formats such as JPEG and WebP can reduce file sizes using lossy compression algorithms. Lossy means image quality may be reduced as a trade-off for reducing the file size. If pixel-perfect images are required, such as for button icons, less efficient but pixel-perfect formats should be used. While lower-quality images may sound like a bad idea, remember that the degradation in quality may not be noticeable to shoppers, and the speed benefit can be substantial. Shoppers may abandon your page if it takes too long to load. To detect whether your site can benefit from using a different image format, look at the Serve Images in Next-Gen Formats section of the PageSpeed Insights report. This report lists images on a page that are candidates to be converted to a more efficient file format. So is there a single best image format to use? One complication is that not all image formats work in all browsers. The caniuse.com site can be used to check which browsers support which image file formats. For example, WebP is now supported by almost all browsers in use, so it offers a good combination of efficiency and adoption. Alternatively, rather than picking a single format, you can have your website return the most efficient format that the browser says it supports. Again, this is a service offered by CDNs.
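One common way to serve a more efficient format with a fallback, sketched here with illustrative file names, is the picture element:

    <!-- Browsers that support WebP use the first source; older browsers fall
         back to the JPEG inside the <img> tag -->
    <picture>
      <source srcset="/images/teapot.webp" type="image/webp">
      <img src="/images/teapot.jpg" alt="Red ceramic teapot" width="800" height="600">
    </picture>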

Tip #4: Compress images appropriately

(07:17) Tip number four is to use the right quality factor for your images so they are encoded efficiently while retaining the desired image quality. The Encode Images Efficiently section of the PageSpeed Insights report can be used to identify candidate images for compression optimisation. The report also shows potential file size savings. Be aware, however, that the report does not perform a visual check on your compressed images; it is based on commonly used compression factors. To find a quality factor you are happy with, try your favourite image conversion tool on several images using different quality values. A common default value for WebP is 75. The Squoosh.app site can be useful for this purpose, as it makes it easy to compare the before and after versions of images. Remember also that there are times when you want higher-resolution images, such as when you want to allow the shopper to zoom in on a product image. Want to go deeper? Jake and Surma gave a great session on image compression at web.dev Live.

Tip #5: Cache images in the browser

(08:22) Tip number five is to tell the browser how long it can safely cache images. When you return an image from your site, you can include an HTTP response header with caching guidance, such as how long it is recommended for a browser to cache an image. One approach to detect whether the HTTP response cache headers have been set appropriately on your site is, again, to use the PageSpeed Insights report. The Serve Static Assets With an Efficient Cache Policy section of the report identifies images that may benefit from caching improvements. Another approach is to use the Network tab in the developer tools inside Chrome to examine the HTTP cache response headers. To fix issues on your site, check to see if you have platform or web server settings you can change to adjust the cache lifetime for images on your site. If you do not change images frequently, or if you always give changed images a new URL, then you can set a very long cache lifetime. In addition to a cache duration, using a CDN frequently makes downloads faster by caching copies of your images in multiple locations around the world, closer to where users connect from.

Tip #6: Correctly sequence your image downloads

(09:37) The final tip is a more advanced one: correctly sequencing the order in which resources, including images, are downloaded can significantly improve page performance. Because downloading images one by one can be slow, browsers using HTTP/1 typically download several images in parallel over independent network connections to the website. If the website supports HTTP/2, most browsers now multiplex downloads over a single network connection. This is generally faster and avoids problems such as large files blocking the downloads of smaller files. Whichever approach is used, there is still a network bandwidth bottleneck. In general, you want images to be downloaded in the following order. First, you want to download large hero images at the top of the page, as they can affect the Largest Contentful Paint score for the page. Largest Contentful Paint, or LCP for short, is the time it takes to show the user the main content of the screen. Largest Contentful Paint, like Cumulative Layout Shift, is a Core Web Vital metric. Next, you want other images that will be visible without scrolling to be downloaded. Images visible without the user scrolling are referred to as above the fold; the rest are referred to as below the fold. As a web page may be viewed on devices with different screen sizes, it is common to estimate which images are above and below the fold by checking your site on multiple devices. Finally, you want images that are just off the screen to be downloaded so that they are ready for display when a user starts scrolling. Other images that are not likely to be displayed soon are often best loaded lazily; if the user does not scroll the page, fetching them would be a waste of resources. To detect whether your site is loading images efficiently, again, the PageSpeed Insights report can help. For example, the Defer Offscreen Images section of the report identifies images that could be loaded after other images. There are other sections that can be useful, such as Avoid Chaining Critical Resources, although these chains typically involve JavaScript and CSS files. A common technique to improve the order of image loading is lazy loading, where images are not downloaded until the user scrolls to that portion of the page.

Lazy loading was originally implemented using JavaScript, but now most browsers support the loading="lazy" attribute in HTML. Care should be taken, as performance degradation can occur if lazy loading is used for images above the fold. Recent versions of Lighthouse will highlight if an image is lazily loaded in a way that will impact LCP. With the advent of HTTP/2, there are additional optimisations that are possible if the browser and website both support it. An HTTP/2 website can start pushing images that it knows the browser is going to need, without waiting for the browser to request them. HTTP/2 also allows browsers to download multiple images in parallel over a single network connection. To take advantage of HTTP/2, either your web server must be configured so that it knows which resources to push, or you can use a CDN with HTTP/2 support and configure it to push resources as required.
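A small sketch of the sequencing idea from this tip, with illustrative file names: eager-load the above-the-fold hero and lazy-load images further down the page.

    <!-- Above the fold: load eagerly so the hero does not delay Largest Contentful Paint -->
    <img src="/images/homepage-hero.jpg" alt="Holiday sale banner"
         width="1200" height="600">

    <!-- Below the fold: defer the download until the user scrolls near it -->
    <img src="/images/related-product.jpg" alt="Related product thumbnail"
         width="300" height="300" loading="lazy">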

Conclusion

(12:59) To wrap up, I’ve shown common problems that can occur on eCommerce sites with static images. Some have easy fixes, such as ensuring that image tags in HTML always specify width and height attributes, or using the loading="lazy" image attribute. 

There are more advanced techniques that you can implement directly on your website, but it may be easiest to use a third-party CDN with suitable support. Such services can: 

  • Serve images in the best format supported by the browser; 
  • Convert images from a single source image to more efficient formats as needed; 
  • Pre-scale images to multiple sizes for efficient download and display across a range of devices;
  • Compress images to reduce download sizes. 

Thanks for watching. If you enjoyed this video, make sure to click Subscribe. Google Search Central has new videos posted weekly.

Sign up for eCommerce Essentials today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH 

Customer retention strategy for eCommerce before, during and after the holiday season

Succeeding with engaging marketing activities in the holiday season requires thoughtful planning of customer acquisition and retention strategies and their creative realisation. To help you get started, LION compiled a list of initiatives grouped into:

  • Tactical – achieved by readjusting standard channels and tools that are typically already in place in the majority of eCommerce companies; the outcomes will be seen immediately after application;
  • Strategic – tools for customer retention that need more time and effort to plan, launch and manage; however, the results form a solid base for the entire business’s longevity and subsequent scaling.

Tactical initiatives

Customer research and feedback

Generally, understanding the customer starts from target audience research and continues on the various marketing channel touch points. Capture the intention and attention of holiday shoppers by:

  • Surveying to collect information about the holiday shopping plans of the customers
  • Researching general shopping trends in the market
  • Sharing gift lists with popular items
  • Introducing holiday trends to inspire
  • Running teasing campaigns
  • Launching early access for loyal customers
  • Optimising the purchase process.

However, the customer journey doesn’t end with the purchase! 37% of respondents claim that more than five purchases are needed to create solid brand loyalty, and 33% say more than three. Therefore, post-purchase communications are invaluable contributors to the overall business management process. Why did customers buy this particular product? What is the level of their satisfaction? How do they use the product or service? If they don’t use it, then why? What is the definition of customer loyalty specifically for your company? Moreover, remember that enriched data and an extra occasion to be in touch with the customer are another perfect way to cross-sell.

Encourage users at the right moment to create an account

The importance of account creation for repurchasing and increasing the customer retention rate is apparent. The problem is that mandatory account creation can be an issue: many online shoppers prefer to purchase as a guest, so being asked to create an account could prevent them from placing their first order. However, you can suggest account creation immediately after the first purchase is completed, and even simplify the process by applying the information from the details of that order.

Send only value-adding emails

There is a list of must-have emails to start:

  • Welcome email
    Don’t miss the opportunity to use the email with the highest possible open rate – 50-60% – to its full capacity. Personalise it beyond just using the customer’s name by applying the details of the purchase and incorporating personalised blocks suggesting similar products and services.
  • Content email
    Send a selection of relevant content in different formats to maintain customer engagement even after the holiday sales season: promote new offers, bundles, and special gifts to a specific segment of the target audience; share relevant content from the company’s blog to establish the brand as a thought leader in the industry; interact with the audience by surveying and asking questions about their experience with the brand; and share updates about new products and information that your audience will find valuable.
  • Upsell email
    Existing customers already have a history of successful experiences. Therefore, they trust the company and are more eager to purchase again. At the same time, the data collected during the previous order allows one to personalise subsequent communications easily.
  • Abandoned cart email
    The goal is to suggest proceeding with the purchase, provide a promo code that can be applied to gain additional perks, or simply show that the company is ready to receive feedback.

Emails can help to build customer relationships before and after purchases, but only if they add benefits to the customer experience that holiday shoppers wouldn’t want to miss. It’s worth putting yourself in the client’s shoes and asking, “So what?” – the question a client asks when reading the email, critically assessing the information and the relevance of the prompt to act.

Retarget ads on social media

Apart from the organic coverage you can gain through appealing social media posts and by encouraging clients across various communication channels to follow and engage with the brand, consider plugging in the retargeting power of social media – one of the best customer retention strategies. Retargeting allows you to show ads on social platforms to people who have already engaged with the website, from those who visited it once to those who abandoned their cart.

Discount or credit for those who return

When margins are low, applying discount or credit strategies could negatively affect the bottom line. However, offering them on existing clients’ next purchases, or using them to retain those who haven’t purchased for a long time, could be a winning strategy to increase the customer retention rate. If you consider the discount or credit amount as a way to cut customer acquisition costs, increasing a standard discount of 10% up to 20% or even more doesn’t seem excessive.

Strategic initiatives

Boost your customer support to the next level

A proper level of customer support has become an unspoken gold standard in the highly competitive eCommerce field, and online shoppers are likely to be unpleasantly surprised if a company doesn’t meet these standards. However, creating additional value can add an element of surprise and delight to communications with clients, putting the business in a special place in the eyes of that particular customer and spotlighting it among the competitors:

  • Sustainable 24-hour service
    Attract customer support agents, sourcing them across different time zones to provide outstanding 24-hour service alongside sustainable working conditions.
  • Live chat
    The flexibility in when to send a request and receive a response makes live chat the communication type that eCommerce customers prefer over phone and email.
  • Omnichannel customer service options
    An omnichannel customer support strategy guarantees that you have agents spread across multiple channels, ready to meet and provide timely and eligible support to customers where they are.
  • FAQ page and store policies
    Big holiday sales seasons are the sources for a large amount of data collection, and it is an omission not to use insights to make the next year’s customer experience more advanced. Collect, analyse and systemise the information about the most frequent queries related to the business on FAQ pages and predefined store policies, and the next holiday sales season may proceed much more smoothly.

Own the responsibility even if others are to blame

The customer experience consists of different stages, elements and actors. Customers cannot, and shouldn’t have to, separate these elements; they perceive the experience as a whole, and if even something small slips, the overall impression of the customer experience can be damaged. Being able to own the responsibility for clients’ difficulties and turn negativity into positivity should be one of the key methods in customer retention strategy and loyalty management.

Personalise the customer journey

The main point of difference for an eCommerce business could be creating a customer journey that feels as if it was explicitly designed with customers in mind and tailored to the specific customer’s needs. Improve the customer retention rate by differentiating the company from its competitors and making it harder for loyal customers to drift away:

  • Positive emotions and entertaining experiences
    Gamification on the website, reusable packaging and other ideas at each stage of the customer journey – positive associations make the company product or service much more memorable in the customer’s mind.
  • Unexpected gifts and thank you notes
    Miniature versions of the product samples or personalised and branded thank you notes with a handwritten signature included in the order are a striking touch perfectly appropriate for the holiday seasons and birthdays.

Return policy as the security guarantee to clients

92% of consumers polled claim they will be ready to buy from an online store again if the product return process is easy. This comes down to feeling secure against the risk of wasting money, and extra time beyond what was already invested, on an item that may or may not match their requirements or the initially stated product description. Remind online shoppers that they can trust you if they’re not satisfied with the product, and they will be more likely to buy from your business, even if they’re unsure about a product.

A good return policy sets out all the conditions under which a request qualifies as a return and specifically underlines the situations when it does not. Make sure ahead of time that the return policy is clear, reasonable and fair. 

Customer loyalty program – never dying classics

A sustainable customer portfolio contains a balanced range of new and retained clients. The strategies for working with retained clients consider the frequency of purchases as the key indicator to focus on. The importance of customer loyalty and engagement cannot be overestimated. The essence of customer loyalty programs is to reward the customer for various actions, from authorising the credit card and actual purchasing to leaving a review and inviting a friend. What effort should be rewarded and how – depends on the business model specifics and strategy. Regardless, customer loyalty programs are proven to be one of the most efficient ways for customer retention.

Subscription service

Subscriptions provide regular revenue for the business by locking people into purchasing monthly. Moreover, they keep existing customers constantly engaged by delivering personalised experiences, and a subscription doesn’t have to be the entire business model for the online shop. For example, for eCommerce cosmetics retailers, monthly subscription boxes could include miniature versions of the best-selling products.

Conclusion

We hope these LION tips will help you succeed in highly competitive holiday season markets!

Whatever initiatives to retain the customers you choose, uphold a data-driven and creative approach. Don’t forget to measure the efficiency of your efforts and distinguish valuable insights to adjust and change the direction if needed. Last but not least – select dedicated partners in your technology stack.

At LION Digital, we value relevancy the most in complex customer retention strategies. Although we make agnostic recommendations based on customer needs, we recommend Yotpo, a quality service partner, to help accelerate our clients’ growth by enabling advocacy and maximising customer lifetime value. Yotpo includes the most advanced solutions for SMS marketing, loyalty and referrals, subscriptions, reviews, and visual user-generated content – you can choose depending on which customer retention strategies you want to apply.

Given the influx of volume to the website, Yotpo heavily emphasises leveraging all the first-party data collected during the holiday promotions for post-holiday communications to make smarter, segmented audiences for email and SMS flows. Make this data work for you and provide hyper-personalized marketing to your subscribers. For example, get them back on site with a new product that complements one they have already purchased.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

ASSELYA Sekerova –
MARKETING & PROJECT MANAGER

Is your agency focusing on ROAS and not Revenue?

As the cost of paid media advertising has increased over the last 2-3 years of elevated eCommerce usage, we have seen Costs Per Click (CPCs) rise at an exponential rate as businesses scramble and fight to acquire market share & increase top-line revenue. As a result, the ability to maintain a decent Return On Ad Spend (ROAS) has been put in a compromising position, where to maintain the ROAS, the business may have to sacrifice top-line & paid media revenue contributions.

This has created a fork in the road that feels more like an ultimatum for the business: it must either choose an acceptable ROAS with slower revenue growth of 5-10% YoY, or a historically below-average ROAS with strong revenue growth of 20-30% YoY. A common sign that you may be focusing on the wrong goal is if your ads account ROAS is growing, but your full-site revenue is stagnant or even in decline over the same period.

Here at LION Digital, we have seen both of these scenarios play out & neither of them is good or bad, right or wrong. Your agency should be having an in-depth conversation with you monthly/quarterly about what your true business goals are overall. It needs to be about more than just monthly budgets, monthly revenue targets, new product launches & increasing awareness of your product/service. We need to be setting goals for 1, 2, 3 years into the future & then reverse engineering them to work out micro checkpoints that will lead you to your macro goal. This may seem like common knowledge, but somewhere along the line digital marketing has been misconstrued to mean that more money & big changes equal big returns & big improvements. Sadly, this is not true.

This puts digital media agencies in an interesting position where they can no longer pump out a one-size-fits-all approach & even the small businesses entering the online market need to have an in-depth understanding of what they are looking to get out of their marketing.

In this article, we’ll cover the most important points to consider when identifying goals for your business’s growth in 2022 & beyond.

Firstly, let’s break down what Return On Ad Spend (ROAS) is & whether it is the right goal for the business at this stage in its life cycle.

When we have a new client start at LION, the most common strategy we implement is to spend 1-3 months optimising the account to an acceptable ROAS we have set with the client & doing an in-depth technical audit to ensure that when it comes time to scale, we don’t have any issues behind the scenes that will hamper our ability or cause amplified inefficiencies. This is the most common issue we uncover in new accounts we onboard from other agencies: they have tried to scale in the past, but the agency hasn’t done anything apart from putting the budgets up & hoping for the best. Although some technical aspects may be time-consuming, laying a solid foundation by mastering the basics is, for LION, the only way we have found to ensure long-term success.

Once we have created a stable foundation, we begin to set multiple KPIs, benchmarks & goals to work towards in manageable steps. One of those KPIs is ROAS & it can be looked at in two ways based on your business/industry/season/market conditions:

  • Firstly, you may use ROAS as an indicator of inefficiency to protect your margins. This has been important for the last 2-3 years with increasing supply issues, logistics costs increasing & margins getting thinner.
  • Secondly, you may use ROAS as a floor metric for performance that you do not wish to drop below while trying to maximise revenue.

Neither of these approaches is good or bad, right or wrong. You simply need to be honest about what would benefit the business in the short vs long term.

The approach you choose & what the ROAS could look like are vastly different from one industry to another. The main points that drive differences are:

  • Cost Per Click (CPC), driven by the level of competition
  • Average Order Value (AOV) of your online business
  • Total Ad Spend Budget (Cost) Allocated across the account
  • Types of campaigns running, Brand/PMAX/Shopping/Remarketing/Display/Search

It’s important we cover this, as these metrics will heavily influence your ROAS metric & overall results. The most overlooked point is simply what types of campaigns are running & whether they align with the goals/approach you’re working towards. This point is the bread & butter of why there is a trade-off between ROAS & revenue.

For example, if we were to focus on ROAS, the budget split would be more towards retaining market share, retention, loyalty & efficiency of ad spend. This would likely skew the spend towards PMAX/Shopping/Remarketing & maybe some niche search campaigns.

On the flip side, the more the ad account tracks into generic search keyword territory, the higher the competition is going to be, the lower the conversion rate will be & the more ad costs are likely to increase. The results will be similar if the ad account tracks more towards Display ads, YouTube & broad awareness marketing.

However, these campaigns still have value & will likely contain the highest % of people who haven’t yet bought from you. It’s imperative that if you do decide to spend money on these campaign types, you have price competitiveness, stock availability, unique selling points, reasons to buy from you over a competitor & lastly, a suitable budget for a minimum of 1-3 months of ad run time.

That last point is crucial as paid search is vastly different from social ads in that the longer your ads run, the better your reputation, expected clickthrough rate & bounce rate becomes. This will improve your ad relevance in the eyes of Google & allow you to creep up the paid rankings over time. Unfortunately, in paid search in 2022, there is very little likelihood that dropping $10k into broad keywords on Black Friday is going to reap the business any type of results if you don’t have any form of foundation from previous months’ work.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

Sam McDonough –
Paid Media Director

WebMaster Hangout – Live from SEPTEMBER 07, 2022

A site that connects seekers and providers of household-related services.

LIZZI SASSMAN: (01:03) So the first question that we’ve got here is from Dimo, a site that connects seekers and providers of household-related services. Matching of listings is based on zip code, but our users come from all over Germany. The best value for users is to get local matches. Is there some kind of schema markup code to tell the Google algorithm to show my site also in the Local Pack? Please note we do not have local businesses and cannot utilise the local business markup code. The site is… and I’m going to redact that.

  • A: (01:36) Yes, so Dimo– as you noted, local business markup is for businesses with physical locations. And that means that there’s typically one physical location in the world for that place. So it should only show up for that city. And your other question, currently, there’s no rich result feature for online services only that you could use structured data for, but you can claim your business profile with Google Business Profile manager and specify a service area there. So I think that that could help.

Is there a way to measure page experience or the core web vitals on Safari browsers, and is there a way to improve the experience?

MARTIN SPLITT: (02:12)  Indra asks, Google Search Console shows excellent core web vital scores for our site, but I understand that it only shows details of Chrome users. A majority of our users browse using Safari. Is there a way to measure page experience or the core web vitals on Safari browsers, and is there a way to improve the experience?

  • A: (02:35) Well, you can’t really use Google Search Console for this, but you can definitely measure these things yourself with the browser developer tools in a Safari browser and maybe ask around if you have any data from Safari users through analytics, for instance. There’s nothing here that we can do for the page experience or Search Console’s page experience resource because the data is just not available.

How can I best switch from one domain to a new one?

JOHN MUELLER: (03:01) For the next question, I’ll paraphrase. How can I best switch from one domain to a new one? Should I clone all the content or just use 80% of the content? What is the fastest way to tell Google that they’re both my sites?

  • A: (03:17) We call this process a site migration. It’s fairly well documented, so I would look up the details in our documentation. To simplify and leave out a lot of details, ideally, you’d move the whole website, 1 to 1, to the new domain name and use permanent 301 redirects from the old domain to the new one. This is the easiest for our system to process. We can transfer everything directly. If you do other things like removing content, changing the page URLs, restructuring, or using a different design on the new domain, that all adds complexity and generally makes the process a little bit slower. That said, with a redirect, users will reach your new site, regardless of whether they use the old domain or the new one.

Do you support the use and the full range of schema.org entities when trying to understand the content of a page, outside of use cases such as rich snippets?

LIZZI SASSMAN: (04:04) And our next question is from IndeQuest1. Do you support the use and the full range of schema.org entities when trying to understand the content of a page outside of use cases such as rich snippets? Can you talk about any limitations that might exist that might be relevant for developers looking to make deeper use of the standard?

  • A: (04:26) So, to answer your question, no, Google does not support all of the schema.org entities that are available on schema.org. We have the search gallery which provides a full list of what we do support for rich snippets, like you mentioned, in Google Search results. But not all of those things are visual. We do talk about certain properties that might be more metadata-like, and that aren’t necessarily visible as a rich result. And that still helps Google to understand things, like authors or other metadata information about a page. So we are leveraging that kind of thing.

What could be the reason that the sitemap cannot be read by the Googlebot?

GARY ILLYES: (05:07) Anton Littau is asking, in Search Console, I get the message “sitemap could not be read” in the sitemap report. No other information is provided. What could be the reason that the sitemap cannot be read by the Googlebot?

  • A: (05:21) Good question. The “sitemap could not be read” message in Search Console may be caused by a number of issues, some of them technical, some of them related to the content quality of the site itself. Rarely, it may also be related to the hosting service, specifically, if you are hosting on a free domain or subdomain of your hoster, and the hoster is overrun by spam sites, that may also cause issues with fetching sitemaps.

We’ve got guides and tips that are illustrated on our website, and they’re not performing well in the SERP.

LIZZI SASSMAN: (05:53) Our next question is from Nicholas. We would like to know how algorithms treat cartoon illustrations. We’ve got guides and tips that are illustrated on our website, and they’re not performing well in the SERP. We tried to be unique, using some types of illustrations and persona to make our readers happy. Do you think we did not do it right?

  • A: (06:18) I don’t know because I don’t think I’ve ever seen your cartoons, but I can speak to how to improve your cartoon illustrations in SERP. So our recommendation would be to add text to the page to introduce the cartoons, plus alt text for each of the images. Think about what people will be searching for in Google Images to find your content. And use those kinds of descriptive words versus just saying the title of your cartoon. Hope that helps.

Does posting one content daily increase rankings?

GARY ILLYES: (06:46) Chibuzor Lawrence is asking, does posting one content daily increase rankings?

  • A: (06:53) No, posting daily or at any specific frequency, for that matter, doesn’t help with ranking better in Google Search results. However, the more pages you have in the Google index, the more your content may show up in Search results.

Does Google agree with the word count or not?

LIZZI SASSMAN: (07:09) OK, and the next question is from Suresh. About the helpful content update: only 10% write quality content, and the rest, 90%, don’t write quality, lengthy content. But how should they write quality content? Does Google agree with the word count or not?

  • A: (07:29) Well, nope, content can still be helpful whether it’s short or long. It just depends on the context and what that person is looking for. It doesn’t matter how many words, if it’s 500, 1,000. If it’s answering the user’s intent, then it’s fine. It can be helpful. These are not synonymous things.

When using words from a page title in the URL, should I include stopper words too?

JOHN MUELLER: (07:49) I’ll paraphrase the next question, hopefully, correctly. In short, when using words from a page title in the URL, should I include stopper words too? For example, should I call a page whyistheskyblue.HTML or whyskyblue.HTML?

  • A: (08:08) Well, thanks for asking. Words in URLs only play a tiny role in Google Search. I would recommend not overthinking it. Use the URLs that can last over time, avoid changing them too often, and try to make them useful for users. Whether you include stop words in them or not or decide to use numeric IDs, that’s totally up to you.

Do different bots type, image, and desktop share crawl budgets?

GARY ILLYES: (08:31) Sanjay Sanwal is asking: do different bots type, image, and desktop share crawl budget? And what about different hosts?

  • A: (08:40) Fantastic question. The short answer is yes, Googlebot and its friends share a single crawl budget. What this means for your site is that if you have lots of images, for example, Googlebot Images may use up some of the crawl budget that otherwise could have been used by Googlebot. In reality, this is not a concern for the vast majority of sites. So unless you have millions of pages and images or videos, I wouldn’t worry about it. It’s worth noting that the crawl budget is per host. So, for example, if you have one subdomain of example.com and another, different subdomain of example.com, they have different crawl budgets.

Request to 301 redirect the subdirectory to their new German site. Would you advise against it?

JOHN MUELLER: (09:24) Christopher asks: we’ve sold the German subdirectory of our website to another company. They request us to 301 redirect the subdirectory to their new German site. Would you advise against it? Would it hurt us?

  • A: (09:40) Well, on the one hand, it all feels kind of weird to sell just one language version of a website to someone else. On the other hand, why not? I don’t see any problems redirecting from there to a different website. The only thing I would watch out for, for security reasons, is that you avoid creating so-called open redirects, where any URL from there is redirected to an unknown third party. Otherwise, that sounds fine.

Can I expect to see clicks and impressions from this in the search appearance filter as we can see with some other rich results?

LIZZI SASSMAN: (10:08) Sam Gooch is asking: I’m experimenting with a new learning video, rich result, and can see it’s being picked up in Google Search Console. Can I expect to see clicks and impressions from this in the search appearance filter as we can see with some other rich results?

  • A: (10:23) Well, to answer this question specifically, there’s no guaranteed time that you’ll be able to see a specific rich result in Google Search after adding structured data. But I think what you’re asking about here is for a specific thing to be added to Search Console, and we’ll have to check with the team on the timeline for that. And we don’t pre-announce when certain things will be added to Search Console. But you can check the rich result status report for the learning video and make sure that you’re adding all of the right properties and that it’s valid and ready to go for Google to understand what it needs in order to generate a rich result. Hope that helps.

How big is the risk of penalising action if we use the same HTML structure, same components, layout, and same look and feel between the different brands?

JOHN MUELLER: (11:02) Roberto asks: we’re planning to share the same backend and front end for our two brands. We’re ranking quite well with both of them in Google. How big is the risk of penalising action if we use the same HTML structure, same components, layout, and same look and feel between the different brands? What would be different are the logos, fonts, and colours. Or would you suggest migrating to the same front end but keeping the different experience between the two brands?

  • A: (11:33) Well, this is a great question. Thanks for submitting it. First off, there’s no penalty or web spam manual action for having two almost identical websites. That said, if the URLs and the page content are the same across these two websites, then what can happen for identical pages is that our systems may pick one of the pages as a canonical page. This means we would focus our crawling, indexing, and ranking on that canonical page. For pages that aren’t identical, we generally index both of them. For example, if you have the same document on both websites, we’d pick one and only show that one in Search. In practice, that’s often fine. If you need both pages to be shown in Search, just make sure they’re significantly different, not just with a modified logo or colour scheme.

JavaScript SEO, what to avoid along with JavaScript links?

MARTIN SPLITT: (12:23) Anna Giaquinto asks, JavaScript SEO, what to avoid along with JavaScript links?

  • A: (12:30) Well, the thing with links is that you want to have a proper link, so avoid anything that isn’t a proper link. What is a proper link? Most importantly, it’s an HTML a tag with an href attribute that lists a resolvable URL, so not something like a javascript: URL. And that’s pretty much it. If you want to learn more about JavaScript-specific things for Search, you can go to the JavaScript beginner’s guide on developers.google.com/search and see all the things that you might want to look out for.
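A quick sketch of what Martin means by a proper link versus patterns to avoid; the URL and the goTo() function are made-up examples.

    <!-- A crawlable link: an <a> tag with a resolvable URL in href -->
    <a href="/products/red-teapot">Red teapot</a>

    <!-- Patterns to avoid: no href at all, or a javascript: pseudo-URL
         (goTo() is a hypothetical site function) -->
    <span onclick="goTo('/products/red-teapot')">Red teapot</span>
    <a href="javascript:goTo('/products/red-teapot')">Red teapot</a>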

I research a keyword that has no volume or keyword density, but we are appearing for those keywords on the first page. Should we target that keyword?

LIZZI SASSMAN: (13:05) Our next question is from Sakshi Singh. Let’s say I research a keyword that has no volume or keyword density, but we are appearing for those keywords on the first page. Should we target that keyword?

  • A: (13:19) Well, Sakshi, you can optimise for whatever keywords you want, and it’s not always about the keywords that have the most volume. I would think about how people should find your page and target those keywords.

Will audio content be given more priority and independent ranking following the helpful content algorithm update?

GARY ILLYES: (13:32) Kim Onasile is asking, hello, you previously advised that there are no SEO benefits to audio versions of text content and that audio-specific content doesn’t rank separately like video content. However, given you also said it might be that there are indirect effects like if users find this page more useful and they recommend it more, that’s something that could have an effect. Will audio content be given more priority and independent ranking following the helpful content algorithm update?

  • A: (14:07) This is an interesting question. And ignoring the helpful content algorithm update part, no, audio content, on its own, doesn’t play a role in the ranking of text results.

Is it OK to fetch meta contents through JavaScript?

MARTIN SPLITT: (14:33) Someone asked, is it OK to fetch meta contents through JavaScript? I think that means: is it OK to update meta tag data with JavaScript?

  • A: (14:44) While that is possible to do, it’s best not to. It may give Google Search mixed signals, and some features may not pick up the changes. Some specific search result types might not work the way you expect them to, or they might show incorrect information or miss something. So I would suggest not doing that.
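A small sketch of the contrast being described; the page title and description are made up for illustration:

<!-- Preferable: meta data already present in the server-rendered HTML. -->
<head>
  <title>Red running shoes – Example Store</title>
  <meta name="description" content="Lightweight red running shoes in sizes 36–46.">
</head>

<!-- The pattern the answer discourages: rewriting those same tags later
     with JavaScript, which can send Google Search mixed signals. -->
<script>
  document.title = 'Red running shoes – 20% off today';
  document.querySelector('meta[name="description"]')
          .setAttribute('content', 'Flash sale on running shoes.');
</script>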

Both of my websites have been hit by different updates, around 90% drops, and are suffering from some type of flag that is suppressing our sites until the soft penalty is lifted.

GARY ILLYES: (15:08) Anonymous is asking, both of my websites have been hit by different updates, around 90% drops, and are suffering from some type of flag that is suppressing our sites until the soft penalty is lifted. Or is there even a soft penalty?

  • A: (15:26) Good question. No, the named updates that we publish on the Rankings Updates page on Search Central are not penalties in any shape or form. They are adjustments to our ranking algorithms so that they surface even higher-quality and more relevant results to Search users. If your site has dropped in rankings after an update, follow our general guidelines for content, take a look at how you could improve your site as a whole, both from a content and a user experience perspective, and you may be able to increase your rankings again.

When would be the next possible update for the Search results?

JOHN MUELLER: (16:03) Ayon asks, when would be the next possible update for the Search results?

  • A: (16:09) Well, on our How Search Works site, we mentioned that we did over 4,000 updates in 2021. That’s a lot of updates. Personally, I think it’s critical to keep working on things that a lot of people use. Our users and your users expect to find things that they consider to be useful and relevant. And what that means can change over time. Many of these changes tend to be smaller and are not announced. The bigger ones, and especially the ones which you, as a site owner, can work on, are announced and listed in our documentation. So in short, expect us to keep working on our systems, just like you, hopefully, keep working on yours.

Does having a star aggregated ranking on recipes improve its position?

LIZZI SASSMAN: (16:54) And our next question is from Darius. So Darius is asking, does having a star aggregated ranking on recipes improve its position?

  • A: (17:05) I think what Darius is asking about is the stars that show up for recipes and with structured data and whether or not that has an effect on ranking. So while the stars are more visual and eye-catching, structured data in and of itself is not a ranking signal. And it isn’t guaranteed that these rich results will show up all the time. The Google algorithm looks at many things when it’s creating what it thinks is the best Search experience for someone. And that can depend on a lot of things, like the location, language, and device type.
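For context, a minimal sketch of the kind of Recipe markup that makes review stars eligible to appear; the recipe name, author, and rating figures are hypothetical, and Google’s documentation lists the full set of required properties:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Lemon drizzle cake",
  "image": "https://www.example.com/images/lemon-drizzle.jpg",
  "author": { "@type": "Person", "name": "Jane Baker" },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "ratingCount": "132"
  }
}
</script>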

When I don’t set a rel-canonical, then I can see the internal links in the search console in the links report. Is this normal?

JOHN MUELLER: (17:37) Christian asks: I have set the rel-canonical together with a noindex meta tag. When Google does not accept a canonical at all, all internal links are dropped. When I don’t set a rel-canonical, then I can see the internal links in the search console in the links report. Is this normal?

  • A: (17:55) Well, this is a complex question since it mixes somewhat unrelated things. A noindex says to drop everything and the rel-canonical hints that everything should be forwarded. So what does using both mean? Well, it’s essentially undefined. Our systems will try to do the best they can in a conflicting case like this, but a specific outcome is not guaranteed. If that’s fine with you, for example, if you need to use this setup for other search engines, then that’s fine with us too. If you want something specific to happen, then be as clear as possible for all search engines.
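To make the conflict concrete, here is a sketch of the ambiguous setup described in the question and the two clearer alternatives; the URL is hypothetical:

<!-- Ambiguous: noindex says "drop this page", while rel=canonical hints
     "consolidate signals to another URL". The outcome is undefined. -->
<head>
  <meta name="robots" content="noindex">
  <link rel="canonical" href="https://www.example.com/preferred-page/">
</head>

<!-- Clearer: keep only the signal you actually want. -->
<!-- Either consolidate to the preferred URL: -->
<link rel="canonical" href="https://www.example.com/preferred-page/">
<!-- Or keep the page out of the index entirely: -->
<meta name="robots" content="noindex">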

If a video is indexed in the video indexing report, is it still worth adding the video structured data on that page and why?

LIZZI SASSMAN: (18:33) And our next question is from Thijs. If a video is indexed in the video indexing report, is it still worth adding the video structured data on that page and why?

  • A: (18:47) Well, yes. Just because something’s indexed doesn’t mean that there’s not an opportunity to improve how it appears. Structured data helps Google understand more about your video, like what it’s about, the title, interaction statistics, and that kind of stuff. And adding structured data can make your videos eligible for other video features, like key moments. So it’s not just, oh, get your video indexed, and that’s it. There are other things that you can do to improve how your content appears on Google.
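As a sketch of what that extra information can look like, here is hypothetical VideoObject markup with a Clip entry of the sort that can make a video eligible for features like key moments; the video details and timestamps are invented, and Google’s documentation defines the required properties:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "How to repot a monstera",
  "description": "Step-by-step repotting guide.",
  "thumbnailUrl": "https://www.example.com/thumbs/repot.jpg",
  "uploadDate": "2022-09-01",
  "contentUrl": "https://www.example.com/videos/repot.mp4",
  "hasPart": [
    {
      "@type": "Clip",
      "name": "Choosing a pot",
      "startOffset": 30,
      "endOffset": 95,
      "url": "https://www.example.com/videos/repot?t=30"
    }
  ]
}
</script>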

Can I cloak a list with lots of products to Googlebot and show users a Load More button?

MARTIN SPLITT: (19:20) Tamás asks, can I cloak a list with lots of products to Googlebot and show users a Load More button?

  • A: (19:26) I think this is not cloaking, as what users see when they click on the Search result roughly matches what Googlebot sees. And if you have a Load More button, users will click that if they don’t see the product they are expecting there. So I don’t think this is cloaking, and that’s a solution that I think works from a crawling point of view.

Sign up for our Webmaster Hangouts today!


Why are customer reviews important for eCommerce, and how can they be managed efficiently?

There is a general understanding among eCommerce business owners that reviews are important. However, the true significance of client reviews is frequently overlooked in favour of other, at first glance more pressing, priorities in day-to-day operations. Year on year, more consumers read online reviews while searching for products and services. According to the 2022 State of Reviews report by LION’s partner REVIEWS.io, 94% of users say that reviews left by previous customers influence their purchase decisions. Moreover, 62% of respondents say that reviews influence them significantly, and only 6% report no impact at all.

Where to find eCommerce business, product and service reviews?

Although eCommerce customers use many channels online, three stand out as the most influential touchpoints for review management:

  • Google. Google remains the first place 75% of consumers look when searching for a new business. Google Seller Ratings and Google My Business help to build trust at the first point of contact for both paid and organic channels. Google Seller Ratings improve the performance of Google paid marketing by increasing an ad’s click-through rate, which in turn lowers the cost-per-click (CPC). Google My Business and, when properly integrated through structured data markup, Google 5-star ratings for individual products and services help businesses stand out in organic search results and capture top-of-funnel traffic before competitors (see the sketch after this list).
  • Social Media. Only 36% of customers go to social media to search directly for products and services, less than half the share that Google commands. Nevertheless, the speed at which information spreads and the amount of time users dedicate to these platforms every day make social media one of the most critical sources of client reviews.
  • Review sites and marketplaces. For some businesses, Yelp and other specialised review sites, as well as Amazon and similar marketplaces, are the core source of customer reviews and cannot be neglected.
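As a sketch of the data markup referred to in the Google bullet above, here is hypothetical Product markup whose aggregateRating and review fields make star ratings eligible to appear with organic product results; the product, figures, and URL are invented, and Google’s structured data documentation lists the exact requirements:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Bamboo water bottle 750ml",
  "image": "https://www.example-store.com/img/bottle.jpg",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "213"
  },
  "review": {
    "@type": "Review",
    "reviewRating": { "@type": "Rating", "ratingValue": "5" },
    "author": { "@type": "Person", "name": "Verified buyer" },
    "reviewBody": "Keeps drinks cold all day."
  }
}
</script>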

Trends and eCommerce customer reviews management

New requirements for trust

Years of fake review generation have finally borne fruit: eCommerce prospects now routinely ask whether companies fake their reviews and question any perfect picture of 5-star ratings. They inspect reviews’ relevance, authenticity, recency and consistency through a critical lens; 81% of respondents say that reviews must be recent and contain relevant information to have significant influence. Customers therefore expect more balanced ratings and quality reviews from verified sources that offer factual insight into a business, its products and its services.

Average rating matters

Even if it is only part of the bigger picture, 68% of respondents say that, before engaging with a business, they look for a company with an average rating of around 4 stars. In contrast, only 3% of consumers will consider companies with an average of 1 or 2 stars. At the same time, 68% at least somewhat agree that a high rating can only be trusted if a significant number of reviews support it.

Reviews before price

With the massive spread of online shopping and the eCommerce businesses that have grown to meet demand, shoppers’ behaviour is also becoming more sophisticated. In fact, reviews are now the most influential factor in choosing an online store, cited by 40% of respondents, overtaking even price at 27%, delivery time at 20% and free returns at 13%. Read another way, retailers with higher prices have an opportunity to outsell cheaper competitors offering the same goods simply by having better reviews.

Fewer purchases are made solely on the strength of a company’s own marketing messages; more people rely instead on the experience of others. Reviews increase the probability of unknown brands being discovered by customers and competing with the top brands in their categories. At the same time, the competitiveness of the eCommerce market means shoppers no longer have to put up with poor customer experiences, which amplifies the importance of client reviews even further.

Company’s response to feedback

If anything in eCommerce client feedback management is as important as past client experiences wrapped into words and images, it is the company’s response, especially the response to negative feedback: 90% of the eCommerce users approached answered “Yes” to the question “Do you read replies to negative reviews?”. Most merchants seem to understand this: 62% claim they respond to all or most of the reviews they receive, in contrast to 15% who say they never or rarely respond to online reviews.

Negative reviews first

Research shows that the first thing e-shoppers now do when studying reviews is filter for 1-star ratings to check the possible cons and weigh the risks. In the past, an unsatisfied customer could typically only influence people in their inner circle; today, a negative eCommerce review placed right alongside the product or service description can abruptly change the intention of any user who lands on the page. Responding to negative customer feedback promptly and adequately therefore has an even greater positive impact on the client’s decision-making process.

Review collection strategy

By nature, people are most eager to share their opinions at the extremes of perception, when the experience either exceeded or fell below expectations. An average customer with an intermediate level of satisfaction is usually not inclined to leave a review without encouragement. Over half of respondents admit to leaving online reviews four times a year or less, and 26% have never left a review at all. At the same time, only 5% of consumers say they never leave a review after a positive experience. eCommerce businesses should therefore focus on an effective review collection strategy that includes a 360-degree view of the customer and motivates engagement at the final stages of the customer journey.

Review collecting systems

According to 81% of businesses that participated in the study, review collection systems provide a profitable return on investment.

REVIEWS.io provides tools for collecting and managing company and product reviews, user-generated content and other reputation management technologies. The system integrates with all popular eCommerce solutions, including Shopify, Google, WooCommerce, Klaviyo, Magento and many more. REVIEWS.io is trusted by more than 8,200 brands, such as Cake Vaay, BoxRaw and Bloom & Wild, helping businesses grow through customer trust and advocacy.



Article by

Asselya Sekerova – Marketing & Project Director

6 Tips for Google Merchant Center

Introduction

ALAN KENT: (00:07) Google Merchant Center is a great way to share data about your eCommerce business with Google. Hi. My name is Alan Kent, and I’m a developer advocate at Google. In this episode, I’m going to share six tips on how to get the most out of Merchant Center for your presence in search results. The most common use for Merchant Center is to upload product data via structured feeds. Because feeds are designed to be read by computers, data is extracted more reliably than Googlebot crawling your site and extracting data from web page markup. If you’re familiar with structured data, you may wonder whether to embed structured data in web pages or provide a feed to the Merchant Center. Google’s recommendation is to do both. Google may cross-check feed data against your website. So product-structured data in web pages is still recommended even if you also provide Merchant Center feeds. If you have physical stores, you can also share inventory location data with Google. This can then be used by Google when answering queries for products near me.

Tip 1. Ensure products are indexed

(01:50) The Googlebot web crawler attempts to locate all products on your site by following links between pages. Googlebot, however, may miss pages in some circumstances. For example, you may have some products only reachable from on-site search results. Google typically does not enter search terms into the on-site search box to discover new pages. If you have a product page and are unsure if it is indexed, you can use the URL Inspection tool. This will report what Google Search knows about your page. You can also use the site colon URL as a search term to search for that specific URL. In a previous episode, I described creating a Sitemap file to list the important pages to index on your site. The Sitemap file is used by the Googlebot crawler to find pages on your site without relying solely on links between pages. But there is another way. Creating a Merchant Center product feed will help Google discover all the product pages on your website. These product page URLs are shared with the Googlebot crawler to potentially use as starting points for crawls of additional pages. It is, however, important to note that this and some other Merchant Center features are not available in all countries. Please refer to the Merchant Center Help Center for an up-to-date list of which features are available in which countries.

Tip 2. Check your prices are correct in the Search results

(03:26) The second tip is to check the accuracy of the product pricing data used by Google. If Google incorrectly extracts pricing data from your product pages, it may show your original price instead of your discounted price in search results. To check if Google is extracting price data accurately, quickly test a sample of results. You can search for a product page and check the price shown if rich results are displayed. Search using the site colon URL for your product page to return the web page as a search result. To accurately provide product information, such as list price, discounts, and net price, it is recommended to add structured data to your web pages and provide Merchant Center with structured feeds of your product data. This will help Google correctly interpret the pricing shown on product pages.
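As a sketch of the on-page half of that recommendation, here is hypothetical Product markup with an Offer whose price, priceCurrency and availability give Google an explicit, machine-readable price to check against the page and the feed; the product, SKU, price, and URL are invented:

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail running shoes",
  "sku": "TRS-2022-RED-42",
  "offers": {
    "@type": "Offer",
    "url": "https://www.example-store.com/trail-running-shoes",
    "priceCurrency": "AUD",
    "price": "129.00",
    "availability": "https://schema.org/InStock"
  }
}
</script>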

Tip 3. Minimise price and availability lag

(04:24) Tip number 3 is to minimise inconsistencies in pricing and availability data between your website and Google’s understanding of your site due to timing lags. For example, Google crawls web pages on your site according to its schedule. Changes on your site may not be noticed until the next Googlebot crawl. On the other hand, Merchant Center can be updated on a more consistent schedule, such as once a day or even once an hour. These delays can result in Merchant Center and search indexes lagging behind site changes, such as when a product goes out of stock. I described how to check Google’s understanding of your pricing data in the previous tip using a site colon URL query. In addition, Merchant Center may identify products whose pricing data differs from your website due to delays in processing. This can negatively impact your products’ search results until the discrepancy is resolved. Merchant Center also allows you to download all pricing data in bulk if you want to do a more exhaustive reconciliation of pricing data in Merchant Center against your website. To reduce lag, you can request Merchant Center to process your feeds more frequently. This can reduce the time lag between product data changing on your website and Google becoming aware of it. Another approach is to enable automated item updates in Merchant Center. This causes Merchant Center to automatically update collected pricing and stock-level data based on web page contents when discrepancies are detected. This is based on the assumption that your website updates in real time when pricing or availability changes.

Tip 4. Ensure your products are eligible for rich product results

(06:18) Tip number 4 is to check that your products are getting rich results treatment in search results. Rich results are displayed at Google’s discretion but rely on Google having rich product data. To check if your product pages are receiving rich results presentation treatment, you can use a site colon URL query to search for a specific web page. If it isn’t found, the page may not be indexed. You can also use the Google Search URL Inspection tool to verify whether Google is indexing your product page. To get the special rich product presentation format, it is recommended to provide structured data in your product pages and a product feed to Merchant Center. This will help ensure that Google correctly understands how to extract the product data from your product pages needed for rich product results. Also, check for error messages in Google Search Console and Merchant Center.

Tip 5. Share your product inventory data

(07:18) Tip number 5 is to ensure, if you have physical stores, that your products are being found when users add phrases such as “near me” to the queries. To test if locality data is being processed correctly, you may need to be physically near one of your physical stores and then search for your product with “near me”, or similar added. Register your physical store locations in your Google Business Profile, and then provide a local inventory feed to Merchant Center. The local inventory feed includes product identifiers and store codes, so Google knows where your inventory is physically located. You might also like to check out Pointy from Google. Pointy is a device that connects to your in-store point of sale system and automatically informs Google of inventory data from your physical store.

Tip 6. Sign up for the Shopping tab

(08:15) The final tip relates to the Shopping tab. You may find that your products are available in search results but do not appear on the Shopping tab. The easiest way to see if your products are present is to go to the Shopping tab and search for them. To be eligible for the Shopping tab, provide product data feeds via Merchant Center and opt in to Surfaces Across Google. Structured data on product pages alone is not sufficient to be included in Shopping tab search results.

Conclusion

(08:45) This is the final episode in the current series on improving the presence of your commerce website in search results. If you have topics you would like to see included in a future series, please leave a comment. If you have found the series useful and want to see more similar content, make sure to Like and Subscribe. Google Search Central publishes new content every week. Until next time, take care.

Sign up for eCommerce Essentials today!


Is your business suffering from the September slump?

Seasonality is an unavoidable challenge for any business. Whether your business sells products geared toward winter or summer pursuits, or a product that is in demand all year round, we all have to weather ups and downs throughout the calendar year.

The majority of eCommerce clients’ peak season is, unsurprisingly, October to December, with Black Friday, Cyber Monday and the holiday gift-giving period boosting conversion rates and driving up revenue. However, this often means dealing with a much softer market in September. The IMRG Online Retail Index noted a 12.5% year-on-year drop in online sales in 2021, and we can see a similar trend across the majority of clients this year.

On average, across our accounts, we can see conversion rates drop by a full percentage point or more compared to August, which has affected performance across a wide range of industries. However, CTRs are up by 25% on average, suggesting that consumers are in a stage of “browsing, not buying”.

Considering the current economic climate, with consumers seeing a constant barrage of news around supply chain issues, rising inflation rates, and reports of an impending recession, this drop in performance is, of course, a concern to many. However, it’s not all bad news.

“Early data from Morning Consult, a global intelligence company, finds that people plan to spend about the same amount on gifts as they did last year.” Inflation rates and concerns around cost saving, however, mean they will be in the market for deals and discounts.

With this in mind, here are a few tips from the LION team on how to weather the storm and win in the holiday season:

  1. Capitalise on any low-cost traffic to the site now. Consumers who visit your site have put you in their consideration set and may come back to purchase in the following months. Invest in owned channels like SEO, Email and CRO, make the most of the visitors you already have, and look at how you can expand this audience.
  2. Start planning for sales and promotions now, and talk to the team about the best way to market these. You might want to consider adding retargeting to your strategy to let people know about discounts or flesh out your email strategy to capture low-hanging fruit. Think creatively about how you will stand out from the crowd during Black Friday and other upcoming holiday sales periods.
  3. Consider your ROAS thresholds carefully. While we don’t recommend going dark during this time, don’t spend at the cost of margin to the business when the money can be better used later in the year.
  4. Leverage new formats like YouTube shopping and awareness channels to bring new customers to the brand.

Reach out to the team at LION for advice and strategy tips that are personalised to your business.



Article by

Leonidas Comino – Founder & CEO

Leo is a Deloitte award-winning and Forbes-published digital business builder with over a decade of success in the industry, working with market-leading brands.

Like what we do? Come work with us