March 2022 - Premium eCommerce marketing services

The eCommerce game is changing in 2022

Many new businesses have transitioned online and started investing in digital strategies over the last two years, and this has created a supply-and-demand problem in the paid media auctions.

CPCs have risen by 150% or more compared to previous years, and this trend doesn’t seem to be softening in a hurry.

I’m sure you’ve felt the pinch of this trend if your top-line revenue relies heavily on paid advertising.

While LION is across hundreds of data points and we keep a close eye on our clients’ ad spend to make sure we’re squeezing the most revenue possible from this channel, one thing is for sure:

THE ECOMMERCE GAME IS CHANGING IN 2022

In my experience, the email channel has always been a fairly misunderstood part of eCommerce. Most of the brands I start working with only have checkout abandonment automations and a welcome series turned on, and run one or two campaigns per week. There is so much more that can be done with this channel to enhance the customer journey and grow the database,

and the Klaviyo platform is the only way to do it!

The real power of the email channel is twofold. You own the experience you create for your customers (aka you are in control), and you can target highly specific communication to your loyal customers at pivotal points in their journey (read: help them take meaningful actions with your brand). Klaviyo gives you the control you need to do both.

Nutrition Warehouse

Nutrition Warehouse, one of the largest supplement brands in the country, started working with us in mid-January and has seen incredible growth in a short period of time.

They achieved 294% month-on-month growth, a 40% increase in conversions from email, and 7,300% growth in their SMS channel, and March to date has seen the trend continue, with 99% growth on the previous month and an average transaction value $9 higher than the site average. A large contributor to this growth has been the 45 email and SMS campaigns sent last month using Klaviyo’s sophisticated segmentation tools.

Highlights: 294% MoM growth · +40% conversion rate · 7,300% SMS channel growth · 99% MTD growth · +$9 AOV vs site average

Shoe Me

Shoe Me, a specialty shoe store we work with, saw 15% growth in February over January, which was itself already up 56% on December. Historically, sales are down at the start of the year; however, sending a much higher volume of campaigns than they did previously to highly segmented audiences has generated these returns. We moved from 11 campaigns per month in December to 19 in January and February.

Increasing from 11 to 19 campaigns per month between December 2021 and February 2022 allowed for MoM growth in a usually quiet sales period.

Dust N Boots

We started working with Dust N Boots, an apparel brand that sells country workwear, in July last year. They had no email program at all; by February this year the channel accounted for 34% of their total revenue, and it has not cannibalised revenue from other channels.

I hope you see the potential that email has to change the game in 2022.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

Leonidas Comino – Founder & CEO

Leo is a Deloitte award-winning and Forbes-published digital business builder with over a decade of success in the industry, working with market-leading brands.

Like what we do? Come work with us

WebMaster Hangout – Live from February 25, 2022

People Also Ask in Search Console

Q. If a particular URL ranks in the People Also Ask feature, is that counted as a click or an impression? How is this feature tracked in Search Console? If users click on other questions and your result then gets added to that list, how is that monitored or calculated?

  • (01:39) Whenever there is a URL shown in the search results, we try to show that in Search Console. With the People Also Ask sections, if you expand one, it shows a link to your site with that question and the snippet. Whenever someone expands that People Also Ask section and sees the URL for your site, that is counted as an impression — it doesn’t matter whether you’re the first one in the People Also Ask, or whether people clicked 15 different ones before yours showed up. And when a click happens, that gets counted as a click. The ranking position is based on the overall position of that group of elements. The only thing to watch out for in Search Console is that the performance report shows the ranking of the topmost item, so if a URL is listed multiple times on a search results page, it tracks the position of the first one.

Sub-Domain for Google Search

Q. When you have subdomains — it could be a blog, a help forum, or resources — how good is Google at recognising that the subdomain isn’t a separate site but a subsidiary of that particular domain?

  • (04:34) Specifically around the links there, I don’t think we would differentiate that much. It’s not that we would say this is an internal link or an external link; it’s essentially just that we found this link on this one page. From our point of view, those are just pages. We don’t say this is a home page, or this is a product page or a category page. We essentially just say this is a page and this is what it’s relevant for.

Search Console for New Website

Q. In Search Console I see at least one link from the site’s former owner that’s still there. Does that link still count, given that Search Console shows it?

  • (06:27) I don’t know if it counts. But the important part with Search Console and the links report is that we try to show all the links that we know of to that site. It’s not a sign that we think these are important links or that they count. In particular, things like nofollow links would still be listed. Disavowed links would still be listed. Links that we just ignore for other reasons could still be listed as well. So just because it’s listed doesn’t mean it’s actually a relevant or helpful link for the site.

Building a Mobile Site on AMP

Q. Will it still be cached by Google the same way a site with an AMP version would be, and will it appear in search results in the Google cache viewer? What happens if there is an error that invalidates the page — will it just not be shown at all in the search results?

  • (07:37) So I think you need to watch out for the difference between having pages that are purely on AMP and pages that are kind of this paired setup. And if a page is purely on AMP from our point of view it’s essentially a normal HTML page. It can also be a valid AMP page. And if it is a valid AMP page, then we can use the AMP cache and all of the AMP functionality in search as well.

Internal Links in Footer Section

Q. Is it likely to be seen as problematic by Google because the links are not contextual?

  • (09:49) For the most part that wouldn’t cause any problems. I would see these more as just links on these pages — normal internal links. For example, if you have a larger website and essentially every page is linked with every other page, there’s no real context there, so it’s hard for us to understand what the overall structure is and which of these pages is more important. Because if you’re linking to everything, then it’s like nothing is important. So that’s the kind of element I would watch out for. Whether or not the links are in the footer is, from my point of view, irrelevant. If they’re generated by a plug-in or added manually, I don’t think that matters either. I would just watch out from a structural point of view.

Hidden and Invisible Text

Q. The support doc on this topic addresses the intent to deceive or trick bots by including excessive keywords to establish topical relevance. Is all hidden text against the webmaster guidelines?

  • (14:36) I don’t think that would be problematic. Hidden text, from our point of view, is more problematic when it’s really about deceiving the search engines with regard to what is actually on a page. The extreme example would be a page about shoes with a lot of hidden text about the Olympics or something like that. Suddenly your shoe page starts ranking for these Olympic terms, but when a user goes there, there’s nothing about the Olympics — and that would be problematic from our point of view. We do a reasonable job of recognising hidden text and trying to prevent that from happening, but that’s the reason we have this element in the webmaster guidelines.

Page Performance

Q. We want to improve page performance on metrics like LCP and FID, so we want to change the loading of the description and some reviews content from synchronous to asynchronous on the user side. On the Googlebot side this content will still be synchronous. Do you think doing this will have any impact on the site from the SEO side?

  • (24:16) That’s essentially similar to the dynamic rendering setup that we used to recommend, where you use JavaScript for the client side and statically render the content for Googlebot and other search engines. From Google’s point of view it is the same content, just delivered in a slightly different way, and that’s perfectly fine.
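As a rough illustration of the client-side half of that setup, the sketch below fetches review content after the initial render. The `/api/reviews` endpoint and the response shape are placeholders (not anything from the Hangout); the idea is simply that the browser pulls the reviews in asynchronously while the version served to Googlebot still includes them in the HTML.

```ts
// Hypothetical endpoint and response shape — adjust to your own reviews API.
type Review = { author: string; text: string };

async function loadReviews(productId: string): Promise<void> {
  const container = document.querySelector<HTMLElement>("#reviews");
  if (!container) return;

  try {
    const res = await fetch(`/api/reviews?product=${encodeURIComponent(productId)}`);
    if (!res.ok) throw new Error(`Reviews request failed: ${res.status}`);
    const reviews: Review[] = await res.json();

    // Render after the main content has painted, so reviews don't compete with LCP work.
    container.innerHTML = reviews
      .map((r) => `<article><strong>${r.author}</strong><p>${r.text}</p></article>`)
      .join("");
  } catch {
    container.textContent = "Reviews are temporarily unavailable.";
  }
}

// Kick off the request once the page has finished loading.
window.addEventListener("load", () => void loadReviews("123"));
```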

Indexing, Image File Names & Ranking

Q. We’re using an intelligent CDN provider which has been replacing the image file names with unique numbers, and we’ve noticed that all the images show as 404s in Search Console; disabling the CDN would significantly degrade overall site performance. Do you care about image search? Will image alt text and captions be sufficient for Google to understand, without an appropriate image file name and title?

  • (24:59) There are two things I would look at. On the one hand, if these are images that you need to have indexed in image search, then you should definitely make sure that you have a stable file name for your images. That’s the most important element here: you don’t mention whether these numbered URLs change, but sometimes these CDNs essentially provide a session-based ID for each image. If the image URL changes every time we crawl, then we’ll essentially never be able to index those images properly, mostly because for images we tend to be a little bit slower with regard to crawling and indexing. So if we see an image once and then it’s gone, we’ll just drop that image from the image ranking — essentially, the image we thought was here is no longer here. For web search, on the other hand, we don’t need to be able to crawl and index the images, because we essentially just look at the web pages. If all of the images were 404 all the time, or blocked by robots.txt, we would still treat that page exactly the same as if we were able to index all those images.
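One way to keep image URLs stable while still using a CDN — purely a sketch of the "stable file name" advice above, not something prescribed in the Hangout — is to derive the file name from the image's content rather than from a session or request ID, so the URL only changes when the image itself changes.

```ts
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";
import { extname } from "node:path";

// Build a content-based file name: identical bytes always map to the same URL,
// so Googlebot sees a stable image URL between crawls.
function stableImageName(filePath: string): string {
  const digest = createHash("sha256").update(readFileSync(filePath)).digest("hex");
  return `${digest.slice(0, 16)}${extname(filePath)}`;
}

// e.g. "3fa1c2d4e5b6a7c8.jpg" — the same output on every run until the file changes.
console.log(stableImageName("hero-banner.jpg"));
```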

SafeSearch Filtering

Q. We’re still getting flagged for explicit content. Is there a way to get this resolved with Google? How do we go about it?

  • (28:22) For the most part, this kind of understanding of which content we would filter with SafeSearch is handled automatically. If we recognise that a site has changed significantly, then we will treat it differently in the search results as well — we will remove that SafeSearch flag for the site or for those pages. Depending on when you made this change, it might be that you just need to be a little bit more patient and it will settle down properly.

Same Keyword on Two Different Pages

Q. Is it okay to target the same main keyword on those two different pages?

  • (29:51) You can target whatever keywords you want — from our point of view we’re not going to hold you back, and there’s no guideline that says you should not do this. It’s more that if you have multiple pieces of content ranking for the same query with the same intent, then you’re essentially diluting the value of the content you’re providing across multiple pages. That could mean that these individual pages are not that strong when it comes to competing with other people’s websites. If you have two pages that both target the same keyword but with different intents, then that seems reasonable, because people might be searching for that keyword with extra text added for one intent or extra text added for the other intent. They’re essentially unique pages.

SEO Rankings, Filtering & Sorting

Q. How does Google treat these pages within the website, and how do they affect the overall ranking in search results? Is it enough to submit sitemaps for ranking, or should we take additional steps to help Googlebot gather all of the returnable URLs?

  • (37:58) On the last question: I would not rely on sitemaps to find all of the pages of your website. Sitemaps should be a way to give additional information about your website. Internal linking is super important and something you should definitely watch out for. Make sure that things are set up so that when someone crawls your website they’re able to find your content without relying on the sitemap file.
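To make the "additional information" point concrete, here's a minimal sketch of generating a sitemap file from a list of canonical URLs. The URLs and output path are invented for illustration; the sitemap supplements crawling, while internal links remain the primary way Googlebot should discover these pages.

```ts
import { writeFileSync } from "node:fs";

// Hypothetical list of canonical, indexable URLs — in practice pulled from your CMS or router.
const urls = [
  "https://www.example.com/",
  "https://www.example.com/collections/boots",
  "https://www.example.com/products/leather-work-boot",
];

const sitemap =
  '<?xml version="1.0" encoding="UTF-8"?>\n' +
  '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
  urls.map((u) => `  <url><loc>${u}</loc></url>\n`).join("") +
  "</urlset>\n";

// Serve this at /sitemap.xml and reference it from robots.txt; it supplements,
// rather than replaces, internal linking.
writeFileSync("sitemap.xml", sitemap);
```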

Poor Core Web Vitals

Q. My website had a drop in visitors due to poor Core Web Vitals. What about the Page Experience ranking rollout to desktop? How important is it compared to the other ranking factors?

  • (42:40) The Page Experience ranking factor essentially gives us a little bit of extra information about the different pages that could show up in the search results. In situations where we have a strong, clear intent from the query — where we can understand that the user really wants to go to this particular website — we can ease off using Page Experience as a ranking factor. If all of the content in the search results page is very similar, then Page Experience probably helps a little bit to understand which of these are fast, reasonable pages with regard to the user experience and which are less reasonable pages to show in the search results. As for the desktop rollout, I believe this is going to be a slower rollout over the course of something like a month, which means you would not see a strong effect from one day to the next, but rather over a period of time.

Ranking Signals

Q. How important is it compared to other ranking signals?

  • (44:52) Websites would not see a big visible change when it comes to the Core Web Vitals. Even if your website goes from being kind of reasonable to being in that poor bucket in the Core Web Vitals from one day to the next I would not expect to see that as a kind of a giant change in the search results. Maybe changing a few positions seems kind of the right change there. But I would not see it as a page going from ranking number 2 to ranking number 50. If you are seeing a drastic change like that I would not focus on purely Core Web Vitals. I would step back and look at the overall picture and see what else could be involved and try to figure out what you can do to improve things overall.

Validating & Building Schema

Q. I’m building out schema and doing it exactly like it should be done. But when I run it through Google’s Rich Results Test it doesn’t validate, and when I go to Google Search Console it’s not showing up. What should I do to actually get this to validate?

  • (49:00) There are two main things that play a role. On the one hand, the schema.org validator is set up to validate the theoretical schemas that you can provide with schema.org. The validator in Search Console is based purely on functionality that has a visible effect in Google Search, and that’s usually a very small subset of the bigger schema.org set of things that you can mark up. For example, if you are marking things up that don’t have a visible effect in the search results — in terms of, say, stars or video or something like that — then Search Console would say it doesn’t see anything here, which might be the category you’re running into. The other thing that sometimes plays a role is that, for elements that do have a visible effect, the requirements on schema.org are not always the same as in Google Search. Schema.org has some required properties and some optional properties and validates based on those; Google Search sometimes has a stricter set of requirements, which we document in our Help Center as well. So, from our point of view, if Google doesn’t show it in the search results then we would not show it in the testing tool, and if the requirements are different — Google’s requirements being stricter — and you don’t follow those guidelines, then we would also flag that.
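For context, structured data of this kind is usually added as a JSON-LD block. The sketch below injects a minimal VideoObject; the property values are invented, and which properties Google actually requires for a rich result is documented in its Search Central help pages, so treat this as a shape rather than a guaranteed-to-validate example.

```ts
// Minimal VideoObject markup — values are placeholders; Google's own documentation
// lists the required and recommended properties for the video rich result.
const videoJsonLd = {
  "@context": "https://schema.org",
  "@type": "VideoObject",
  name: "How to lace work boots",
  description: "A two-minute walkthrough of three lacing patterns.",
  thumbnailUrl: "https://www.example.com/thumbs/lacing.jpg",
  uploadDate: "2022-02-01",
};

// Inject the JSON-LD into the page head so testing tools can pick it up.
const script = document.createElement("script");
script.type = "application/ld+json";
script.textContent = JSON.stringify(videoJsonLd);
document.head.appendChild(script);
```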

Sign up for our Webmaster Hangouts today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

WebMaster Hangout – Live from February 18, 2022

Indexing API

Q. Using the API for things other than job postings and broadcast events doesn’t bring any benefits.

  • (06:49) The person asking the question is interested in whether the API can be used for pages other than job postings and broadcast events — for example, news articles and blogs. John says that people try doing that, but essentially what Google has documented is what it uses the API for. If the content doesn’t fall into those categories, then the API isn’t really going to help, but trying it won’t negatively affect the content.
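For reference, an Indexing API notification for an eligible page looks roughly like the sketch below. It assumes you already have an OAuth 2.0 access token for a service account with the indexing scope, and the URL is a placeholder.

```ts
// Assumes ACCESS_TOKEN holds an OAuth 2.0 token for a service account
// authorised for the https://www.googleapis.com/auth/indexing scope.
const ACCESS_TOKEN = process.env.ACCESS_TOKEN;

async function notifyUrlUpdated(url: string): Promise<void> {
  const res = await fetch("https://indexing.googleapis.com/v3/urlNotifications:publish", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ url, type: "URL_UPDATED" }),
  });
  console.log(res.status, await res.json());
}

// Placeholder URL — only job posting and broadcast event pages are documented as eligible.
void notifyUrlUpdated("https://www.example.com/jobs/warehouse-assistant");
```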

Unlinked Brand Mention

Q. An unlinked brand mention doesn’t really affect SEO.

  • (12:01) When there is an article that mentions a website without linking to it, that is not a bad thing. It is only a little inconvenient for users, because they have to search for the website being mentioned. Otherwise, John says he wouldn’t assume there’s some SEO factor trying to figure out where someone is mentioning the website’s name.

User Reviews and Comments

Q. Non-spammy comments help Google understand the page a little better.

  • (12:58) A really useful thing about the user comments is that oftentimes people will write about the page in their own words, and that gives Google a little bit more information on how it can show this page in the search results. From that point of view, comments are a good thing on a page. Obviously, finding a way to maintain them in a reasonable way is something tricky, because people also spam those comments, and all kinds of crazy stuff happens in them. But overall once a way to maintain comments on a web page is found, that gives the page a little bit more context and helps people who are searching in different ways to also find this content. 

SSL Certificates

Q. Any kind of valid SSL certificate works fine for Google.

  • (14:03) Different types of SSL certificates are not important for SEO, and free SSL certificates are perfectly fine. The choice between certificate types is more a matter of what you want from the certificate. From Google’s point of view, it just checks whether the certificate is valid or not.

Multiple Job Postings

Q. Posting the same job posting in different subdomains from the same root domain is fine.

  • (14:47) John assumes that posting the same job posting in different subdomains with the same job posting data structure is perfectly fine, because it’s very common to have the same job posted on different websites, and those different websites could have structured data on them as well. From that point of view, just having it posted different times on the same website or in different subdomains should be perfectly fine as well. However, John mentions that he doesn’t know all the details of all the guidelines around job postings, so it might be that there’s some obscure mention that it should be only listed once on each website. 
  • John also says that usually Google tries to de-dupe different listings, and that is done for all kinds of listings. So if it’s an image or if it’s a web page, or anything else, if Google can recognise that it’s the same primary content, it will try to just show it once. He assumes the same rule applies for Google Jobs.

Internal Duplicate Content

Q. Having the same content as a PDF on the website and as a blog article is not viewed as duplicate content.

  • (17:10) The person asking the question has put a piece of content on her website as a PDF file and wants to present the same content as an HTML blog article on the same website, and is worried that it might be viewed as duplicate content. John assures her that Google wouldn’t see it as duplicate content, because it’s different content: one is an HTML page, one is a PDF. Even if the primary piece of content is the same, everything around it is different, so at that level Google wouldn’t see it as duplicate content. At most, the difficulty might be that both of these can show up in the search results at the same time. From an SEO point of view that is not necessarily a bad thing, but there may be strategic reasons to have either the PDF or the HTML page more visible.

Paginated Content

Q. In paginated content Google views first pages of content as more important than pages that come after.

  • (19:39) The person asking the question has a website with discussions, where a thread can have too many comments to show them all on one long page. He wants to paginate the content but is not sure whether the newest comments should appear on the first page or on the last pages. John says that is ultimately up to the site owner and which comments he wants to prioritise. John assumes that if something is on page four, then Google would have to crawl pages one, two, and three first to find it, and usually that would mean it’s further away from the main part of the website. From Google’s point of view, what would probably happen is that Google wouldn’t give it that much weight and probably wouldn’t recrawl that page as often. Whereas if the newest comments should be the most visible ones, then maybe it makes sense to reverse the order and show them differently, because if the new comments are right on the main page, it’s a lot easier for Google to recrawl that page more often and to give them a little bit more weight in the search results. It’s up to the website owner how to balance that.

Number of Pages

Q. Google doesn’t have a specific ratio on how many pages or how many indexable pages a website should have.

  • (28:48) From Google’s point of view, there’s no specific ratio that Google would call out for how many pages a website should have or how many indexable pages a website should have. That’s ultimately up to a website owner. What John says he tends to see is that fewer pages tend to perform better in the sense that if the value of the content is concentrated on fewer pages, then, in general, those few pages tend to be a lot stronger than if the content was to be diluted across a lot of different pages. That plays across the board, in the sense that, from a ranking point of view, Google can give these pages a little bit more weight. From a crawling point of view, it’s easier for Google to keep up with these. So especially if it’s a new website, John recommends starting off small focusing on something specific and then expanding from there, and not just going in and creating 500,000 pages that Google needs to index. Because especially for a new website, when it starts off with a big number of pages, then chances are, Google will just pick up a small sample of those pages, and whether or not that small sample is the pages most important to the website is questionable.

Referring Pages

Q. It is perfectly fine if the URLs referring to the pages of the website come from long-retired microsite domains.

  • (30:36) John says that URLs from long-retired microsite domains referring to the important pages on the website are not bothersome at all. So in particular, the referring page in the Inspection Tool is where Google first sees the mention of the pages, and if it first sees them on some random website, then that’s just where it saw them. That is what is listed there, it doesn’t mean that there’s anything bad with those pages. From Google’s point of view, that’s purely a technical thing. It’s not a sign that there is a need to make sure that the pages were first found on some very important part of the website. If the pages are indexed, that’s the important part there. Referring page is useful when there’s a question on how Google even found the page, or where it comes from. If there are weird URL parameters in there, or if there’s something really weird in the URL that Google should never have found in the first place, then looking at the referring URL is something that helps to figure out where this actually comes from. 

A Drop In Crawl Stats

Q. There are several things that Google takes into account when deciding on the amount of crawling it does on a website.

  • (35:09) On the one hand, Google tries to figure out how much it needs to crawl from a website to keep things fresh and useful in its search results. That relies on understanding the quality of the website, how things change on the website. Google calls it the crawl demand. On the other hand, there are the limitations that Google sees from the server, from the website, from the network infrastructure with regard to how much can be crawled on a website. Google tries to balance these two.
  • The restrictions tend to be tied to two main things – the overall response time to requests to the website and the number of errors, specifically, server errors, that can be seen during crawling. If Google sees a lot of server errors, then it will slow down crawling, because it doesn’t want to cause more problems. If it sees that the server is getting slower, then it will also slow down crawling. So those are the two main things that come into play there. The difficulty with the speed aspect is that there are two ways of looking at speed. Sometimes that gets confusing when looking at the crawl rate.
  • Specifically for the crawl rate, Google just looks at how quickly it can request a URL from the server. The other aspect of speed is everything around Core Web Vitals and how quickly a page loads in a browser. The speed that it takes in a browser tends not to be related directly to the speed that it takes for Google to fetch an individual URL on a website, because in a browser the JavaScript needs to be processed, external files need to be pulled in, content needs to be rendered, and all of the positions of the elements on the page need to be recalculated. That takes a different amount of time than just fetching that URL. That’s one thing to watch out for.
  • When trying to diagnose a change in crawl rate, there’s no need to look at how long it takes for a page to render; instead, it’s better to look purely at how long it takes to fetch that URL from the server (a rough timing sketch follows this list).
  • The other thing that comes in here as well – is that, from time to time – depending on what is done on the website, Google tries to understand where the website is actually hosted. If Google recognises that a website is changing hosting from one server to a different server – that could be to a different hosting provider, that could be moving to a CDN, or changing CDNs, anything like that – Google’s systems will automatically go back to some safe rate where it knows that it won’t cause any problems, and then, step by step, increase again.
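As a quick way to spot-check the number that actually matters for crawl rate — raw fetch time from the server rather than in-browser rendering time — something like the sketch below can be run from a few locations. The URL is a placeholder, and this is only a rough check, not how Google measures it.

```ts
// Time a plain HTTP fetch of a URL — roughly the "how long to fetch this URL"
// figure relevant to crawl rate, not the in-browser rendering time behind Core Web Vitals.
async function timeFetch(url: string): Promise<void> {
  const start = performance.now();
  const res = await fetch(url, { redirect: "manual" });
  await res.arrayBuffer(); // read the body so transfer time is included
  const ms = Math.round(performance.now() - start);
  console.log(`${url} -> HTTP ${res.status} in ${ms} ms`);
}

await timeFetch("https://www.example.com/");
```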

Link Juice

Q. Internal linking is a great way to tell Google which pages on the website are important.

  • (46:01) “Link Juice” is always one of those terms where people have very conflicting feelings about it because it’s not really how Google’s systems look at it. With regard to internal linking, this is one of the most important elements of a website because it is a great way to tell Google what is considered to be important on the pages. Most websites have a home page that is seen as the most important part of the website, and especially links that can be provided from those important pages to other pages that are thought to be important – that’s really useful for Google. It can be that these are temporary links too. For example, if there’s an e-commerce site, and a new product is linked to from the home page, then that’s a really fast way for Google to recognise those new products, to crawl and index them as quickly as possible, and to give them a little bit of extra weight. But of course, if those links are removed, then that internal connection is gone as well. With regard to how quickly that is picked up, that’s essentially picked up immediately as soon as Google recrawls and reindexes those pages.

Crawling

Q. There are a couple of ways to go around “discovered, not indexed” URLs.

  • (58:21) John says Google finds all kinds of URLs across the web, and a lot of those URLs don’t need to be crawled and indexed — maybe they’re just variations of URLs Google already knows, or maybe some random forum or scraper script has copied URLs from the website and included them in a broken way. Google finds these linked all the time, so it’s very normal to have a lot of URLs that are either crawled and not indexed, or discovered and not crawled, simply because there are so many different sources of URLs across the web. John suggests first downloading a sample list of those URLs, so that it’s possible to look at individual examples and classify which of them are actually important and which can be ignored. Anything that looks really weird as a URL is better ignored. For the ones that are important, it’s worth figuring out how to tie them better into the website, for example through internal linking.
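A rough way to do that first triage — entirely a sketch, with a made-up file name and URL patterns — is to export the affected URLs and bucket them by pattern, so the genuinely important ones can be prioritised for internal linking.

```ts
import { readFileSync } from "node:fs";

// Assumes a plain-text export with one URL per line (e.g. copied from the
// Search Console page indexing report).
const urls = readFileSync("discovered-not-indexed.txt", "utf8").split("\n").filter(Boolean);

// Hypothetical "safe to ignore" patterns — tune these to the site's own URL structure.
const ignorable = [/\?sessionid=/i, /\/tag\//, /\?replytocom=/, /\/page\/\d{3,}$/];

const important: string[] = [];
const noise: string[] = [];
for (const url of urls) {
  (ignorable.some((re) => re.test(url)) ? noise : important).push(url);
}

console.log(`${important.length} URLs worth tying into internal linking, ${noise.length} safe to ignore.`);
```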

Sign up for our Webmaster Hangouts today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH