July 2022 - Premium eCommerce marketing services

Shopify Announces Launch of YouTube Shopping

Shopify announced the launch of YouTube shopping this week, outlining benefits including:

  • Customers can buy products right when they discover them
  • Instantly sync products to YouTube
  • Create authentic live shopping experiences
  • Sell on YouTube, manage on Shopify

What does this mean for our clients?

There are some eligibility restrictions for this product at the moment. You must already have at least 1,000 subscribers to your YouTube channel and at least 4,000 hours of watch time over the past 12 months. This means that, as a brand, you will either need an already well-established YouTube channel or need to start working with content creators who have one.

Consider content creators who align with your brand or category and research their channels and content. There are specific websites and agencies that can help source content creators for a fee, including theright.fit and hypeauditor.com.

YOUTUBE FOR ACTION WITH PRODUCT FEEDS

For clients who don’t meet the eligibility requirements but still want to explore video for retail, there is another option. YouTube for Action campaigns allow us to promote videos on the YouTube network and attach a product feed through Google Merchant Centre, creating a virtual shop front for the viewer with easy “shop now” functionality.

This powerful format allows brands to generate both awareness of and engagement with their brand, whilst also driving bottom-line sales. It can be managed through your Google Ads account, allowing you to optimise towards the same conversions and use the same audience signals as your other Google campaigns.

What is YouTube for Action?

Previously named TrueView for Action, this product allows users to buy video ads on the YouTube network which are optimised towards a performance goal rather than pure reach or video views.

You can optimise towards:

  • Website traffic
  • Leads
  • Sales/Purchases

And have the option to choose your bid strategy based on:

  • Cost per View
  • Cost per Action
  • Maximise Conversions
  • Cost per thousand impressions

Who can I target?

YouTube and Google’s shared data provide a wealth of information to help us build audience segments that will fit your brand and services. The options include but are not limited to:

  • Demographic targeting: Age, gender, and location, based on signed-in user data
  • Affinity audiences: Pre-defined interest, hobby, and behavioural segments based on users’ browsing history
  • In-market audiences: Users deemed to be “in-market” for a product or service based on their search behaviour and browsing history
  • Life events: Based on what a user is actively researching or planning, e.g. graduation, retirement, etc.
  • Topics: Align your ads with video content on similar themes across the YouTube network
  • Placement: Align your ads with specific YouTube channels, specific websites, or content on those channels/websites
  • Keyword: Similarly to search, build portfolios of keywords to target specific themes on YouTube

The team at LION will work with you to select and define the right audiences to test and optimise to get the best results.

What content should I use?

Like any piece of content, there is no right or wrong answer, and what works for some brands may not for others. Your video should align with your brand tone of voice and guidelines. 

Think about what action you want the users to take and ensure the video aligns with this, e.g. if you want users to buy a specific product, show the product in the video and talk about its benefits. Testing multiple types of video content is the best way to learn about what your potential customers like and do not like.

What do I need to get started?

  1. At least one video uploaded to YouTube (we recommend 30 seconds in length)
  2. A Google Merchant Centre account and a Google Ads account
  3. A testing budget of at least $1,000

YOU CAN CHAT WITH THE TEAM AT LION DIGITAL AND WE CAN HELP YOU TO SELECT AND DEFINE THE RIGHT AUDIENCES TO TEST AND OPTIMISE TO GET THE BEST RESULTS

LION stands for Leaders In Our Niche. We pride ourselves on being true specialists in each eCommerce marketing channel. LION Digital has a team of certified experts, led by a head of department with 10+ years of experience in eCommerce and SEM. We follow an ROI-focused approach to paid search, backed by seamless coordination and detailed reporting, helping our clients meet their goals.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

Leonidas Comino – Founder & CEO

Leo is a Deloitte award-winning and Forbes-published digital business builder with over a decade of success in the industry, working with market-leading brands.

Like what we do? Come work with us

WEBMASTER HANGOUT – LIVE FROM JULY 01, 2022

Which number is correct, Page Speed Insights or Search Console?

Q: (00:30) Starting off, I have one topic that has come up repeatedly recently, and I thought I would try to answer it in the form of a question while we’re at it here. So, first of all, when I check my page speed insight score on my website, I see a simple number. Why doesn’t this match what I see in Search Console and the Core Web Vitals report? Which one of these numbers is correct?

  • (01:02) I think maybe, first of all, to get the obvious answer out of the door, there is no correct number when it comes to speed when it comes to an understanding of how your website is performing for your users. In PageSpeed Insights, by default, I believe we show a single number that is a score from 0 to 100, something like that, which is based on a number of assumptions where we assume that different things are a little bit faster or slower for users. And based on that, we calculate a score. In Search Console, we have the Core Web Vitals information based on three numbers: speed, responsiveness, and interactivity. And these numbers are slightly different because it’s three numbers, not just one. But, also, there’s a big difference in the way these numbers are determined. Namely, there’s a difference between so-called field data and lab data. Field data is what users see when they go to your website. And this is what we use in Google Search Console. That’s what we use for search, as well, whereas lab data is a theoretical view of your website, like where our systems have certain assumptions where they think, well, the average user is probably like this, using this kind of device, and with this kind of a connection, perhaps. And based on those assumptions, we will estimate what those numbers might be for an average user. And you can imagine those estimations will never be 100% correct. And similarly, the data that users have seen will change over time, as well, where some users might have a really fast connection or a fast device, and everything goes really fast on their website or when they visit your website, and others might not have that. And because of that, this variation can always result in different numbers. Our recommendation is generally to use the field data, the data you would see in Search Console, as a way of understanding what is kind of the current situation for our website, and then to use the lab data, namely, the individual tests that you can run yourself directly, to optimise your website and try to improve things. And when you are pretty happy with the lab data you’re getting with your new version of your website, then over time, you can collect the field data, which happens automatically, and double-check that users see it as being faster or more responsive, as well. So, in short, again, there is no absolutely correct number when it comes to any of these metrics. There is no absolutely correct answer where you’d say this is what it should be. But instead, there are different assumptions and ways of collecting data, and each is subtly different.
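If you want to see the field side of this for yourself, one hedged option (assuming you use Google's open-source web-vitals library, whose v3 API exposes onLCP, onCLS, and onFID) is to log real-user measurements directly in the page. This is only a minimal sketch of collecting your own field data, not how Google itself gathers the data used in Search Console:

```html
<!-- Collect real-user ("field") Core Web Vitals with the web-vitals library -->
<script type="module">
  import {onLCP, onCLS, onFID} from 'https://unpkg.com/web-vitals@3?module';

  // Each callback fires with the metric measured for this real visit;
  // in practice you would send these values to your analytics endpoint.
  onLCP(console.log);
  onCLS(console.log);
  onFID(console.log);
</script>
```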

How can our JavaScript site get indexed better?

Q: (04:20) So, first up, we have a few custom pages using Next.js without a robots.txt or a sitemap file. Simplified, theoretically, Googlebot can reach all of these pages, but why is only the homepage getting indexed? There are no errors or warnings in Search Console. Why doesn’t Googlebot find the other pages?

  • (04:40) So, maybe taking a step back, Next.js is a JavaScript framework, meaning the whole page is generated with JavaScript. But as a general answer, as well, for all of these questions like, why is Google not indexing everything? It’s important first to say that Googlebot will never index everything across a website. I don’t think it happens to any kind of non-trivial-sized website where Google would completely index everything. From a practical point of view, it’s impossible to index everything across the web. So that kind of assumption that the ideal situation is everything is indexed, I would leave that aside and say you want Googlebot to focus on the important pages. The other thing, though, which became a little bit clearer when, I think, the person contacted me on Twitter and gave me a little bit more information about their website, was that the way that the website was generating links to the other pages was in a way that Google was not able to pick up. So, in particular, with JavaScript, you can take any element on an HTML page and say, if someone clicks on this, then execute this piece of JavaScript. And that piece of JavaScript can be used to navigate to a different page, for example. And Googlebot does not click on all elements to see what happens. Instead, we go off and look for normal HTML links, which is the kind of traditional way you would link to individual pages on a website. And, with this framework, it didn’t generate these normal HTML links. So we could not recognise that there’s more to crawl and more pages to look at. And this is something that you can fix in how you implement your JavaScript site. We have a tonne of information on the Search Developer Documentation site around JavaScript and SEO, particularly on the topic of links because that comes up now and then. There are many creative ways to create links, and Googlebot needs to find those HTML links to make them work. Additionally, we have a bunch of videos on our YouTube channel. And if you’re watching this, you must be on the YouTube channel since nobody is here. If you’re watching this on the YouTube channel, go out and check out those JavaScript SEO videos on our channel to get a sense of what else you could watch out for when it comes to JavaScript-based websites. We can process most kinds of JavaScript-based websites normally, but some things you still have to watch out for, like these links.
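As a hedged illustration of the difference John is describing (the URL and markup below are made up), Googlebot can discover the first link but not the second, because the second has no href to extract:

```html
<!-- Crawlable: a normal HTML link with a resolvable href -->
<a href="/collections/new-arrivals">New arrivals</a>

<!-- Not reliably crawlable: navigation only happens inside a JavaScript
     click handler, so there is no href for Googlebot to follow -->
<span onclick="window.location.href = '/collections/new-arrivals'">New arrivals</span>
```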

Does it affect my SEO score negatively if I link to HTTP pages?

Q: (07:35) Next up, does it affect my SEO score negatively if my page is linking to an external insecure website?

  • (07:44) So on HTTP, not HTTPS. So, first off, we don’t have a notion of an SEO score. So you don’t have to worry about the kind of SEO score. But, regardless, I kind of understand the question is, like, is it wrong if I link to an HTTP page instead of an HTTPS page. And, from our point of view, it’s perfectly fine. If these pages are on HTTP, then that’s what you would link to. That’s kind of what users would expect to find. There’s nothing against linking to sites like that. There is no downside for your website to avoid linking to HTTP pages because they’re kind of old or crusty and not as cool as on HTTPS. I would not worry about that.

Q: (08:39) With semantic and voice search, is it better to use proper grammar or write how people actually speak? For example, it’s grammatically correct to write, “more than X years,” but people actually say, “over X years,” or write a list beginning with, “such as X, Y, and Z,” but people actually say, “like X, Y, and Z.”

  • (09:04) Good question. So the simple answer is, you can write however you want. There’s nothing holding you back from just writing naturally. And essentially, our systems try to work with the natural content found on your pages. So if we can crawl and index those pages with your content, we’ll try to work with that. And there’s nothing special that you need to do there. The one thing I would watch out for, with regards to how you write your content, is just to make sure that you’re writing for your audience. So, for example, if you have some very technical content, but you want to reach people who are non-technical, then write in the non-technical language and not in a way that is understandable to people who are deep into that kind of technical information. So kind of the, I would guess, the traditional marketing approach of writing for your audience. And our systems usually are able to deal with that perfectly fine.

Should I delete my disavow file?

Q: (10:20) Next up, a question about links and disavows. Over the last 15 years, I’ve disavowed over 11,000 links in total. I never bought a link or did anything unallowed, like sharing. The links that I disavowed may have been from hacked sites or from nonsense, auto-generated content. Since Google now claims that they have better tools to not factor these types of hacked or spammy links into their algorithms, should I just delete my disavow file? Is there any risk or upside, or downside to just deleting it?

  • (10:54) So this is a good question. It comes up now and then. And disavowing links is always kind of one of those tricky topics because it feels like Google is probably not telling you the complete information. But, from our point of view, we do work hard to avoid taking this kind of link into account. And we do that because we know that the disavow links tool is a niche tool, and SEOs know about it, but the average person who runs a website doesn’t know about it. And all those links you mentioned are the links that any website gets over the years. And our systems understand that these are not things you’re trying to do to game our algorithms. So, from that point of view, if you’re sure that there’s nothing around a manual action that you had to resolve with regards to these links, I would just delete the disavow file and move on with life and leave all of that aside. I would personally download it and make a copy so that you have a record of what you deleted. But, otherwise, if you’re sure these are just the normal, crusty things from the internet, I would delete it and move on. There’s much more to spend your time on when it comes to websites than just disavowing these random things that happen to any website on the web.

Can I add structured data with Google Tag Manager?

Q: (12:30) Is adding schema markup with Google Tag Manager good or bad for SEO? Does it affect ranking?

  • (12:33) So, first of all, you can add structured data with Google Tag Manager. That’s an option. Google Tag Manager is a piece of JavaScript that you add to your pages, and it can modify your pages slightly using JavaScript. For the most part, we’re able to process this normally, and structured data added that way generally counts just like any other structured data on your web pages. And, from our point of view, structured data, at least the types that we have documented, is primarily used to help generate what we call rich results, which are these fancy search results with a little bit more information, a little bit more colour or detail around your pages. And if you add your structured data with Tag Manager, that’s perfectly fine. From a practical point of view, I prefer to have the structured data on the page or your server so that you know exactly what is happening. It makes it a little bit easier to debug things. It makes it easier to test things. So trying it out with Tag Manager, from my point of view, I think, is legitimate. It’s an easy way to try things out. But, in the long run, I would try to make sure that your structured data is on your site directly, just to make sure that it’s easier to process for anyone who comes by to process your structured data and it’s easier for you to track, debug, and maintain over time, as well, so that you don’t have to check all of these different separate sources.
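For reference, this is roughly what "structured data on the page directly" looks like, as a minimal sketch; the product details are placeholders, and the same JSON-LD could alternatively be injected by Tag Manager:

```html
<!-- Product structured data served directly in the page source -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Running Shoe",
  "image": "https://www.example.com/images/running-shoe.jpg",
  "offers": {
    "@type": "Offer",
    "price": "129.00",
    "priceCurrency": "AUD",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```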

Is it better to block by robots.txt or with the robots meta tag?

Q: (14:20) Simplifying a question a little bit, which is better, blocking with robots.txt or using the robots meta tag on the page? How do we best prevent crawling? 

  • (14:32) So this also comes up from time to time. We did a podcast episode recently about this, as well. So I would check that out. The podcasts are also on the YouTube channel, so you can click around a little bit, and you’ll probably find them quickly. In practice, there is a subtle difference here where, if you’re in SEO and you’ve worked with search engines, then probably you understand that already. But for people who are new to the area, it’s sometimes unclear exactly where these lines are. With robots.txt, which is the first one you mentioned in the question, you can essentially block crawling. So you can prevent Googlebot from even looking at your pages. And with the robots meta tag, you can do things like blocking indexing, when Googlebot looks at your pages and sees that robots meta tag. In practice, both of these result in your pages not appearing in the search results, but they’re subtly different. So if we can’t crawl, we don’t know what we’re missing. And it might be that we say, well, there are many references to this page. Maybe it is useful for something. We just don’t know. And then that URL could appear in the search results without any of its content because we can’t look at it. Whereas with the robots meta tag, if we can look at the page, then we can look at the meta tag and see if there’s a noindex there, for example. Then we stop indexing that page and drop it completely from the search results. So if you’re trying to block crawling, then definitely, robots.txt is the way to go. If you just don’t want the page to appear in the search results, I would pick whichever is easier for you to implement. On some sites, it’s easier to set a checkbox saying that I don’t want this page found in Search, and then it adds a noindex meta tag. For others, maybe editing the robots.txt file is easier. Kind of depends on what you have there.
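As a small sketch of the on-page option John mentions (the robots.txt alternative would instead be a Disallow rule served at the root of the site), the noindex robots meta tag lets Googlebot crawl the page but keeps it out of the index:

```html
<!-- Crawling is allowed, but the page is kept out of the index -->
<head>
  <meta name="robots" content="noindex">
</head>
```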

Q: (16:38) Are there any negative implications to having duplicate URLs with different attributes in your XML sitemaps? For example, one URL in one sitemap with an hreflang annotation and the same URL in another sitemap without that annotation.

  • (16:55) So maybe, first of all, from our point of view, this is perfectly fine. This happens now and then. Some people have hreflang annotations in separate sitemap files, and then they have a normal sitemap file for everything. And there is some overlap there. From our point of view, we process these sitemap files as we can, and we take all of that information into account. There is no downside to having the same URL in multiple sitemap files. The only thing I would watch out for is that you don’t have conflicting information in these sitemap files. So, for example, if with the hreflang annotations, you’re saying, oh, this page is for Germany, and then in the other sitemap file, you’re saying, well, actually this page is also for France or in French, then our systems might be like, well, what is happening here? We don’t know what to do with this kind of mix of annotations. And then we may pick one or the other. Similarly, if you say this page was last changed 20 years ago, which doesn’t make much sense, but say you say 20 years, and in the other sitemap file, you say, well, actually, it was five minutes ago, then our systems might look at that and say, well, one of you is wrong. We don’t know which one. Maybe we’ll follow one or the other. Maybe we’ll just ignore that last modification date completely. So that’s kind of the thing to watch out for. But otherwise, if a URL is just mentioned in multiple sitemap files and the information is either consistent or works together, in that maybe one has the last modification date and the other has the hreflang annotations, that’s perfectly fine.

How can I block embedded video pages from getting indexed?

Q: (19:00) I’m in charge of a video replay platform, and simplified, our embeds are sometimes indexed individually. How can we prevent that?

  • (19:10) So by embeds, I looked at the website, and basically, these are iframes that include a simplified HTML page with a video player embedded. And, from a technical point of view, if a page has iframe content, then we see those two HTML pages. And it is possible that our systems indexed both HTML pages because they are separate. One is included in the other, but they could theoretically stand on their own, as well. And there’s one way to prevent that, which is a reasonably new combination with robots meta tags that you can do, which is with the indexifembedded robots meta tag and a noindex robots meta tag. And, on the embedded version, so the HTML file with the video directly in it– you would add the combination of noindex plus indexifembedded robots meta tags. And that would mean that, if we find that page individually, we would see, oh, there’s a noindex. We don’t have to index this. But with the indexifembedded, it essentially tells us that, well, actually, if we find this page with the video embedded within the general website, then we can index that video content, which means that the individual HTML page would not be indexed. But the HTML page embedded with the video information would be indexed normally. So that’s kind of the setup that I would use there. And this is a fairly new robots meta tag, so it’s something that not everyone needs. Because this combination of iframe content or embedded content is kind of rare. But, for some sites, it just makes sense to do it like that.
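A minimal sketch of the combination described above, assuming a hypothetical standalone player page at /embed/video-123 that is iframed into a normal article page; indexifembedded is Google-specific, so it is shown on the googlebot meta tag here:

```html
<!-- On the standalone embed page (e.g. /embed/video-123) -->
<head>
  <meta name="googlebot" content="noindex, indexifembedded">
</head>

<!-- On the parent article page that embeds the player -->
<iframe src="/embed/video-123" title="Product demo video"></iframe>
```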

Q: (21:15) Another question about HTTPS, maybe. I have a question around preloading SSL via HSTS. We are running into an issue with implementing HSTS and the Google Chrome preload list. And the question kind of goes on with a lot of details. But what should we search for?

  • (21:40) So maybe take a step back when you have HTTPS pages and an HTTP version. Usually, you would redirect from the HTTP version to HTTPS. And the HTTPS version would then be the secure version because that has all of the properties of the secure URLs. And the HTTP version, of course, would be the open one or a little bit vulnerable. And if you have this redirect, theoretically, an attacker could take that into account and kind of mess with that redirect. And with HSTS, you’re telling the browser that once they’ve seen this redirect, it should always expect that redirect, and it shouldn’t even try the HTTP version of that URL. And, for users, that has the advantage that nobody even goes to the HTTP version of that page anymore, making it a little more secure. And the pre-load list for Google Chrome is a static list that is included, I believe, in Chrome probably in all of the updates, or I don’t know if it’s downloaded separately. Not completely sure. But, essentially, this is a list of all of these sites where we have confirmed that HSTS is set up properly and that redirect to the secure page exists there so that no user ever needs to go to the HTTP version of the page, which makes it a little bit more secure. From a practical point of view, this difference is very minimal. And I would expect that most sites on the internet just use HTTPS without worrying about the pre-load list. Setting up HSTS is always a good practice, but it’s something that you can do on your server. And as soon as the user sees that, their Chrome version keeps that in mind automatically anyway. So from a general point of view, I think using the pre-load list is a good idea if you can do that. But if there are practical reasons why that isn’t feasible or not possible, then, from my point of view, I would not worry about only looking at the SEO side of things. When it comes to SEO, for Google, what matters is essentially the URL that is picked as the canonical. And, for that, it doesn’t need HSTS. It doesn’t need the pre-load list. That does not affect at all on how we pick the canonical. But rather, for the canonical, the important part is that we see that redirect from HTTP to HTTPS. And we can kind of get a confirmation within your website, through the sitemap file, the internal linking, all of that, that the HTTPS version is the one that should be used in Search. And if we use the HTTPS version in Search, that automatically gets all of those subtle ranking bonuses from Search. And the pre-load list and HSTS are not necessary there. So that’s kind of the part that I would focus on there.

How can I analyse why my site dropped in ranking for its brand name?

Q: (25:05) I don’t really have a great answer, but I think it’s important to at least mention as well: what are the possible steps for investigation if a website owner finds their website is not ranking for their brand term anymore, they’ve checked all of the things, and it doesn’t seem to be related to any of the usual issues?

  • (25:24) So, from my point of view, I would primarily focus on the Search Central Help Community and post all of your details there. Because this is where all of those escalations go, and the product experts in the Help forum can take a look at that. And they can give you a little bit more information. They can also give you their personal opinion on some of these topics, which might not match 100% what Google would say, but maybe they’re a little bit more practical, where, for example, probably not relevant to this site, but you might post something and say, well, my site is technically correct, and post all of your details. And one of the product experts looks at it and says it might be technically correct, but it’s still a terrible website. You need to get your act together and create better content. And, from our point of view, we would focus on technical correctness. And you need someone to give you that, I don’t know, personal feedback. But anyway, in the Help forums, if you post the details of your website with everything that you’ve seen, the product experts are often able to take a look and give you some advice on, specifically, your website and the situation that it’s in. And if they’re not able to figure out what is happening there, they also have the ability to escalate these kinds of topics to the community manager of the Help forums. And the community manager can also bring things back to the Google Search team. So if there are things that are really weird, and now and then something really weird does happen with regards to Search. It’s a complex computer system. Anything can break. But the community managers and the product experts can bring that back to the Search team. And they can look to see if there is something that we need to fix, or is there something that we need to tell the site owner, or is this kind of just the way that it is, which, sometimes, it is. But that’s generally the direction I would go for these questions. The other thing subtly mentioned here is that I think the site does not rank for its brand name. One of the things to watch out for, especially with regards to brand names, is that it can happen that you say something is your brand name, but it’s not a recognised term from users. For example, you might, I don’t know, call your website bestcomputermouse.com. And, for you, that might be what you call your business or what you call your website. Best Computer Mouse. But when a user goes to Google and enters “best computer mouse,” that doesn’t necessarily mean they want to go directly to your website. It might be that they’re looking for a computer mouse. And, in cases like that, there might be a mismatch between what we show in the search results and what you think should be shown in the search results for those queries, if it’s something more of a generic term. And these kinds of things also play into search results overall. The product experts see these all the time, as well. And they can recognise that and say, actually, just because you call your website bestcomputermouse.com (I hope that site doesn’t exist), that doesn’t necessarily mean it will always show on top of the search results when someone enters that query. But that’s kind of something to watch out for. But, in general, I would go to the Help forums here and include all of the information you know that might play a role here.
So if there was a manual action involved and you’re kind of, I don’t know, ashamed of that, which is kind of normal, include that information anyway. All of this information helps the product experts better understand your situation and give you something actionable that you can take as a next step, or to understand the situation a little bit better. So the more information you can give them from the beginning, the more likely they’ll be able to help you with your problem.

Sign up for our Webmaster Hangouts today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH 

SEO & web developers: friends or foes?

Is SEO just another constraint?

(0:59) MARTIN: So you are saying that for you as a web developer, SEO is just another constraint. What do you mean by that?

SURMA: I mean, as web developers, and I am pretending to be a representative of all web developers here, which I’m clearly not, the usual struggle involves stuff like, how many browsers do I support, or how far back in the browser versions do I go? Do I support IE11? Do I not support IE11? How do I polyfill certain things? Do I polyfill them, or do I use progressive enhancement? What kind of implications do both of these choices have? Do we actually make the design fully responsive? Do we do a separate mobile site? Like, there are all these choices that I already have to make. And then frameworks come along and are like, we’re just going to do most of the stuff on the client-side because we want to write a single-page app. And then you either have to say, do I set up something for server-side rendering or static rendering at build time, or do I go all-in on the single-page app, and just everything happens on the client? What do search engines think about that? And then search engines come in like, well, you should be doing this, and you should not be doing that because that gets penalised. And actually, we don’t even understand what you’re doing here because search engines are running Chrome 12 or something. And it’s just like it’s yet another constraint that I have to balance and make decisions on, whether following their restrictions is more important or following my business restrictions, my marketing restrictions, or my project management restrictions. And I feel like some people are frustrated about that sometimes.

MARTIN: I remember when I was a web developer, there was also this entire user-first, no, mobile-first, no, content-first, no, this-first, no, that-first. That’s probably also going in the same direction, and I understand the frustration. And I see that there are lots of requirements, and sometimes these requirements might even contradict each other. But I think as developers, we should understand what SEOs are trying to help us with and what search engines, separately from what we are building and doing, are actually trying to accomplish. And then we would see that all of these requirements are important, but maybe some of them are more important than others, and they are important in different stages, I would say. So, for instance, you mentioned responsive design versus having separate mobile versions, right? I would say that’s a decision that you need to make relatively early on in the technology process, right? Whereas should I use this feature, should I polyfill this feature, should I not use this feature because I need to support an old browser that doesn’t support it and the polyfill is tricky, that’s something that probably happens a little later in development, right?

SURMA: Yeah, I think I agree with that. It depends on how much flexibility you’re given as a developer. I think we all may or may not have lived through the experience of working with a designer who insists on a pixel-perfect design, which is just not something that works on the web, but sometimes, you’re given a task, and your job is to complete it and not have an opinion. I don’t want to go down the “it depends” route. But in the end, whatever we end up talking about, we probably won’t find a definitive answer. Like, context matters, and everyone has different constraints. And I think that’s really what it’s about: you need to just be aware of the constraints that you have and make decisions in the context of your situation.

SEO – friend, not foe

(04:41) SURMA: You mentioned something that I find quite interesting. You said SEOs are trying to help us with something because often they’re kind of like villains, almost like the people who just try to get you to the top of the rankings, whether you deserve it or not. But in the end, I feel like there is help going on. Both search engines, as well as the people that want you to do well in the search results, actually are trying to make your site better in the end. Like no search engine actually wants to give you results that are bad. That just doesn’t make sense. In the end, search engines are trying to put the best results on top. And if an SEO helps you get on top, then ideally, what that means is your site has gotten better. 

MARTIN: Yes, exactly. And I love that you are saying, like, oh yeah, you have to look at the context. You have to understand the constraints. And that’s actually something that a good SEO will help you with because if you look at it from a purely SEO standpoint, depending on what you’re building and how you’re building it, you might have different priorities. So, for instance, say this is a test version of a landing page. We just want to see if the customer segment is even interested in what we are potentially building later on, and you don’t want to build for the bin, right? You don’t want to build something that then, later on, you find out doesn’t actually work because there’s no interest in it. So then, for these things, SEO might be relatively important because you definitely want people to find it so that you get enough data to make decisions later on. But you might not be so constrained in terms of, oh yeah, this has to be client-side versus server-side. We don’t really have to make this super fast. We just have to get this into people’s faces, especially through search engines, so that we get some meaningful data to make decisions later on. Versus you’re building and improving on an existing product, and that should be built for longevity.

Building better websites for everyone

(6:33) MARTIN: So, a good SEO helps you understand what kind of requirements you should take into account. And SEO is a gigantic field, and they should pick the bits and pieces that actually matter for your specific context. So you said like, oh, we want to build a single page application. Maybe. Maybe you do, maybe you don’t. Maybe it’s fine to build a client-side rendering, but maybe consider doing some bits and pieces of server-side rendering because you reap some performance benefits there. And that also influences SEO because, as you say, search engines want to find the good things. So making your site better includes making it faster but also making it accessible because if you think about it, search engines are computers interacting with your website, working through your website and trying to understand what your website says. So they have basic accessibility needs. They don’t necessarily interact with things. They don’t click on stuff. And yet they need to work with your content. So it should be accessible to them. And SEOs will point these things out.

SURMA: That’s really interesting that you bring that up because I was just thinking about both performance, like loading performance, for example, and accessibility. So, on the one hand, it’s kind of accepted that loading performance is important. But now, for example, we have Core Web Vitals. And one of their core statements is that they don’t want to just put a number on a metric or something that’s measurable. They want to measure things that are important to user experience. And so the Core Web Vitals that we currently have, which is just three metrics, LCP, CLS, and FID, right, all of these are statistically correlated with users enjoying the site more or staying on your site longer. And that means if you optimise for those, you actually will get something out of that. You will get users that stay longer. And now that Search is looking into those, it means optimising for those metrics not only gets you higher in the rankings potentially, but also the people that do see your site will most likely stay longer or engage with it more, because we know that these metrics correlate with user behaviour. And I think that’s a really interesting approach, where, in the end, search engines are actually helping you do the right thing. And now I’m wondering about accessibility, which we keep talking about, and we know it’s important. And yet it feels like it always falls off the truck. In many projects, it’s an afterthought, and many people know that it needs to be considered from the very beginning of a project because it’s hard to shoehorn in at the end. It needs to be something that works from the start. Has any search engine ever done anything in this space to help developers be better with accessibility?

MARTIN: We are more or less going in that direction, not necessarily from a purely accessibility standpoint, but as search engines need to semantically understand what the site is about, we don’t just take the text and treat it as plain text. We basically try to figure out, oh, so this is a section, this is a section, this is the section that is most important on the page, this is just an explainer for the previous section, and so on and so forth. For that, we need the semantics that HTML gives us. And actually, these semantics are also often important for accessibility because people need to be able to navigate your content differently, maybe with a keyboard, maybe with a screen reader. And for that, the semantics on the page need to be in place from the get-go, basically. So in that direction, having better semantics does help search engines better understand your content and, as a byproduct, also helps people who have additional navigational needs better navigate your content. So you could say search engines are a little involved in terms of accessibility. That does not cover accessibility as a whole. There is so much more to accessibility than just that. But at least the core of the semantics on the web is taken care of here.
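As a hedged illustration of the kind of semantics Martin is talking about (the headings and copy are placeholders), sectioning elements and heading levels give search engines and assistive technology the same structural cues:

```html
<main>
  <article>
    <h1>How image compression works</h1>
    <section>
      <h2>Lossy vs lossless</h2>
      <p>Explainer text for this section goes here.</p>
    </section>
    <aside aria-label="Related reading">
      <a href="/codecs">A tour of modern image codecs</a>
    </aside>
  </article>
</main>
```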

Keeping up with web development trends is important for SEOs

(10:37) MARTIN: Another thing that I really found interesting is where you say, oh, you know, SEOs are often seen as just coming with all of these additional constraints and requirements. What is there that they could do differently that you would think would help you and other developers understand where they’re coming from or have a meaningful discussion about these things and turn that into a positive, constructive input?

SURMA: I don’t know if this is the answer you’re looking for, but one thing I have seen is that some SEOs need to put a bit more effort into being up to date on what is and is not good guidance, or more specifically, what search engines are and are not capable of processing correctly. I know that you have been fighting the “no, no, JavaScript is fine” fight for a long time now, but I think to this day, there are still some SEOs out there who insist that anything that is in JavaScript is invisible to the search engine. In general, I think it goes back to the trade-off thing, where web developers need to realise that SEOs are trying to help you be better, and SEOs need to realise that they can’t give advice as an “either you do this or you’re screwed” kind of approach. Like, it’s a trade-off. You can say that this is one way you can make a site better. This is another way, and this is yet another thing you can do. And all of these things will accumulate to make your site better, ideally resulting in a higher rank. But it’s not an all-or-nothing approach, I think. Sometimes certain constraints just outweigh other constraints, and you then make a decision to go with plan A rather than plan B or stick with what you currently have. We have recently seen a lot of shifts from purely client-side rendering to this hybrid approach, where the app is rendered on the server-side or even at build time but then turns into a single-page app once loaded onto the client, and that has upsides, and it has downsides. Like, we know that statically rendered content is very good for your first render; your largest contentful paint goes down. But now we have this additional blob of JavaScript state that is somehow inserted into the document, and then often a full dynamic client-side re-render happens, which can create an iffy user experience at times. And all these things are working for or against you in certain aspects. And I think that’s just something that SEOs need to be mindful of as well, that the developer cannot just follow everything that they say, because they’re not the only deciding force on a project. I’m not saying that all SEOs behave like this, of course, because I’m honestly quite inexperienced in working with an SEO directly. But just based on stories that I hear and people that I see on Twitter, it’s all a trade-off. And I think people just need to realise that everyone, in 90% of the cases, is trying to do the best they can and do their job well. And just keep that in mind. And then probably find a solution that works for both or is a compromise.

Change your perspective

MARTIN: Yeah. No, that makes perfect sense. And I wish that both SEOs and developers would look at it from a different perspective. Like, both SEOs and developers want to build something that people are using, right? You don’t want to build something that no one uses. That’s neither going to pay your bills for very long, nor is it making you happy to see, like, oh yeah, we built something that helps many people. That’s true for me. When I was a developer, I wanted to build things that have an impact, and that means that they need to be used by someone. And if we are building something that we genuinely are convinced is a good thing, then that should be reflected by the fact that search engines would agree and say, like, oh yeah, this is a good solution to this problem or this challenge that people might face, and thus want to showcase your solution, basically. But for that, there needs to be something that search engines can grasp and understand and look at and put into their index accordingly. So basically, they need to understand what the page is about, what it offers the users, is it fast, is it slow, is it mobile-friendly, all these kinds of things. And SEOs are then the ones who help with that, because you as a developer are focused on making it work in all the browsers that it needs to work in, making it fast, using all the best practices, using tree shaking, bundle splitting, all that kind of stuff. And then SEOs come in and help you make sure that search engines understand what you’re building and can access what you’re building, and that you are following the few best practices that you might not necessarily be aware of yet. But you are right. For that, SEOs need to follow up-to-date best practice guidance, and not all of them do. At the beginning of 2021, I ran a poll in one of the virtual events, asking if people were aware that Googlebot is now using an evergreen Chromium. So we are updating the Chromium instance that is used to render pages. And I think like 60% of the people were like, oh, I didn’t know that, even though we announced it in 2019 at Google I/O in May.

SURMA: How was that?

MARTIN: That was amazing. I mean, launching this has been a great initiative. But I’m surprised that, while I think we have gotten developers to notice that, not necessarily all SEOs have noticed. And it’s things like this that are not necessarily easy, or not even your job as a developer, where SEOs can really help you or at least make the right connections for you. For instance, I know you built squoosh.app, right?

SURMA: Well, not just me, but I was part of the team that built it.

MARTIN: Right. You were part of the team who built squoosh.app. And I think squoosh.app is awesome. For those who don’t know it, it’s a web app that allows you to experiment with different image compression settings on the image that you put into the application, right in your browser. It’s all working in the browser. You don’t have to install anything. And you basically get the best settings for the best gains in terms of file size, right? That’s roughly what it does.

SURMA: Yeah. It’s an image compressor, and you can fiddle with all the settings and can try out the next generation codecs now that are coming to the web. But yeah, you have more control than I think any other image compression app that I know.

MARTIN: And it’s really, really cool, and I really admire the work that the engineers put into this, that all the developers put into this, to make it work so smoothly, so quickly, so nicely. It implements lots of best practices. But for a search engine, if you were to sell that as a product, this might not be very good. And that’s because, if you look at it, it’s an interface that allows me to drag and drop an image into it, and then it offers a bunch of user interface controls to fine-tune settings. But if I were a robot accessing that page, it’s a bunch of HTML controls, but not that much content, right?

SURMA: Agreed

MARTIN: So would you want to have to sit down and figure out how you would describe this? You probably don’t want to do all that work by yourself. You want to focus on building cool stuff with the new image algorithms and fine-tuning how to make the settings work better, or more accessible, or easier to understand, right? That’s where you want to focus.

SURMA: Yeah. And I think I actually would like to get help from someone who knows, because I wouldn’t have been able to say. Like, I think our loading performance is excellent because we spent lots of time on making it good and trying to pioneer some techniques. But I wouldn’t have been able to tell you whether it gets a good ranking from a search bot or a bad ranking, to be honest. I mean, the name is unique enough that it’s very Google-able, so I think even if it didn’t do so well, people would probably find it. But in the end, it’s actually a very interesting example because you’re completely right. The opening page, because it’s about images, mostly consists of images. The only text we have on the landing page is the name, the file size of the demo images, and the licensing link. So there’s not much going on for a bot to understand what the site does, especially because, for something this specific, there’s not even much you can do with semantic markup, as you said. Right, OK, cool, there’s an image and an input tag. You can drag and drop an image. But even that, even the drag and drop, is actually only communicated via the user interface, not via the markup. And so yeah, that’s a really interesting example. Like, I would have no idea how to optimise it. I would have probably said, like, the meta description tag. I don’t know. And then John Mueller told me that apparently, we don’t pay attention to the meta description tag anymore.

MARTIN: Well, we do. It’s the keywords that we don’t.

SURMA: Oh, the keywords are the ones. OK, I take that back. Yeah, exactly. So I think you’re right that it’s very easy for developers to sometimes guess what is good for SEO and what is bad, and it pays to actually get input from someone who put in the time to learn what is actually going on and to keep up to date with the most recent updates. As you say, people apparently don’t even know that Googlebot is now evergreen Chromium, which is amazing. So there are probably a lot of SEOs who go around saying, like, no, no, no, no, you can’t use Shadow DOM or something like that, even if they know the JavaScript actually works. I agree. Get someone who knows.

Making things on the web is a team sport.

(20:26) SURMA: I mean, I’ve been saying that even as a very enthusiastic experimenter and web developer, one single person cannot really understand and use the entire web platform. It’s now so incredibly widespread in the areas that it covers. So you can do Web Audio, WebAssembly, WebUSB, MIDI, and all these things. Like, you will not have experience in all of these things. And some of these, like WebGL itself, are huge rabbit holes to fall into. So, pick some stuff. Get good at it. And for things you don’t know, get help, because otherwise you’re going to work on half-knowledge that very likely will end up being counterproductive for what you’re trying to achieve.

Sign up for SEO & Web Developers today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH 

WebMaster Hangout – Live from June 03, 2022

Can I use two HTTP result codes on a page?

Q: (01:22) All right, so the first question I have on my list here is it’s theoretically possible to have two different HTTP result codes on a page, but what will Google do with those two codes? Will Google even see them? And if yes, what will Google do? For example, a 503 plus a 302.

  • (01:41) So I wasn’t aware of this. But, of course, with the HTTP result codes, you can include lots of different things. Google will look at the first HTTP result code and essentially process that. And you can theoretically still have two HTTP result codes or more there if they are redirects leading to some final page. So, for example, you could have a redirect from one page to another page. That’s one result code. And then, on that other page, you could serve a different result code. So a 301 redirect to a 404 page is an example that happens every now and then. And from our point of view, in those chain situations where we can follow the redirect to get a final result, we will essentially just focus on that final result. And if that final result has content, then that’s something we might be able to use for canonicalisation. If that final result is an error page, then it’s an error page. And that’s fine for us too.

Does using a CDN improve rankings if my site is already fast in my main country?

Q: (02:50) Does putting a website behind a CDN improve ranking? We get the majority of our traffic from a specific country. We hosted our website on a server located in that country. Do you suggest putting our entire website behind a CDN to improve page speed for users globally, or is that not required in our case?

  • (03:12) So obviously, you can do a lot of these things. I don’t think it would have a big effect on Google at all with regards to SEO. The only effect where I could imagine that something might happen is what users end up seeing. And kind of what you mentioned, if the majority of your users are already seeing a very fast website because your server is located there, then you’re kind of doing the right thing. But of course, if users in other locations are seeing a very slow result because perhaps the connection to your country is not that great, then that’s something where you might have some opportunities to improve that. And you could see that as something in terms of an opportunity in the sense that, of course, if your website is really slow for other users, then it’s going to be rarer for them to start going to your website more because it’s really annoying to get there. Whereas, if your website is pretty fast for other users, then at least they have an opportunity to see a reasonably fast website, which could be your website. So from that point of view, if there’s something that you can do to improve things globally for your website, I think that’s a good idea. I don’t think it’s critical. It’s not something that matters in terms of SEO in that Google has to see it very quickly as well or anything like that. But it is something that you can do to kind of grow your website past just your current country. Maybe one thing I should clarify, if Google’s crawling is really, really slow, then, of course, that can affect how much we can crawl and index from the website. So that could be an aspect to look into. In the majority of websites that I’ve looked at, I haven’t really seen this as being a problem with regards to any website that isn’t millions and millions of pages large. So from that point of view, you can double-check how fast Google is crawling in the Search Console and the crawl stats. And if that looks reasonable, even if that’s not super fast, then I wouldn’t really worry about that.

Should I disallow API requests to reduce crawling?

Q: (05:20) Our site is a live stream shopping platform. Our site currently spends about 20% of the crawl budget on the API subdomain and another 20% on image thumbnails of videos. Neither of these subdomains has content which is part of our SEO strategy. Should we disallow these subdomains from crawling, or how are the API endpoints discovered or used?

  • (05:49) So maybe the last question there first. In many cases, API endpoints end up being used by JavaScript on your website, and we will render your pages. And if they access an API that is on your website, then we’ll try to load the content from that API and use that for the rendering of the page. And depending on how your API is set up and how your JavaScript is set up, it might be that it’s hard for us to cache those API results, which means that maybe we crawl a lot of these API requests to try to get a rendered version of your pages so that we can use those for indexing. So that’s usually the place where this is discovered. And that’s something you can help with by making sure that the API results can be cached well and that you don’t inject any timestamps into URLs, for example, when you’re using JavaScript for the API, all of those things. If you don’t care about the content that’s returned from these API endpoints, then, of course, you can block the whole subdomain from being crawled with the robots.txt file. And that will essentially block all of those API requests from happening. So that’s something where you first of all need to figure out whether these API results are actually part of the primary content, or important critical content, that you want to have indexed by Google.
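As a hedged sketch of the cache-busting issue John mentions (the endpoint and renderProducts function are hypothetical), the second request pattern makes every render hit a unique URL, which prevents the API response from being cached and reused:

```html
<script>
  // Cacheable: the same API URL is requested on every render
  fetch('/api/products?category=shoes')
    .then(response => response.json())
    .then(renderProducts);

  // Hard to cache: appending a timestamp makes the URL unique every time,
  // so neither Googlebot's renderer nor the browser can reuse a cached response
  fetch('/api/products?category=shoes&_=' + Date.now())
    .then(response => response.json())
    .then(renderProducts);
</script>
```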

Q: (08:05) Is it appropriate to use a nofollow attribute on internal links to avoid unnecessary crawler requests to URLs which we don’t wish to be crawled or indexed?

  • (08:18) So obviously, you can do this. It’s something where I think, for the most part, it makes very little sense to use nofollow on internal links. But if that’s something that you want to do, go for it. In most cases, I would try to do something like using the rel=canonical to point at URLs that you do want to have indexed, or using the robots.txt for things that you really don’t want to have crawled. So try to figure out: is it more like a subtle thing, where you have something that you prefer to have indexed, and then use rel=canonical for that? Or is it something where you say, actually, when Googlebot accesses these URLs, it causes problems for my server. It causes a large load. It makes everything really slow. It’s expensive, or what have you. And for those cases, I would just disallow the crawling of those URLs. And try to keep it kind of on a basic level there. And with the rel=canonical, obviously, we’ll first have to crawl that page to see the rel=canonical. But over time, we will focus on the canonical that you’ve defined. And we’ll use that one primarily for crawling and indexing.
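For reference, the rel=canonical hint is a single link element in the head of the duplicate or parameterised page, pointing at the version you prefer to have indexed; the URLs below are placeholders:

```html
<!-- On /shoes/blue-widget?utm_source=newsletter, point at the preferred URL -->
<head>
  <link rel="canonical" href="https://www.example.com/shoes/blue-widget">
</head>
```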

Why don’t site:-query result counts match Search Console counts?

Q: (09:35) Why doesn’t the result count of a site: query, which returns a giant number of results, match what Search Console and the index data show for the same domain?

  • (09:55) Yeah, so this is a question that comes up every now and then. I think we’ve done a separate video on it as well, so I would check that out. I think we’ve talked about this for a long time already. Essentially, what happens there is that there are slightly different optimisations that we do for site queries, in the sense that we just want to give you a number as quickly as possible. And that can be a very rough approximation. And that’s something where, when you do a site query, that’s usually not something that the average user does. So we’ll try to give you a result as quickly as possible. And sometimes, that can be off. If you want a more exact number of the URLs that are actually indexed for your website, I would definitely use Search Console. That’s really the place where we give you the numbers as directly and as clearly as possible. And those numbers will also fluctuate a little bit over time. They can fluctuate depending on the data centre sometimes. They go up and down a little bit as we crawl new things, and we kind of have to figure out which ones we keep, all of those things. But overall, the number in Search Console, in the indexing report I think, is really the number of URLs that we have indexed for your website. I would not use the “about” number in the search results for any diagnostic purposes. It’s really meant as a very, very rough approximation.

What’s the difference between JavaScript and HTTP redirects?

Q: (11:25) OK, now a question about redirects again: the differences between JavaScript redirects versus 301 HTTP status code redirects, and which one I would suggest for short links.

  • (11:43) So, in general, when it comes to redirects, if there’s a server-side redirect where you can give us a result code as quickly as possible, that is strongly preferred. The reason it is strongly preferred is that it can be processed immediately. So for any request that goes to one of those URLs on your server, we’ll see that redirect, we will see the link to the new location, and we can follow that right away. Whereas, if you use JavaScript to generate a redirect, then we first have to render the JavaScript and see what the JavaScript does. And then we’ll see, oh, there’s actually a redirect here. And then we’ll go off and follow that. So if at all possible, I would recommend using a server-side redirect for any kind of redirect that you’re doing on your website. If you can’t do a server-side redirect, then sometimes you have to make do, and a JavaScript redirect will also get processed. It just takes a little bit longer. The meta refresh type of redirect is another option that you can use. It also takes a little bit longer because we have to figure that out on the page. But server-side redirects are great. And there are different server-side redirect types. So there’s 301 and 302, and there are also 307 and 308, something along those lines. Essentially, the difference there is whether it’s a permanent redirect or a temporary redirect. A permanent redirect tells us that we should focus on the destination page. A temporary redirect tells us we should focus on the current page that is redirecting and kind of keep going back to that one. And the difference between 301 and 302 on one hand, and 307 and 308 on the other, is more of a technical difference with regards to the different request types. So if you enter a URL in your browser, then you do what’s called a GET request for that URL, whereas if you submit a form or use specific types of API requests, then that can be a POST request. And the 301 and 302 type redirects would only redirect the normal browser requests and not the form and API requests. So if you have an API on your website that uses POST requests, or if you have forms where you suspect someone might be submitting something to a URL that you’re redirecting, then obviously, you would use the other types. But for the most part, it’s usually 301 or 302.
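
As a rough sketch of the three options, assuming an nginx server and placeholder URLs (the preferred server-side version first, then the slower client-side fallbacks):

  # nginx: server-side, permanent redirect (processed immediately)
  location = /old-page {
      return 301 https://www.example.com/new-page;
  }

  <!-- HTML fallbacks, only picked up after the page is fetched and rendered -->
  <meta http-equiv="refresh" content="0; url=https://www.example.com/new-page">
  <script>window.location.replace("https://www.example.com/new-page");</script>

Use 302 (or 307) instead of 301 if the move is only temporary.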

Should I keep old, obsolete content on my site, or remove it?

Q: (14:25) I have a website for games. After a certain time, a game might shut down. Should we delete games that no longer exist or keep them in an archive? What’s the best option so that we don’t get any penalty? We want to keep information about the games through videos, screenshots, et cetera.

  • (14:42) So essentially, this is totally up to you. It’s something where you can remove the content of old things if you want to. You can move them to an archive section. You can make those old pages noindex so that people can still go there when they’re visiting your website. There are lots of different variations there. The main thing you will probably want to do, if you want to keep that content, is to move it into an archive section, as you mentioned. The idea behind an archive section is that it tends to be less directly visible within your website. That makes it easy for users and for us to recognise that this is the primary content, like the current games or current content that you have, and over here is an archive section where you can go in and dig for the old things. And the effect there is that it’s a lot easier for us to focus on your current live content and to recognise that this archive section, which is kind of separated out, is more something that we can go off and index, but it’s not really what you want to be found for. So that’s the main thing I would focus on there. And then whether or not you make the archived content noindex after a certain time, or for other reasons, is totally up to you.
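
If you do later decide to noindex archived pages, that is a single robots meta tag in the head of each archived page (a minimal sketch; the page stays reachable for visitors, it just drops out of the index):

  <meta name="robots" content="noindex">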

How can I get specific pages to appear as site links?

Q: (16:02) Is there any strategy by which desired pages can appear as a site link in Google Search results?

  • (16:08) So site links are the additional results that are sometimes shown below a search result, where it’s usually just a one-line link to a different part of the website. And there is no meta tag or structured data that you can use to force a site link to be shown. It’s a lot more that our systems try to figure out what is actually related or relevant for users when they’re looking at this one web page. And for that, our recommendation is essentially to have a good website structure, to have clear internal links so that it’s easy for us to recognise which pages are related, and to have clear titles that we can use and show as a site link. With that, there’s no guarantee that any of this will be shown, but it helps us to figure out what is related. And if we do think it makes sense to show a site link, then it’ll be a lot easier for us to actually choose one based on that information.

Our site embeds PDFs with iframes; should we OCR the text?

Q: (17:12) A more technical one here. Our website uses iframes and a script to embed PDF files onto our pages. Is there any advantage to taking the OCR text of the PDF and pasting it somewhere into the document’s HTML for SEO purposes, or will Google simply parse the PDF contents with the same weight and relevance to index the content?

  • (17:40) Yeah. So I’m just momentarily thrown off, because it sounds like you want to take the text of the PDF and just kind of hide it in the HTML for SEO purposes. And that’s something I would definitely not recommend doing. If you want to have the content indexable, then make it visible on the page. So that’s the first thing I would say there. With regards to PDFs, we do try to take the text out of the PDFs and index that for the PDFs themselves. From a practical point of view, what happens with a PDF is that, as one of the first steps, we convert it into an HTML page and try to index it like an HTML page. So essentially, what you’re doing is framing an indirect HTML page. And when it comes to iframes, we can take that content into account for indexing within the primary page, but it can also happen that we index the PDF separately anyway. So from that point of view, it’s really hard to say exactly what will happen. I would turn the question around and frame it as: what do you want to have happen? If you want your normal web pages to be indexed with the content of the PDF file, then make it so that that content is immediately visible on the HTML page. So instead of embedding the PDF as the primary piece of content, make the HTML content the primary piece and link to the PDF file. And then there is the question of whether you want those PDFs indexed separately or not. Sometimes you do want to have PDFs indexed separately, and if so, linking to them is great. If you don’t want to have them indexed separately, then using robots.txt to block them from being crawled is also fine. You can also use the noindex X-Robots-Tag HTTP header. It’s a little bit more complicated, because you have to serve that as a header with the PDF files if you want to have those PDF files available in the iframe but not actually indexed.
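
A minimal sketch of that last option, assuming an Apache server with mod_headers enabled (the file pattern is just an example):

  <FilesMatch "\.pdf$">
      Header set X-Robots-Tag "noindex"
  </FilesMatch>

The header has to be sent with the PDF responses themselves; a robots meta tag can’t be used here, because a PDF has no HTML head to put it in.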

Should we mask external links with the PRG pattern?

Q: (20:02) We want to mask links to external websites to prevent the passing of our link juice. We think the PRG approach is a possible solution. What do you think? Is this solution overkill, or is there a simpler solution out there?

  • (20:17) So the PRG pattern is a complicated way of essentially making a POST request to the server, which then redirects somewhere else to the external content, so that Google will never find that link. From my point of view, this is super overkill. There’s absolutely no reason to do this unless there’s really a technical reason that you absolutely need to block the crawling of those URLs. I would either just link to those pages normally or use rel=nofollow to link to those pages. There’s absolutely no reason to go through this weird POST redirect pattern; it just causes a lot of server overhead, and it makes it really hard to cache that request and take users to the right place. So I would just use a nofollow on those links if you don’t want to have them followed. The other thing is, of course, that blocking all of your external links rarely makes any sense. Instead, I would make sure that you’re taking part in the web as it is, which means that you link to other sites naturally and they link to you naturally, taking part in the normal web and not trying to keep Googlebot locked into your specific website. Because I don’t think that really makes any sense.
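
If you do decide not to pass signals through a particular external link, that is a one-attribute change rather than a whole POST/redirect setup (the URL here is a placeholder):

  <a href="https://www.external-example.com/" rel="nofollow">External resource</a>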

Does it matter which server platform we use, for SEO?

Q: (21:47) For Google, does it matter if our website is powered by WordPress, WooCommerce, Shopify, or any other service? A lot of marketing agencies suggest using specific platforms because it helps with SEO. Is that true?

  • (22:02) That’s absolutely not true. So there is absolutely nothing in our systems, at least as far as I’m aware, that would give any kind of preferential treatment to any specific platform. With pretty much all of these platforms, you can structure your pages and your website however you want. And with that, we will look at the website as we find it. We will look at the content that you present, the way the content is presented, and the way things are linked internally, and we will process that like any HTML page. As far as I know, our systems don’t even look at the underlying structure of the back end of your website or do anything special with it. So from that point of view, it might be that certain agencies have a lot of experience with one of these platforms and can help you make really good websites with that platform, which is perfectly legitimate and could be a good reason to go with that platform. But it’s not the case that any particular platform has an inherent advantage when it comes to SEO. With pretty much all of these platforms, you can make reasonable websites, and they can all appear well in search.

Does Google crawl URLs in structured data markup?

Q: (23:24) Does Google crawl URLs located in structured data markup, or does Google just store the data?

  • (23:31) So, for the most part, when we look at HTML pages, if we see something that looks like a link, we might go off and try that URL out as well. That’s something where, if we find a URL in JavaScript, we can try to pick it up and use it. If we find a link in a text file on a site, we can try to crawl that and use it. But it’s not really a normal link, so I would recommend that if you want Google to go off and crawl that URL, make sure there’s a natural HTML link to that URL, with clear anchor text as well, so that you give some information about the destination page. If you don’t want Google to crawl that specific URL, then maybe block it with robots.txt, or on that page use a rel=canonical pointing to your preferred version, anything like that. So those are the directions I would go there. I would not blindly assume that just because it’s in structured data it will not be found, nor would I blindly assume that just because it’s in structured data it will be found. It might be found; it might not be found. I would instead focus on what you want to have happen there. If you want to have it seen as a link, then make it a link. If you don’t want to have it crawled or indexed, then block crawling or indexing. That’s all totally up to you.
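
To illustrate the difference (example.com and the product here are placeholders): the URL inside the structured data may or may not be fetched, whereas the plain HTML link below it is a clear signal that the page should be crawled:

  <script type="application/ld+json">
  {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example product",
    "url": "https://www.example.com/products/example-product"
  }
  </script>

  <a href="https://www.example.com/products/example-product">Example product</a>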

Sign up for our Webmaster Hangouts today!

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH 

LION Digital Welcomed as Shopify Plus Partners

For over a decade, Shopify has been making commerce better for everyone by reducing barriers to business ownership and offering a suite of services, including payments, marketing, inventory management and customer engagement tools to help brands scale. The Shopify Plus Partners program recognises agencies that combine world-class industry and platform expertise and specialise in solutions to help eCommerce and retail brands grow.

LION Digital has been recognised for its long-term partnership with Shopify, successfully applying the Shopify platform’s best practices and generating online growth for brands like Ledlenser, OneWorld Collection, Nutrition Warehouse, and Havaianas, to name a few.

Leo Comino, CEO and Founder of LION Digital echoed the team’s excitement in officially joining the Plus roster, “We have been working alongside the Shopify Plus team for a long time and are excited to take our relationship to the next level as Digital Marketing Partners”.

LION stands for Leaders In Our Niche: the agency stands out by hiring leaders with over a decade of eCommerce channel experience who recognise that success comes from cross-channel cohesion. Having bolstered the senior leadership team with the recent appointments of Clare Graham as Head of Paid Media, joining from Dentsu’s iProspect, and Stelios Moudakis as General Manager, from Omnicom’s Resolution Digital, the team is well poised to deliver on its vision to drive performance and provide an exceptional client experience.

Leo Comino noted of the appointments: “We are very proud of our growth over the past two years, and the strategic hires of Clare and Stelios will ensure our product and cohesive client experience offering reaches new heights and reinforces our commitment to being true partners of our clients and tech partners”.

LION provides digital strategy, SEO, Paid Media, Email & Social services for some of the biggest brands in Australia and around the globe.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Are You Ready for Google Analytics 4?

With all the changes in the digital marketing landscape over the past decade, a more sophisticated way to collect and organise user data was much needed. In the fall of 2020, Google introduced Google Analytics 4 (GA4), a new version that, so far, has worked in parallel with its predecessor, Universal Analytics (UA). However, Google recently announced that Universal Analytics will stop processing data on July 1, 2023, and that its premium version, 360 Universal Analytics, will follow in October 2023. It is worth noting that the premium features from 360 Universal Analytics will be rolled into the new iteration of GA4 as well.

Getting used to new software takes time, especially in this case, considering that Google Analytics 4 presents an entirely different interface and configuration to UA. This is most certainly why Google made the announcement well in advance: to allow businesses still using UA to migrate and get used to the latest version. It is also worth noting that GA4 doesn’t carry over any of the historical data you’ve tracked in Universal Analytics, which is another good reason to start migrating now, since data continuity and reporting are paramount to your business.

Some of the main features of Google Analytics 4

Event-based data model

Probably the most significant change in the software, Google Analytics 4 introduces an event-based model that offers flexibility in the way data is collected, while also providing a new set of reports based on that model.

With Universal Analytics, businesses relied on “sessions”, a more fragmented model that grouped data into predefined hit types within time-boxed visits. It also only worked with specific, predefined categories of information, making custom data much more challenging to obtain. Now that everything can be an event, there’s a broader opportunity to understand and compare customer behaviour through custom data across various platforms.
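
As a rough illustration of the event model, this is roughly how a purchase might be sent to GA4 with gtag.js; the event and parameter names follow Google’s recommended ecommerce events, but the values here are placeholders:

  gtag('event', 'purchase', {
    transaction_id: 'T-12345',   // placeholder order ID
    value: 59.95,                // total order value
    currency: 'AUD',
    items: [
      { item_id: 'SKU-123', item_name: 'Example product', price: 59.95, quantity: 1 }
    ]
  });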

Operation across platforms

Prior to GA4, businesses required different tools to analyse website and app data separately, which made it difficult to obtain a global picture of their user traffic. GA4 adds a new kind of property that merges app and web data for reporting and analysis.

Thanks to this update, if you have users coming to you through different platforms, you can now use a single set of data to know which marketing channels are acquiring more visitors and conversions.

No need to rely on cookies

As mentioned at the beginning of this article, a lot has changed in the last decade regarding digital marketing; this includes an ever-growing emphasis on user privacy.

Big tech companies, such as Apple, have started taking a privacy-first approach, which is why Safari began blocking all third-party cookies in 2020. So it comes as no surprise that Google has also announced it will do the same for Chrome in 2023.

With GA4, Google is moving away from a cookie-dependent model, no longer needing to store IP addresses for its functionality, and aiming to be more compliant with today’s international privacy standards.

Audience Triggers

This is a handy feature that lets brands set conditions for a user to move from one audience group to another (for example, once they’ve bought from a specific product category). You can then better personalise the ad experience, offering complementary or similar products across display, video and discovery placements, and bring customers back to shop with you again.

More sophisticated insights

GA4 promises a more modern way of collecting and organising data, but the most important thing for businesses is how to use that data. Machine learning has been applied in Google Analytics 4 to generate predictive insights about user behaviour and conversions, which can be pivotal to improving your marketing.

Integrations

GA4 brings deeper integrations with other Google products, such as Google Ads, allowing you to optimise marketing campaigns by using your data to build custom audiences that are more relevant to your marketing objectives, and to utilise Google Optimize for A/B testing.

In summary, Google Analytics 4 combines features designed to understand client behaviour in more detail than Universal Analytics previously allowed whilst prioritising user privacy. It also brings about a very friendly interface, with some drag-and-drop functionality to help build reports, reminiscent of Adobe Analytics Workspace.

[Screenshots: report building in Adobe Analytics Workspace and the GA4 drag-and-drop interface]

You can chat with the team at LION Digital, and we can help you get set up on GA4.

We had a good chat with a colleague at our first Shopify Plus Partner meetup who was developing a Shopify Plus site for their client. They noted that the GA4 setup they had to do was quite complex and time-consuming, as event tracking needed to be configured, including eCommerce tracking, and Data Studio reports needed to be rebuilt. It took him a good three hours that he was keen not to repeat. Thankfully, we’ve got a team of skilled specialists to help you set up GA4, and we can connect it to our Digital ROI Dashboard to help you get the insights you need and look at your Channel Action Plans.

GET IN CONTACT TODAY AND LET OUR TEAM OF ECOMMERCE SPECIALISTS SET YOU ON THE ROAD TO ACHIEVING ELITE DIGITAL EXPERIENCES AND GROWTH

Contact Us

Article by

Dimas Ibarra –
Digital Marketing Executive