Martin MacDonald helps SEOs in 2023 to understand how Google is shifting from informational retrieval to informational suggestion, how the game is changing, and why having richer content and good media assets is a must if you want to compete on the SERP of tomorrow.
Martin says: “We’re finally reaching the year in which Google moves away from the environment we’ve had for the last two decades: 10 blue links. They’re moving away from being a purely informational retrieval system to an informational suggestion system.
Google Discover has been a thing for the last few years but, increasingly (if you follow Barry Schwartz’s updates), we see tests where they’re injecting additional panels into the overall search results. This has to be a manoeuvre to try and get back some of the eyeball time that Google has been losing to social media like Facebook and Twitter - and that YouTube is losing to TikTok.
You can expect a much richer content-based front end for search results over the next year. If you aren’t ready for this - if you have a website that’s entirely based on text, you haven’t got good media assets on your website, or you’re not using OG definitions for images - then you’re going to be losing out.
If Google isn’t able to correctly define your content in the Knowledge Panel that’s going to become part of the search results, you’re going to lose CTR. You need to start focusing on things beyond the words that appear on the page and look at the entire multimedia return that each one of your web pages is giving.”
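As an aside, checking whether your pages actually expose Open Graph image definitions is easy to script. The sketch below uses only the Python standard library, and the sample HTML is made up for illustration; it simply collects `og:image` declarations from a page:

```python
from html.parser import HTMLParser

class OGImageFinder(HTMLParser):
    """Collects URLs from <meta property="og:image" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("property") == "og:image" and attrs.get("content"):
                self.images.append(attrs["content"])

# Hypothetical page source; pages that yield an empty list here are
# candidates for media enrichment.
html = """
<html><head>
  <meta property="og:title" content="Product page">
  <meta property="og:image" content="https://example.com/hero.jpg">
</head><body></body></html>
"""

finder = OGImageFinder()
finder.feed(html)
print(finder.images)
```

Run across a crawl of your own site, the same parser gives a quick inventory of which templates are missing image definitions entirely.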
What do SEOs need to do differently to adapt to this new world?
“You need a clearer understanding of who you’re marketing to. It’s not about using marketing personas in the old sense - where we would make up an individual and then target them - you need to think about marketing personas in the individual sense.
Google has an amazing understanding of the items that a user is interested in once they recognise that individual. If you open a new tab in Chrome now, you get very good, very personalised Discover suggestions, all of which are entirely unique to the individual. Google has been experimenting with this for over a year now, and it demonstrates that they understand not only the best results for what people are searching for, but also the items that people are going to be most interested in.
To understand this, look at the entities that individual pages or pieces of content are returning through various APIs. If you assume that Google maintains a table defining the interests of individuals that it recognises in order to build Discover recommendations, you can see how it would be able to associate content with what that content is about and associate what that content is about with whether or not you’re going to be interested in it.
In order to have your content surface in that environment (particularly if this way of thinking makes its way through to actual search results), you are going to need a rich level of content to compete.
If you go back 10 or 15 years, we used to get by with two or three sentences of poorly constructed Markov-chained content on a web page. Over time, those requirements have expanded. It became: more text, longer text, more unique text, more readable text, better text, etc. It has become harder and harder as the years have gone by. However, every one of those things was about the text.
Google have the Discover product, they have YouTube, they have user data, and they have people’s search queries. What they’ve never done is tied all of these things together. Now, you can see how they would be able to do that and start taking chunks out of the Facebooks, Twitters, and TikToks of this world.
Ultimately, Google’s mission has changed. Back when they formed 20 years ago, they wanted to catalogue and categorise the world’s information and make it available to search. I don’t think that’s their mission statement anymore. I think their mission statement is quite clearly: ‘We want to have the most eyeball time and attention focused on us of any company on the internet.’ This is the only realistic way that they can leverage their dominance in search to be able to compete in the attention economy, which is something that they’ve been losing out on to Reddit, Facebook, Twitter, TikTok, et al, for the last couple of years.
When they’ve tried to compete with these platforms in their own verticals, it’s never worked out very well. I can see them developing something that sits in the middle instead, leveraging their dominance in search by injecting a much richer ecosystem into the search results. That would change Google from the 10 blue links that we think of now into a much more social-media-looking environment, built on rich content tailored to what you’re searching for as well as what your interests are. That’s going to be a fundamental change over the coming years.”
How do you define your relevance to an entity?
“It is still about schema and building authority because those things are not going to go away. Schema allows Google to have a much quicker and easier understanding of the content that’s on the webpage, and authority is something that’s always going to be important from a ranking perspective.
There is more to it now, though. If you pass content through any of the NLP APIs (there’s a fantastic tool out there specialising in this right now, but other APIs are available too), what it returns is a set of recognised entities - the things it identifies on that page. If you analyse the entities that appear on each one of the pages in a set of search results, you can see what is present on pages 1 through 6 that isn’t present on pages 7, 8, or 9.
We conducted some experiments in 2021 and early 2022, where we used this method to plug the gap in the content on a couple of eCommerce sites. We did this by making sure that all of the relevant pages had the information that Google was expecting to see for one of the top results – and our testing was extremely positive. Certain things were highlighted that you simply wouldn’t have noticed, had you not been able to accumulate this data for an entire SERP set.
For instance, there was a specific individual that was relevant to the history of a product that was mentioned in the top 3 or 4 results, so we passed this on to the client (a luxury goods eCommerce company). They refactored and rebuilt their content to also include information about the history of that product, and that content enrichment helped improve the overall search rankings. Is that because they included entities that Google was expecting or is it because they had simply enriched their content? Unless you’re testing hundreds of thousands of these at a time, it’s hard to make a scientific judgement on that.
I can say, from the hundreds that we have done, that it has certainly had a very strong impact. Tracking and understanding this data at scale is also something that I see as being a big competitive advantage in 2023, and moving forward as well.”
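The entity-gap comparison Martin describes can be sketched in a few lines. Everything below is hypothetical sample data standing in for the output of an NLP entity API; the idea is simply to surface entities that are common on top-ranking pages but rare on lower-ranking ones:

```python
from collections import Counter

# Hypothetical entity sets, one per ranking page, as an NLP API might return.
top_pages = [
    {"watchmaking", "Abraham-Louis Breguet", "tourbillon", "luxury goods"},
    {"watchmaking", "tourbillon", "chronometer", "Abraham-Louis Breguet"},
    {"watchmaking", "Abraham-Louis Breguet", "sapphire crystal"},
]
low_pages = [
    {"watchmaking", "chronometer"},
    {"watchmaking", "sapphire crystal"},
]

def entity_gap(winners, losers, threshold=0.5):
    """Return entities present on most top-ranking pages but rare lower down."""
    top_counts = Counter(e for page in winners for e in page)
    low_counts = Counter(e for page in losers for e in page)
    return sorted(
        entity
        for entity, n in top_counts.items()
        if n / len(winners) >= threshold
        and low_counts[entity] / len(losers) < threshold
    )

print(entity_gap(top_pages, low_pages))
```

In this invented example, the gap analysis would flag a historical figure and a product feature as topics the lagging pages never mention - exactly the kind of finding Martin describes passing on to the client.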
How do you measure the positive impact of contributing to the enriched SERP more than your competitor?
“It’s going to depend on what the actual SERP looks like, and I do have an entirely myopic view because I’ve spent 20 years looking at 10 blue links on a page (or, increasingly, 7 blue links and 55 ad spots).
I am envisaging the new SERP as roughly 10 information cards, similar to what we see in Discover - and there will be ranking as well. Your inclusion in these cards is either going to be based on having rich media to put in them or you will just have a meta title and meta description in the same way that we do at the moment. If everyone else on the page has a video or images embedded within their listing, then they’re going to get the CTR and you’re not.
It might simply be that we’re going to track this based on overall ranking position but that’s, again, my myopic view of having sequential links on a page. We may end up having some kind of tessellation of results instead; there’s no way to say what the ultimate UI will be. That UI is fundamentally what’s going to drive reporting and the overall targets that we put on it.
If we’re still seeing an ordered list, I don’t think things will change much, as far as reporting is concerned. However, it’s possible that we move away from that model entirely. That would be a very interesting shake-up because it hasn’t changed much for two decades now. We’re still typing a query into a form on a page, and then getting X number of results back and clicking on one of them. We’re overdue a significant technological update to the way that we interact with search.”
You’ve previously mentioned Google’s data obfuscation. What do you mean by that and how can SEOs get around it?
“Let’s take another trip down memory lane, for those of us that are old enough to remember when Google Analytics actually had keywords in the landing page report. That was fantastic. You could tell which keywords triggered a conversion, but Google took that away a long time ago. Many people working in SEO do not remember, or have no idea, that we used to have access to this data.
They took it away because of ‘privacy’, yet we still managed to receive it in places like Google Search Console. Fundamentally, that has always been available, and I’ll come back to that idea for paid search.
On organic search, the personalised queries that Google is not telling us about are privacy queries - the PII queries. As a ratio, these have been increasing over the last couple of years. A couple of people have done some work this year into how many queries were reported as specific line items in Google Search Console versus the total amount of organic search traffic that was reported at the site level. They used that to determine how much of the traffic came from keywords that didn’t appear in the keyword table.
It’s not a perfect approach, because we don’t know whether or not there are individual pieces of data or whole keywords that are missing, so it’s impossible to make that evaluation. From the data that we can see (I have a tonne of data from enterprise clients as well), that ratio is increasing slowly. I’ve seen some people claim that 40%+ of the traffic they’re aware of is not being reported at a keyword level through Google Search Console. Personally, I’ve only seen 10-12%, but three years ago I was seeing 2-4%. Even that number has gone up greatly.
It very much depends on the kind of site that you’re looking at. If it’s the kind of site that has lots of PII data, then it’s going to have a much higher share of keywords that it’s no longer receiving data on. Either way, it shows the general trajectory that Google Search Console is going in.
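The measurement Martin references comes down to simple arithmetic: compare the clicks attributed to individual query rows against the site-level total. The numbers below are invented purely for illustration:

```python
def hidden_query_share(site_total_clicks, query_rows):
    """Fraction of site-level clicks not attributed to any query row."""
    attributed = sum(clicks for _query, clicks in query_rows)
    return (site_total_clicks - attributed) / site_total_clicks

# Invented month of data: the site-level report shows 10,000 organic clicks,
# but the query table only accounts for 8,900 of them.
rows = [("brand term", 5000), ("generic term", 2400), ("long-tail term", 1500)]
share = hidden_query_share(10_000, rows)
print(f"{share:.1%}")  # → 11.0%
```

Tracked month over month, that single ratio is enough to show whether your site is drifting towards the 40%+ figures some people report.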
This year, Google has also started taking this data away from their paid search advertisers, which is a Rubicon I didn’t expect them to cross. I always assumed that, because people are paying for this data, they would have access to it. Google has now managed to obfuscate quite a lot of the keyword reports within AdWords that tell you the specific keywords and broadened phrase-matching that returned the click.
That leaves us in a situation where we’re working even more blind on the web than we were before. It becomes incumbent on us to start building our own datasets - and building them as early as possible. A good part of the dataset you need comes from Search Console. The third-party indices out there at the moment are all very good at what they do, but they’re useless for telling you every keyword that’s important to your site. People need to start cataloguing this today.
There are plenty of tools out there that do it, but start backing up and cataloguing 100% of your Google Search Console data today - if that’s the one thing you take away from this. In two, three, or five years, it will be far more important than you would ever imagine it is right now.”
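As a minimal sketch of that ‘catalogue it today’ advice, the loop below appends each day’s query rows to a CSV archive. The fetch step is a stub standing in for whatever actually pulls the data (the Search Console API, a scheduled export, or one of the third-party tools Martin mentions):

```python
import csv
import io

def archive_day(day, rows, out):
    """Append one day's (query, clicks, impressions) rows to a CSV archive."""
    writer = csv.writer(out)
    for query, clicks, impressions in rows:
        writer.writerow([day, query, clicks, impressions])

def fetch_day(day):
    # Stub with made-up rows: replace with a real pull from the
    # Search Console API or an export file.
    return [("example query", 12, 340), ("another query", 3, 90)]

buffer = io.StringIO()  # in practice: open("gsc_archive.csv", "a", newline="")
for day in ["2023-01-01", "2023-01-02"]:
    archive_day(day, fetch_day(day), buffer)

print(buffer.getvalue().splitlines()[0])
```

The point is less the mechanics than the habit: a daily append-only archive is cheap now and, as the quote argues, increasingly irreplaceable later.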
What shouldn’t SEOs be doing in 2023? What’s seductive in terms of time, but ultimately counterproductive?
“I used to do this - we all used to do this - but the algorithm update chasers of the SEO world are akin to the ambulance chasers in the legal world. As an industry, we spend infinitely too much time trying to analyse what happened in the last Google update.
This used to happen once a month, and it was really fun and exciting. It was the ‘Google dance’. If you had a good update, it was great because you knew that you were going to keep that number one spot for at least the next three or four weeks before the next update happened. Crucially, that gave us time to try and have a better stab at why things had gone up and gone down.
Now, you can look at directionality, but you cannot look at individual rankings on individual websites to infer any reality - on a single-unit basis - about what happened in a search engine update. You just can’t do it anymore. There’s too much stuff going on simultaneously for you to draw any conclusions that you can learn and work from. If there’s one thing that the entire SEO industry needs to stop doing, it’s investing so much time, energy, and money into the creation and consumption of content (that fundamentally means nothing) talking about the latest update.
The helpful content update is a great example. We were told a week in advance of it coming out and the entire industry spent the entire week talking about nothing else. Everyone was commentating as to how big it was gonna be - and nothing happened. No one really saw anything. There was a core update 10 days later and everyone suddenly saw big changes. Was it because of the core update? Was it because of the helpful content update? Was it because of something else entirely? There is no way for us to know, and any amount of time that we spend speculating about the reasons why something has happened is entirely wasted.
You could have been spending that time doing what will result in better rankings longer-term. From day one, Google has always had the same objective in mind for the search results: to make them better. The answer has always been to make your website better if you want to have more traffic. Make the content better, make it easier for Google to crawl, and make it easier for Google to understand. Those three things encapsulate top-end content production, technical SEO, internal linking, and entities – there are a million things that go into those three little bullet points.
That’s what people should be spending time on, not trying to reverse-engineer what happened in the latest Google update. It’s tired.”
Martin MacDonald is the CEO at MOG Media and you can find him over at mog.media.
If you’d like to get up close with your favourite SEO experts, these one-to-one interviews might just be for you.
Watch all of our episodes, FREE, on our dedicated SEO in 2023 playlist.
Maybe you are more of a listener than a watcher, or prefer to learn while you commute.
SEO in 2023 is available now via all the usual podcast platforms.
Opt-in to receive email updates.
It's the fastest way to find out more about SEO in 2023.