On November 4th, 2019, Google hosted a webmaster conference for a small group of SEOs and webmasters. The event was emceed by Danny Sullivan (aka Google’s Search Liaison) and was used as an opportunity to share what Google has been working on and to get feedback from people who regularly use Google Search Console (GSC) and optimize their sites for Google Search.
The following is a compilation of the most important topics discussed at the conference.
- Google Search lives in a mobile world
- Structured data isn’t just for rich results
- CrUX is a bigger deal than you think
- De-Duplication signal priorities and the importance of disambiguation
- Paul Haahr provides insights into how Google approaches Search
- Download presentations
Google Search lives in a mobile world
It was stated clearly that the Google product team is primarily focused on mobile search experiences. The focus on mobile was something Danny Sullivan highlighted early on in his opening remarks, and there were no desktop examples in any of the presentation slides.
It’s not new that Google has started to focus more on mobile, but I left the event with the impression that desktop search is now just an afterthought. Very few search experiences have been added to desktop search results in the past few years, and when they are added, they are modified versions of what’s been on mobile results for a significant amount of time.
My takeaway is that it’s time for me to break my decades-old habit of focusing on desktop results. Mobile-first is no longer just an approach to web design; it’s the present and future of search.
Structured data isn’t just for rich results
There wasn’t a lot of new information about what Google is doing with structured data. The presenter gave examples of how they’re using it and recommended the audience visit the Search Gallery to learn more.
I have always contended that Schema.org structured data should be used regardless of whether or not Google has a rich result for it. That stance was confirmed during the presentation. Even if Google doesn’t display rich results for certain Schema.org types, they can still use the structured data to verify and disambiguate the page content.
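As an illustration of that point, a page can describe itself with Schema.org structured data even when no rich result depends on it. The JSON-LD sketch below is hypothetical (the names, dates, and URL are placeholders, not from the presentation); properties like author and datePublished give Google material to verify and disambiguate the page content:

```html
<!-- Hypothetical example: Schema.org JSON-LD for an article page.
     Even without a rich result, properties such as author and
     datePublished can help verify and disambiguate the content. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Takeaways from Google's Webmaster Conference",
  "author": { "@type": "Person", "name": "Jane Example" },
  "datePublished": "2019-11-04",
  "mainEntityOfPage": "https://www.example.com/webmaster-conference-recap"
}
</script>
```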
CrUX is a bigger deal than you think
The Chrome User Experience Report (CrUX) was mentioned several times throughout the event. It was described as a data source that the Google Search team likes because it’s based on real-world data. They implied that they’re using CrUX to make decisions, and there was evidence of that in the new Speed Report in GSC.
To my surprise, the Speed Report uses data from CrUX instead of Lighthouse or PageSpeed Insights. It’s possible they did that because it’s the least resource-intensive way to populate results. However, I think the decision might be shortsighted and flawed. The CrUX results in the Speed Report were wildly different from the PageSpeed Insights and Lighthouse results. Many attendees in the room said they would prefer to have data from PageSpeed Insights and Lighthouse. The team behind the tool is also aware that it’s not perfect, which is why they’ve labeled it as Experimental.
De-Duplication signal priorities and the importance of disambiguation
I enjoyed the de-duplication presentation. It’s the first time I’ve ever heard a Google representative give a detailed overview of how they approach duplicate content. The most interesting part of the presentation to me concerned signals. The canonical page Google picks from a set of duplicates is chosen based on prioritized signals. Their first concern is determining the correct owner and page for the content so they can avoid hijacking. Their second concern is UX: the page they think has a better user experience (performs better, is secure, etc.) will get preference. The signals that get the least amount of priority are redirects, rel=“canonical” hints, and sitemaps.
Knowing that Google gives the least amount of priority to redirects, rel=“canonical” hints, and sitemaps is essential because it means it’s up to the site to do everything it can to disambiguate its pages. That’s especially true when it comes to international pages. It’s not enough to use hreflang and to put pages inside a country-specific folder. Sites need to find a way to disambiguate the content further, such as including the country name in the page title and on the page. Doing that can also help Google understand that the page is relevant for a locale rather than a duplicate page with a false signal.
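The advice above might look like the following in practice. This is a hypothetical sketch (the domain, folder structure, and page title are invented for illustration): hreflang annotations pair the alternate versions, while the visible title reinforces the locale on the page itself.

```html
<!-- Hypothetical example: hreflang annotations for a US and a UK
     version of the same page, placed in country-specific folders. -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/pricing" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/uk/pricing" />

<!-- Reinforce the locale in visible content as well, e.g. the title
     of the US version, so the pages are disambiguated beyond hreflang: -->
<title>Pricing for United States Customers | Example Inc.</title>
```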
Paul Haahr provides insights into how Google approaches Search
The keynote presentation was given by Paul Haahr, Distinguished Engineer for Search. I recommend that you go through his presentation. It provides insight into how Google approaches search and solves problems. Paul provided interesting examples of how they approach understanding synonyms, similes, and non-compositional compounds. In particular, he discussed the problems they encountered and explained why they prefer to identify patterns of failure and to fix them algorithmically, instead of manually.
Except for Paul Haahr, I’ve removed all of the presenters’ names. Google specifically requested that we don’t mention their names.
We encourage you to discuss topics presented at the event with colleagues and other members of the Webmaster community, but please refrain from revealing specific individuals in those discussions.
It’s not 100% clear whether they meant individual discussions with product managers or the presentations, so I’m playing it safe and assuming they meant everything. They said they might end up releasing videos of the presentations, so the Chatham House Rule they invoked may be for naught.
Download presentations (Zip file with PDF files)