Using Surveys to Increase Executive Buy-in

Quantifying subjective metrics like “E-A-T”, “UX” and “Content Quality”

This is the SEO MBA - a newsletter exploring leadership & business skills for SEO professionals. I’m halfway through the second course beta and I just bought a Yeti mic! Next I’m researching cameras and lighting… I think the full course will be finished in 6-8 weeks. Stay tuned!


There’s a business adage that says “you can’t manage what you can’t measure.”

Today, I’m going to reframe it as:

“you can’t manage upwards what you can’t create a measure for.”

Unfortunately, modern SEO requires investing in initiatives that are hard to measure. Things like:

  • Improving a site’s UX

  • Adding E-A-T signals to a site

  • Improving content quality

Almost every site would benefit from investing in these initiatives. But getting buy-in and sign-off to work on them can be difficult and frustrating.

Partly this is because these initiatives tend to be cross-functional, with many stakeholders. But partly it’s because they’re hard to measure. There’s no standard metric for “how good the UX is” or “how good the content is”.

(We’ve talked before about how “make the content better” is lazy thinking…)

To secure buy-in and resources for a large, complex initiative you need to find a way to quantify it. Executives will pay more attention to something that has a concrete measure, and they’ll feel more comfortable investing in improving it.

As SEOs our first instinct is to try to look at the macro view - a big-data analysis or a regression against the sites ranking ahead of us. This isn’t wrong, but it’s not always useful - especially for these kinds of subjective initiatives.

Instead we can use simple surveys to create benchmarks and metrics that executives can easily understand.

Two stories from my own consulting work:

Example #1: The Panda Survey

I was working with a large enterprise organization and we realized there was a key weakness around UX and E-A-T for their high value pages. Subjectively we could see that competitors were providing a better user experience - giving users what they wanted faster, with a more modern site design.

But how to convince the executive team to invest here? After all, the site’s design and page layout hadn’t been touched in years.

We decided to run the Panda Survey [1]. For those who aren’t familiar, it’s basically a replication of the 10 questions Google asks in its quality rater guidelines. Watch this video to learn more about it.

We surveyed a few hundred people and produced this chart:

Getting to this point cost maybe $1,000 to set up and run the survey. But once we presented this data the executives instantly sat up and paid attention. Not only were we presenting a concrete, quantifiable gap against our key competitor - we were also presenting a metric that we can manage. “Better” is no longer an abstract concept.
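Mechanically, producing this kind of benchmark is simple: score each rater-guideline question on an agreement scale for each site and average. A minimal sketch (all question wording, sites, and scores are hypothetical illustrations, not the client’s data):

```python
# Rough sketch of aggregating Panda-style survey responses into a
# per-site quality score. All names and numbers are made up.
from statistics import mean

# 1-5 agreement scores, one list of respondent ratings per question
responses = {
    "our_site": {
        "Would you trust information from this site?": [3, 2, 4, 3],
        "Does this site seem written by experts?": [2, 3, 3, 2],
    },
    "competitor": {
        "Would you trust information from this site?": [4, 5, 4, 4],
        "Does this site seem written by experts?": [4, 4, 5, 4],
    },
}

for site, questions in responses.items():
    # Average each question's ratings, then average across questions
    overall = mean(mean(scores) for scores in questions.values())
    print(f"{site}: {overall:.1f} / 5")
```

The output is exactly the kind of chart-ready number an executive can track quarter over quarter: one score per site, with the per-question breakdown available when someone asks “why.”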

The immediate impact was budget to run this survey quarterly and create an executive dashboard of these metrics over time.

The secondary impact was that once executives were used to looking at this data, projects around UX, E-A-T and more suddenly started to get budget and resources.

Example #2: “Make the content better”

I’ve lost count of the times I’ve tried to convince clients to invest in content quality. Of course - the failure is all mine. You can’t just say “make the content better”, you have to actually understand how to measure content quality.

So that’s what I did recently. While working with a smaller, more nimble company, I was trying to convince the editorial team to invest in improving content quality. We had zero budget (not even $1,000!), but I found a friendly editor and we rewrote a single page.

Importantly, we created a new guiding principle for the content and rewrote the page following that principle. Then, once we had the old and new versions of the page, we ran some user testing with a tiny number of users through the company’s existing UserTesting account (again - we had zero budget!).

And we found this:

Using the new content principles:

  • 80% of participants thought the new page was easier to read

  • 83% of participants thought the new page was more useful

Now, we’re working with a single page and a small number of users but there’s enough here for an executive to sign off on a real project to expand this new approach to content.

Next, I’d look at not only running the survey across more of the client’s pages but also expanding to look at key competitors. Executives hate losing to competitors.

Comparison Is the Key

Notice how both these examples have a comparison. It’s not just creating the metric - it’s showcasing the gap. Either the gap between us and competitors or the gap between before and after.

This comparison is essential to provide context - without it an executive is flying blind trying to understand if the metric is inherently good or bad.

Unfortunately - when you start trying to run surveys inside an organization you might get pushback. Especially from a design/UX team who might already be running user testing and on-site user surveys.

The problem is that these surveys and user testing are designed to drive insights, they’re not designed to drive executive buy-in. And they often lack comparison.

For example, a UX team might be able to give you access to on-page surveys (e.g. Qualaroo-style on-site user surveys), and that’s great for generating insights into user behavior. But they’re not very useful for quantifying the things we care about or for creating a comparison vs competitors.

So you might have to go a little bit rogue. Small user survey studies are relatively cheap and might provide the leverage you need to make the case for increased budget, more robust user surveys etc.

Surveys Aren’t the Whole Story

To be clear, a survey won’t tell you how to improve the metric, and it’s rarely the whole pitch. It’s important to understand the limitations, especially for a survey with a small number of respondents, where you’re surveying random internet people rather than actual users.

And be careful getting executives to buy-in to metrics that you don’t believe in or can’t influence!

But surveys are uniquely understandable by executives. It’s a simple “we asked people and they said…” You might be surprised at how effective it is. Especially for getting the initial buy-in and resources to run a pilot or a proper test.

Have you had success using surveys to drive executive buy-in? Drop me a note, I’d love to hear your stories.

Until next time,

Tom

[1] Following my own advice, calling something “The Panda Survey” in front of executives is a dumb idea and I’m ashamed that I did that. Going forward I’m never going to call this the Panda Survey again in front of an executive. I’m going to call it the Google Quality Raters Survey - QRS for short. Sounds much better.