Money and mouth, together at last: content strategy in action, part I

I’ve talked a lot about content strategy lately. Let’s do some. Here’s a whistle-stop example of drilling into some of our web content using Google Analytics, and asking those questions: what’s it for, and is it any good at it?

Today’s victims are the SQL Compare “case studies”: short articles outlining how SQL Compare has been used by some of our customers to solve their problems. In principle, they’re fantastic. They could show real, complex problems, and solve them in a useful, portable way, at once making a sales case and offering assistance.

I love this type of messily liminal technical/marketing content. You know the deal – the stereotypical technical author writes help to solve a user’s problem with a tool; the stereotypical marketing writer comes up with copy about how the tool solves the user’s business problem. Then there’s the fun middle ground where knowing how to use the tool to solve the business problem is big and complex and needs to be true, but is also the best way to sell. Quite often, this is where bad writing lives. Not least because in a pre-content-strategy culture, getting it right doesn’t look like any one person’s job.

That, or the “here be dragons” factor.

Example

Preaching aside, we’ve got some case studies. They seem like a good idea, and their usage stats look a bit like this: 

I’ve marked up one or two of the obvious questions. Some of this content is either irrelevant or invisible to its intended users. The stuff that gets found has problems. Analytics at this high level don’t give me that much detail, but they signpost content behaving oddly.
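
To make that concrete, here’s a rough sketch of the kind of first pass I mean. It isn’t our actual tooling: the file name, column names, and thresholds are all invented for illustration. It assumes a CSV export of page-level stats (pageviews and average time on page, in seconds) and simply flags anything that looks like it isn’t being found, or isn’t being read.

    # Illustrative only: file name, column names, and thresholds are assumptions.
    import csv

    MIN_PAGEVIEWS = 50       # below this, the page is effectively invisible
    MIN_TIME_ON_PAGE = 60    # seconds; well under the time it takes to read a case study

    def flag_odd_content(path):
        """Return (page, problem) pairs for content whose stats look broken."""
        flagged = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                views = int(row["pageviews"])
                dwell = float(row["avg_time_on_page"])
                if views < MIN_PAGEVIEWS:
                    flagged.append((row["page"], "barely viewed"))
                elif dwell < MIN_TIME_ON_PAGE:
                    flagged.append((row["page"], "viewed, but not read"))
        return flagged

    for page, problem in flag_odd_content("case-study-stats.csv"):
        print(page, "-", problem)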

What’s wrong with this picture?

Well, let’s look at number 10 there. I’m fairly sure that even if we did once know what that page was for, when only eight people have viewed it this year, and for rather less time than it takes to read, something is broken somewhere.

There are a few options:

  • Irrelevant and invisible?
    There’s a quick fix – we could put it out of its misery. If it’s only one of those things, we have less fun: we have to understand it in a fair bit of detail.
  • Irrelevant and visible?
    It shouldn’t have been created in the first place, or should have been modified as soon as that became obvious.
  • Relevant and invisible?
    There are probably architectural and/or SEO issues we can address around visibility. Ideally, this would also have been caught by early curation.
  • It isn’t both relevant and visible. We know that much.
    So is it doing any harm? Right now, perhaps not, but I don’t know for sure, and it could be a missed opportunity.

Those are the obvious culprits, but there are also two more:

  • Relevant, but looks irrelevant
    Readers online are busy or fickle or both, and poorly designed information retains no attention to speak of. Structure, presentation, tone, scannability – all sorts of things play a part.
  • Irrelevant, but looks relevant
    The evil twin, and what might be going on if there were high pageviews, with low time on page and/or high exit rates. Typically this means useful-sounding titles attached to utter twaddle.

Content with those last two problems makes me sad. The first four, we can fix with curation – knowing what things are for, measuring success, and acting on it. The last two need informed creation – people sharing expertise to make sure that information is optimised for the needs of our users and our business. 
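
For what it’s worth, you can sketch that whole taxonomy as a decision rule: take a human judgement on relevance (the “what’s it for?” question, which no analytics package will answer for you), read visibility and engagement off the numbers, and you land in one of the buckets above. Again, the thresholds and wording below are invented for illustration, not a description of anything we actually run.

    # Illustrative decision rule; thresholds and labels are assumptions.
    def diagnose(relevant, pageviews, avg_time_on_page, exit_rate,
                 min_views=50, min_dwell=60, high_exit=0.8):
        """Combine a human call on relevance with analytics signals."""
        visible = pageviews >= min_views
        read = avg_time_on_page >= min_dwell and exit_rate < high_exit

        if not visible:
            return ("relevant but invisible: fix the architecture and SEO" if relevant
                    else "irrelevant and invisible: put it out of its misery")
        if not read:
            return ("relevant, but looks irrelevant: fix structure, presentation, tone" if relevant
                    else "irrelevant, but looks relevant: a useful-sounding title on twaddle")
        return ("doing its job, as far as analytics can tell" if relevant
                else "irrelevant but visible: repurpose it, or retire it")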

The content we’re looking at is a few years old now, and picking on it like this is a bit unfair. But was it used back when it was created? Sadly, I don’t have the data. I hope so. It’s a miserable waste of time and resources otherwise.

So what should we do – fix all the legacy content? That isn’t cost-effective for every page, although we could go for quick wins on pages that are visible. No, what we should do is not let this kind of thing happen in future, by making sure we create in an informed manner, and curate early and often.

More information