Video for user assistance: a few questions

Working on SQL Compare 8, I spent some time thinking about video. It’s a bit different, it’s fun, and it’s a good excuse to watch things on YouTube and call it “research”. As a way of communicating (technically) with our users, it’s also an area that rather interests the Technical Communications team. Surprising, that.

We’re a bookish lot, so apart from watching clips of kittens falling off things, and 80s cartoons, we’ve been looking at learning styles, information design, and all the other stuff that informs how we put together user assistance made of words. Some of it applies to video, some of it probably doesn’t, and some of it leaves us with a few questions about how best to proceed.

After making this (slightly rambling) list, I thought I’d throw it out there and see if anybody else had any thoughts about this stuff. So, these are some of the concepts we think are important in web content design, and some of the questions raised:

Navigation, titles, and signposts
On the web, people want to know what something is, and what it’s related to. They want this fast. We can’t expect anybody to read anything or look at it for more than a second if we haven’t made it easy to find, and told them why they should.

  • People navigate within pages using headings, borders, lines, prominent typefaces, all sorts of things. How do we approach this for video?
  • Titles are particularly important in navigation – they’re often all anybody has to assess the usefulness of content they haven’t read yet. What should video titles be like?

Scannability, units, and chunks

When people do read web pages, they skim in an “F” pattern. They look at titles, then a little explanation, then salient headings, keywords, and leading sentences. Running down a page, people get only a few words into a given line, seven if you’re lucky.

  • The best help imposes the lightest cognitive load on a scanning reader – it makes it easy for users to recognise relevant content, and discard the rest. We’re not sure how that works for video. Is it even possible if you’re using audio?
  • Scannability goes a little beyond signposting; it’s in the minutiae of information design. We have to be able to throw away at clause level: why read the second half of a sentence, if the first half tells you that you don’t need it? Can we structure multimedia to be thrown away?
  • What do we even mean by “chunks” in a video context: An entire demonstration? A step? Some other semantic unit?

We’re a bit puzzled about this one.

Lateral discovery learning

I mean links, mostly. Lots of web users like to explore; they like related information.

  • Strongly cognitive and constructivist learners respond well to a rich, lateral, and highly hypertextual environment. It’s important to support that. Can we?
  • Even once we’ve worked out what meaningful chunks are, we need to find ways of relating them. When and how should videos link out to other content? When and how should they link to whole other videos or to sections of other videos?
  • Can we continue to support multiple information-seeking styles with a structured yet explorable environment?

Kinds of information

This goes beyond bullet points and examples. Structuring and anchoring concepts (“advance organisers”) that frame a context or tell people what to expect are pretty important for reinforcing learning.

  • People can pick out the useful stuff more easily if they know what the whole thing is about. How do we support that?
  • There is often a requirement for more thorough conceptual and overview information. Would this sit within or outside video content?
  • Nobody wants to be reading the help. Nobody wants to leave their workflow. Good design lets me have a question, get an answer, and get back to what I was doing, in a few steps with little stress. Will that still work with video?

Words and pictures living together

This is the sensory channels and working memory bit. Say you’re watching a PowerPoint presentation, and the speaker’s just droning on. They’re basically reading out the slide. But you’ve read it before they’ve finished the first paragraph, right? Why sit through the rest? That may be bad presentation design, but spoken and written words have a balance to strike, and people register them in different ways.

How do image, speech, and text relate to each other, and how can they be best used together to convey information? I was sent an interesting paper on this recently, and it’s something we’ll be talking about at the next Cambridge ISTC discussion group.

There are other issues too – stylistic and terminological consistency, branding, and the like – but they’re far more to do with implementation. The big question, and the one which raises all these other questions, is how to use the technology available to produce the best possible user assistance.

If there’s anything I’ve missed, or if this is an area anybody’s been looking into, it would be great to hear about it.