Words like things

<p>Did anybody go to the Information Design Conference last week? If not, you missed a fascinating presentation by <a href="http://www.infodesign.org.uk/2009-conference/speakers/wong.php">Michèle Wong</a> on semantic, mimetic typography. I’m not really able to do it justice here – it was a dense, detailed presentation on some pretty big ideas. This, unfortunately, is rather more whistle-stop. In fact, when I explained the idea to a colleague, he summed up the concept:</p>

<blockquote>
<p>“What, like those tacky Christmas fonts with snow on top?”</p>
</blockquote>

<p>I don’t quite agree with the dismissiveness, but yes – exactly like that. The idea is to support different learning styles by using typography to provide visual reinforcement: words that look like things.</p>

<p>Michèle’s example used a paragraph of text about the workings of the heart, where phrases like “pulmonary artery” were set in lettering that resembled arteries and veins. At one end of the spectrum, the typeface was essentially conventional, with a slight cartoonishness to it. At the other, it looked like an animated gif with blood squirting out of the letter E.</p>

<p>Yes, it’s a bit silly, but the idea behind it is fascinating. If we look at <a href="http://en.wikipedia.org/wiki/Learning_styles">Kolb’s and Gregorc’s</a> models of learning styles, there are some styles that aren’t necessarily well supported by typical textual information. In particular, learners in Gregorc’s Concrete Random quadrant are ill served by something like a conventional software manual. If elements of the information delivery style are user-adjustable, they can be tweaked when a learner is out of their comfort zone, to provide reinforcement from their preferred style. Which is to say that if you spice up a paragraph of text with typography that represents the subject matter, concrete random learners will be more at home.</p>

<p>That’s the theory, anyway.
There was also an interesting section on the “affordances of the medium”, the idea being that multimedia information often fails to exploit the potential of being more elaborate than flat text. Which is what got me thinking.</p>

<p>One of the biggest affordances of text – of language, really – as a medium is <i><a href="http://en.wikipedia.org/wiki/Differance">Différance</a></i>. Glibly, this is the idea that meaning is not in words, but rather in between concepts. It’s your standard Derrida and Barthes shtick about the continual postponement of meaning, and the absence of transcendent signification. You and I can both say “rock”, but which rock are we thinking of, exactly, and what does it mean that we didn’t say “boulder”? You know the drill.</p>

<p>Spoken and written language allows the building of an elaborate architecture of meanings, suggestions, salient ambiguities, and complexities. As mimetic typography became more elaborate, it would not be so very different to, say, replacing a paragraph of text about the heart with a short video of the heart beating. But a short film of the heart just doesn’t tell you very much, and its meanings can’t be manipulated with any great dexterity.</p>

<p>Language is valuable for creating complex meanings, but the “price” for this, if you like, is the impossibility of perfect signification (it’s both useful and frustrating that “rock” can be <i>any</i> rock). The more concrete we try to be in our figuration, the more specific and restrictive any given signifier becomes, and also the less portable. As a writing system, Kanji, for instance, is fiendishly complicated, and most hieroglyphics aren’t very expressive.</p>

<p>Now, the presentation wasn’t talking about replacing language, rather complementing it. So we don’t have to get into practicalities like graphic design resources, or just how we might convey Larkin’s <i>High Windows</i> using only some polaroids of a stepladder and a sheet of glass.
But I do wonder how much point there is in trying to superimpose the two systems. Given that this is a kind of cake-and-eat-it attempt to layer perfect, concrete figuration over language, is it just going to be counterproductive? It’s not unlike the problems of combining audio and video, in fact. There’s a substantial risk of merely producing cognitive clutter.</p>

<p>I’m not arguing that Michèle’s experiment has no value, and I’d love to see the results. However, I’d caution against abandoning the affordances of one medium in a rush to exploit another. The research I’ve been looking at around video for learning design suggests this pretty clearly: delivery styles and the use of sensory channels should be carefully balanced. The static pictorial has little access to temporality, and the entirely non-textual (or non-verbal) has little access to complex semantics, so where visuals distract from language, or where no language is used at all, I’m sceptical of the capacity to deliver information.</p>