In our opening post on Information 4.0 – a primer for technical communicators, we looked at how and why Information 4.0 is the next big step in how techcomm professionals will document technical information – and also help machines to document and act on this information.
Now guest author and techcomm expert Neil Perlin starts to unpack how Information 4.0 works across four leading content areas: content creation and categorisation, followed by retrieval and delivery, and how technical communicators will have to adapt.
1. Content creation
This seems straightforward: content must be created. But who, or what, will create it? In what form? In how many varieties? Consider how content development has evolved to illustrate some of the issues facing technical communicators…
The appearance of online help in the late 1980s began to shift content from the document or book model toward a chunk or topic model. This became more granular in the early 2000s as authoring tools offered smaller-than-topic-sized chunks: snippets, variables, library items and so on. These reusable bits of content are usually combined to form topics, but under Information 4.0 they might exist on their own.
Information 4.0’s proponents refer to this content chunking as ‘molecular content’, but it can be broken down further into ‘granular content’ because content theoretically can be anything from a single granule to a combination of granules. This is a powerful and flexible approach to content, but a risky one: the number of granules can easily expand into the tens, hundreds or thousands, once-mundane tasks such as file naming become crucial, and the sheer number of granules can overtax your hardware and software.
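One common way to keep file naming under control at that scale is a structured granule ID scheme that machines can validate and sort. A minimal sketch in Python – the scheme and its fields (product, component, content type, sequence number) are illustrative assumptions, not an Information 4.0 standard:

```python
import re

# Hypothetical naming scheme: product.component.content-type.sequence
# e.g. "acme-cam.setup.step.0042" -- the fields are illustrative only.
GRANULE_ID = re.compile(r"^[a-z0-9-]+\.[a-z0-9-]+\.(concept|step|ref)\.\d{4}$")

def make_granule_id(product: str, component: str, ctype: str, seq: int) -> str:
    """Build a predictable granule ID so files can be sorted and audited."""
    gid = f"{product}.{component}.{ctype}.{seq:04d}"
    if not GRANULE_ID.match(gid):
        raise ValueError(f"invalid granule id: {gid}")
    return gid

print(make_granule_id("acme-cam", "setup", "step", 42))
# -> acme-cam.setup.step.0042
```

Because the scheme is machine-checkable, malformed names are rejected at creation time rather than discovered when a retrieval script fails.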
Huge numbers of granules may be needed to handle the different contexts. Human authors may not be able to create the granules quickly enough or keep track of them, so machine-created content may be the logical next step. That leads into artificial intelligence and a potential shift in the role of the technical communicator from content creator to AI rule writer and enforcer/curator.
The content must also be ubiquitous – available wherever users are located on whatever devices they have, ranging from desktop monitors to tablets to smartphones to whatever appears next. This will have two effects. First, the ubiquity must be computer-controlled since human tailoring of the content granules for different devices won’t be fast enough or consistent enough. This means that authors must follow good, syntactically correct programming practices and use authoring tools that do the same. Familiar old tools that don’t do this will have to be abandoned, leading to the expense of buying and learning the new tools, and converting existing content from the old tools to the new ones.
Second, computer-controlled ubiquity will make use of responsive design. But responsive design goes beyond just changing the frame around the content. It can change the layout of the content itself, and even change text dynamically as the screen size shrinks – changing ‘click’ to ‘tap’, for example – using CSS. Knowledge of CSS will therefore become increasingly important.
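As a simple illustration of the CSS point, a media query can swap the interaction verb at small (touch-likely) screen widths by generating it with a pseudo-element rather than hard-coding it in the text. A sketch – the class name and breakpoint are invented for this example:

```css
/* Desktop wording: renders as "Click Save." */
.action-verb::before { content: "Click "; }

/* On narrow screens, likely touch devices, swap the verb to "Tap". */
@media (max-width: 600px) {
  .action-verb::before { content: "Tap "; }
}
```

In the source content, the step would then be written as, say, `<span class="action-verb">Save</span>.`, leaving the verb itself to the stylesheet.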
The content must also fit users’ needs as closely as possible, so there may have to be several versions of each granule, some containing conceptual details and the steps, and others containing just the steps for users who already know the concepts. This means additional content creation, plus the categorisation of the granules to allow the right granules to be used for each user. (We’ll look at how to do this under metadata in Content categorisation below.)
Storing and safeguarding the granules is vital. Authors will have to move from local authoring on their C: drive to version control systems like Subversion or Microsoft TFS. And depending on the number of granules, a version control system may lack the necessary capacity or governance features, requiring a full CMS such as Vasont or Documentum instead. Either case will require authors to learn new software and new workflow methods.
2. Content categorisation
Creating content granules is only half the job; they must also be categorised so that a computer can retrieve and assemble them. This categorisation has three parts:
- Identification of the contexts under which a granule might be called, that is, which granules to display under what conditions. Technical communicators have done ‘contextualisation’ for years when we create context-sensitive help. We’ll now have to expand our skills to new (to us) forms of contextualisation, for example:
- geographical (outdoor or indoor location)
- chronological (date, time, day v night)
- personal (exercise or health status)
- environmental (temperature, humidity, light v dark, radiation levels)
- and others.
Authors will also have to learn how contextualisation ‘detectors’ work in order to understand how they affect the categorisation of the content.
- Identification of the user characteristics that drive content personalisation, that is, which granules to display for which user. This will be based in part on users’ privileges identified on login, if there is a login, plus on users’ search history.
- Categorisation of the granules through the assignment of metadata. This means that authors will have to become familiar with the use of metadata, taxonomies, perhaps ontologies and, in some cases, their creation and management. That will move technical communicators from content creators to metadata users and enforcers, or ‘metators’.
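To make these three parts concrete, here is a minimal sketch of metadata-driven selection in Python. The metadata fields (audience, context) and the granules themselves are invented for illustration; a real system would draw them from a formal taxonomy:

```python
# Each granule carries metadata describing when it applies.
# The field names ("audience", "location") are illustrative assumptions.
granules = [
    {"id": "save.concept", "type": "concept", "audience": "novice"},
    {"id": "save.steps",   "type": "step",    "audience": "any"},
    {"id": "save.outdoor", "type": "note",    "audience": "any",
     "context": {"location": "outdoor"}},
]

def select(granules, audience, context):
    """Return the IDs of granules matching the user's profile and context."""
    out = []
    for g in granules:
        if g["audience"] not in ("any", audience):
            continue  # personalisation: skip content aimed at other users
        ctx = g.get("context", {})
        if all(context.get(k) == v for k, v in ctx.items()):
            out.append(g["id"])  # contextualisation: every condition holds
    return out

# An expert indoors gets just the steps; a novice outdoors gets more.
print(select(granules, "expert", {"location": "indoor"}))  # ['save.steps']
print(select(granules, "novice", {"location": "outdoor"}))
```

The point of the sketch is that once every granule carries consistent metadata, personalisation and contextualisation reduce to simple, automatable matching.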
3. Content retrieval
Technical communicators won’t be the primary movers of content retrieval but we should be involved in several aspects ranging from the technical to the strategic. Authors will have to get more involved in:
- design of the granule retrieval mechanisms – they’ll need to become comfortable discussing technical issues with IT.
- writing and curating of the scripts – those that retrieve the content granules and assemble them into finished information on demand.
- information design – the contexts in which users ask for content may change often and quickly, so designing granules for rapid retrieval is crucial. Liaising with (or becoming) information designers will help ensure that granule structures are effective.
- senior management and discussions at the corporate strategic level – about content-strategic issues, such as whether to put the content on a public-facing server or hide it behind a subscription. Getting involved in strategic discussions means building credibility with senior management.
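As a sketch of what a retrieve-and-assemble script might look like, the following Python fragment fetches requested granules from a repository and orders them into one delivery unit. The in-memory repository and the ordering rule are assumptions for illustration; in practice the repository would be a CMS or version-control query:

```python
# Hypothetical granule repository keyed by ID: (content type, text).
REPO = {
    "save.concept": ("concept", "Saving preserves your work."),
    "save.steps":   ("step", "1. Open the File menu.\n2. Choose Save."),
    "save.note":    ("note", "Note: autosave runs every 5 minutes."),
}

# Assemble concepts before steps before notes -- an invented ordering rule.
ORDER = {"concept": 0, "step": 1, "note": 2}

def assemble(ids):
    """Fetch the requested granules and join them into finished information."""
    found = [REPO[i] for i in ids if i in REPO]
    found.sort(key=lambda g: ORDER.get(g[0], 99))
    return "\n\n".join(text for _, text in found)

print(assemble(["save.note", "save.steps", "save.concept"]))
```

Even in this toy form, the script shows why granule structure matters: assembly is only as good as the metadata and ordering rules the granules carry.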
4. Content delivery
As with content retrieval, technical communicators won’t be the primary movers of content delivery but there are several aspects that we’ll have to consider:
- bandwidth – can you write the granules in ways that minimise the use of network bandwidth?
- battery life – will this be an issue on the users’ mobile devices?
- access issues – such as how to handle users’ requests for content when they have no internet access because they’re out of range of a Wi-Fi signal. That may get authors involved in creating content that can be stored locally on a mobile device and automatically synchronised to a content repository once the user regains internet access.
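One common pattern for that offline case is a local cache plus a queue of pending changes that is flushed when connectivity returns. A minimal sketch in Python – the class and method names are invented for illustration:

```python
class OfflineCache:
    """Store granules locally and sync queued changes when back online."""

    def __init__(self):
        self.local = {}    # granules readable without a connection
        self.pending = []  # granule IDs changed while offline

    def read(self, gid):
        return self.local.get(gid)   # served locally, no network needed

    def write(self, gid, text):
        self.local[gid] = text
        self.pending.append(gid)     # remember to push this change later

    def sync(self, server):
        """Flush queued updates to the central repository."""
        for gid in self.pending:
            server[gid] = self.local[gid]
        self.pending.clear()

cache = OfflineCache()
cache.write("save.steps", "1. Tap Save.")
server = {}
cache.sync(server)       # server now holds the offline edit
```

The user keeps working against the local copy; the sync step is the only part that needs the network, so it can run automatically whenever a connection reappears.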
In the third and final Info 4.0 post, published tomorrow, I will be looking more widely at what all this means for technical communication professionals and the techcomm industry.
Image: (CC) Pixabay
Neil Perlin has 39 years’ experience in technical communication. He is the founder of Hyper/Word Services, which provides training, consulting and development for online formats and tools, and is the author of eight books on computing – his latest, Writing Effective Online Content Project Specifications, was released in January 2018. Neil has been a columnist for STC and IEEE and is a popular conference speaker, recently at TCUK 2015 (keynote) and TCUK 2017. He founded and ran the Bleeding Edge stem at the STC Summit, and was STC’s representative to the W3C from 2002 to 2005. He is a Fellow of the STC. You can contact him at email@example.com or on LinkedIn, Facebook and Twitter (@NeilEric).