Fast Casual Usability Testing, Part One: An Introduction


Sunday, April 16th, 2017

My next couple of posts will offer a step-by-step process for fast, effective, and low-cost usability testing for e-learning applications. The approach I will be outlining is a refinement of the process that we at Candent have followed for many years and have found consistently effective.

This first entry provides a broad-stroke introduction to the guide by setting the context and establishing a few basic assumptions. If you’d prefer to skip right to the particulars of the process itself, head straight to the next blog post, where the rubber really hits the road.

Usability Testing is Important…Sometimes

E-learning generally finds itself in a tricky double-bind with respect to usability testing. On the one hand, because most e-learning courses are experienced only once — “one-and-done,” in most cases — it is all the more critical that the user interface (UI) function transparently and seamlessly for the first-time user. Unlike a productivity application, which users can gradually master over days if not weeks, an e-learning application must present a learning curve that is as close to flat as possible. If learners are expending cognitive effort just figuring out how the activity is supposed to work, it’s unlikely that your real learning objectives are being met.

On the other hand, it is precisely because of this abbreviated life-cycle that e-learning courseware rarely gets the close attention that more durable, higher investment applications receive. Where a customer-facing website, or a tool like, say, a shift-scheduling system, might go through multiple cycles of intensive user-acceptance testing, e-learning solutions rarely receive the time and/or budget for this level of rigorous quality assurance. Usability problems, therefore, often go unfixed and are left to silently undermine the effectiveness of the learning solution as a whole.

For most instructional designers, this is not as dire a problem as it might sound. These dual pressures on e-learning products generally result in an across-the-board simplification of instructional approaches. One might call this process a “dumbing-down,” or, more generously, a “refinement and streamlining” of e-learning interactivity to include only those types of learner engagement which have either been pre-tested and validated or which present little to no UI complexity to be addressed. For the majority of template-based or off-the-shelf e-learning solutions, that is to say, usability testing has either been baked into the product itself or is frankly excessive given the simplicity of the instructional approach. User-testing a Next button and a multiple-choice question would hardly be called “mission critical” by anyone.

So when is usability testing important for e-learning? For us, the answer is pretty straightforward: the need for user-testing can be gauged in direct proportion to the degree of novelty of the solution for the intended audience. If the approach that the solution takes is one that the audience has seen and had success with before, obviously the need for extensive testing is reduced; if it is a brand new strategy, on the other hand — moving a company’s training from desktop to mobile, for instance — then we’re likely going to need multiple rounds of properly administered tests.

Since Candent is in the business of creating custom solutions, it is rare that we do not include at least a couple of rounds of user-testing in our development cycle. Before I get into the particulars of that process, though, it’s important to understand where and how usability testing occurs in the context of the overall project lifespan. Usability testing is really just one technique in the larger toolbox of our iterative, user-centric design approach, and taking it in isolation presents some risks.

Prototyping and Concept-Testing

Before we even get to the stage of actual usability testing, a few critical milestones will have already been reached in our iterative design process. Without this pre-work, there’s not much point in going through even the best testing process since the whole project might have already veered off track.

Well before we get into any usability testing per se, where we ask questions about the transparency and ease of the interaction’s UI, we will have done at least one or two rounds of concept-testing with learners, using rough prototypes — sometimes even just whiteboard sketches or PowerPoint mock-ups — to validate that the basic idea for the activity both resonates with the intended audience and aligns with the learning objectives we have identified.

This work is really the essence of the design process itself: successive attempts to connect what the learner is doing in the course with what we want the learner to do on the job.

Here’s an example of how this plays out in reality. For a training audience of emergency dispatchers, it was deemed important that they be able to distinguish between an analog radio system and a digital one, primarily in order to diagnose possible causes of transmission interference. If the learning objective was that they be able to describe the technical distinction between analog signals and digital ones, then a set of multiple choice questions focused on the science of sound transmission might have been just fine.  However, the desired outcome, as we eventually discovered, was not really a detailed knowledge of radio science, but simply an ability to recognize the audible markers of analog interference (static) versus digital signal degradation (“artifacting” and/or a metallic echo effect).

The interaction we ultimately designed to meet this objective was a rapid-fire, multiple-round, binary-choice activity in which the learner listened to a series of audio clips, made a quick decision as to whether each clip was digital or analog, and then received remedial feedback on their choices at the end of each round. By the end of the activity, the learner had developed a fairly refined sense of those differences through repetitive, corrective practice of the desired skill itself.
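
To make that structure a little more concrete, here is a minimal sketch, in TypeScript, of how a round-based binary-choice activity like this might be modeled. All of the names here (Clip, runRound, and so on) are hypothetical illustrations of the general pattern, not a reproduction of our actual implementation.

// A hedged sketch of a round-based, binary-choice audio activity.
// Names and structure are illustrative only.

type SignalType = "analog" | "digital";

interface Clip {
  audioUrl: string;    // path to the audio sample
  answer: SignalType;  // the correct classification
}

interface RoundResult {
  clip: Clip;
  guess: SignalType;
  correct: boolean;
}

// Run one round: the learner classifies each clip in quick succession,
// then reviews corrective feedback for the whole round at the end.
async function runRound(
  clips: Clip[],
  askLearner: (clip: Clip) => Promise<SignalType>,     // UI hook: play clip, capture choice
  showFeedback: (results: RoundResult[]) => Promise<void>
): Promise<RoundResult[]> {
  const results: RoundResult[] = [];
  for (const clip of clips) {
    const guess = await askLearner(clip);
    results.push({ clip, guess, correct: guess === clip.answer });
  }
  // Feedback is deferred to the end of the round so the
  // rapid-fire pacing isn't interrupted after every clip.
  await showFeedback(results);
  return results;
}

The key design point is that the remedial feedback arrives between rounds rather than after each choice, preserving the quick, repetitive rhythm that builds the discrimination skill.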

Had we defined the objective incorrectly at the outset, and gone down the path of trying to teach the technological process by which sound is digitized, we would have ended up wasting a lot of time and resources on an ill-conceived and ineffective outcome. And to be completely honest, we did go down that path for a short time. We built not one, but two rough prototypes designed to illustrate the digital/analog distinction from a technical perspective. They were met with confusion and apprehension among users during our early concept-tests, not because of any specific usability problem, but because the whole approach was based on a misconstrual of the underlying learning objective. Both were scrapped well before usability itself became the primary target of our testing.

What Usability Testing Is and What It Isn’t

All this ground-clearing has been intended to reinforce two main points: 1) usability testing is part of an iterative design process that 2) takes place after a round or two of rapid-prototyping/concept-testing/objective-refinement has already occurred. Once you’ve got a strong interactive concept — one that your learners have validated as both meaningful and relevant to their job — you’re ready for the next round: functional prototyping.

As an erstwhile developer, I will admit to some lingering discomfort over the very concept of a “functional prototype.” The idea, according to iterative design orthodoxy, is that you put in only as much programming effort as is needed to test the viability of the prototype with users. Devoting any more time to a design that may change dramatically or be abandoned completely, the argument goes, makes no sense. However, a programmer will respond that if you’ve built a prototype that is feature-complete enough to go through real, rigorous usability testing, your job as a developer is 99% done.

Over the many years I spent as a developer embedded in this process, I evolved my own set of strategies to square this particular circle. Reusable component sets, flexible layout frameworks, and highly editable, externalized styling approaches made the challenge of creating functional, but changeable prototypes much less daunting. These days, HTML5 component frameworks like Bootstrap and Angular Material, along with CSS languages like Sass and Less, do most of the same things out-of-the-box that my hand-rolled solutions did back in the heyday of Flash. There is also an expanding list of rapid prototyping tools that, if you’re willing to surmount the learning curve, can make the whole process much easier, even obviating the need for a developer at all.
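
To give a feel for what “externalized styling” buys you in a changeable prototype, here is a small sketch, again in TypeScript, using CSS custom properties. The theme names and values are my own illustrative assumptions, not any particular framework’s API; the point is simply that the prototype’s skin lives in a swappable data object rather than in the component code.

// A hypothetical illustration of externalizing a prototype's "skin":
// components read visual values from a theme object, so the look & feel
// can be swapped between test rounds without touching interaction logic.

interface Theme {
  primaryColor: string;
  fontFamily: string;
  buttonRadius: string;
}

const draftTheme: Theme = {
  primaryColor: "#888888",
  fontFamily: "sans-serif",
  buttonRadius: "0",
};

const brandedTheme: Theme = {
  primaryColor: "#0055a5",
  fontFamily: "'Open Sans', sans-serif",
  buttonRadius: "6px",
};

// Apply a theme via CSS custom properties; any stylesheet that
// references var(--primary-color), etc., restyles itself instantly.
function applyTheme(theme: Theme): void {
  const root = document.documentElement.style;
  root.setProperty("--primary-color", theme.primaryColor);
  root.setProperty("--font-family", theme.fontFamily);
  root.setProperty("--button-radius", theme.buttonRadius);
}

// Swap skins between usability-test sessions with one call:
applyTheme(draftTheme);     // early rounds: deliberately unfinished look
// applyTheme(brandedTheme); // later rounds: closer to the final skin

This is essentially what the hand-rolled Flash-era solutions I mention above were doing, and what Sass variables or a component framework’s theming layer give you out-of-the-box today.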

The functional prototype, properly defined, is what you finally take into the usability testing phase. It is feature-complete enough to allow users to interact with every element that will end up in the final product. The activity you test may not have finalized content, and it may not have a finalized look & feel, or “skin”; these aspects must, however, be close enough to completion that they neither distract learners nor interfere with the goals of the tests. Guidelines like these are difficult to define any more precisely, but they really don’t need to be: a certain amount of messiness and ambiguity is intrinsic to the iterative design process itself. Once you’ve got something you feel is ready for usability testing, go with your gut. One of the benefits of the approach is that you quickly get better at it the more you do it: even if you discover your gut isn’t totally to be trusted at first, you’ll find that your sensitivities and intuitions become more refined and accurate every time you go through the process.

Ultimately, this is the primary benefit of a fast casual approach to usability testing. Low-cost, low-impact, and low-effort, it allows you to hone those personal design skills as you improve your training products, all without serious consequences from the inevitable failures and missteps along the way. Next time, I’ll offer a more practical, implementable guide to the fast casual approach itself.