Move people forward – new research techniques improve navigation

We’ve been applying some new online research techniques to the evaluation of both proposed and existing information architectures on our clients’ websites. These techniques focus on the words and labels that drive visitors through their top tasks. In this article, you’ll learn to combine techniques and tools to provide both qualitative and quantitative insights to improve your information architecture.

“The primary purpose of web navigation is to help people to move forward. It is not to tell them where they have been, or where they could have gone.” – Gerry McGovern

Results from these types of studies will help you identify which words or labels are moving people forward, and which are hindering progress. Your studies can compare different information architecture models and performance across groups of people. Aggregating metrics of success, speed, paths taken and confidence will validate what is working well and identify barriers or gaps in your designs.
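Once results are in tabular form, these aggregates are straightforward to compute. Here is a minimal Python sketch; the task names, field names and numbers are hypothetical, not from any real study:

```python
from statistics import mean

# Hypothetical tree-test results: one record per participant per task.
results = [
    {"task": "Renew permit", "success": True,  "seconds": 22, "confidence": 6},
    {"task": "Renew permit", "success": False, "seconds": 41, "confidence": 3},
    {"task": "Find fees",    "success": True,  "seconds": 15, "confidence": 7},
    {"task": "Find fees",    "success": True,  "seconds": 18, "confidence": 5},
]

def summarize(records):
    """Aggregate success rate, average speed and average confidence per task."""
    by_task = {}
    for r in records:
        by_task.setdefault(r["task"], []).append(r)
    summary = {}
    for task, recs in by_task.items():
        summary[task] = {
            "success_rate": mean(1 if r["success"] else 0 for r in recs),
            "avg_seconds": mean(r["seconds"] for r in recs),
            "avg_confidence": mean(r["confidence"] for r in recs),
        }
    return summary
```

Comparing these per-task summaries across architecture variants or participant groups is what surfaces the barriers and gaps.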

Choose a technique to improve navigation
Let’s assume you have already identified the top tasks on your site, perhaps through a task-voting project or by triangulating on them using a variety of data from your site. When developing a site that supports users’ top tasks, many clients first develop an information architecture. These tools and techniques allow very early testing of that architecture and its navigation links against the top tasks, well before any visual design or page templates exist. Here are three ways you could test the top-task navigation:

1. Unmoderated test: Conduct quick and iterative online tests to produce quantitative data on success rates and overall speed from a large pool of visitors.
Try a test yourself in a quick 3-task demo.

2. Moderated test: Conduct a remote screen-sharing study with a small group of visitors, using wireframes (mock-ups of page designs) or an early prototype, or even just the navigation hierarchy, to collect success data along with qualitative insights gained by probing for the reasons behind each participant’s choices.

3. Blended approach: Combine the tests above to develop a rich picture of the interaction between top tasks and the website design, iterate the design and test again.

Which words and labels move people forward?

By launching an unmoderated test, you can quickly find out whether visitors are being misled by particular headings or getting lost among similar sets of links. One tool we’ve used recently is TreeJack, by Optimal Workshop, which tests the phrasing and organization of navigation links and menus. We enter the top tasks and the tree structure of the architecture, and assign the ‘correct’ answer for each task. Site visitors are invited to spend 5-10 minutes on an exercise to improve the site. We find it most effective to have each participant perform a random subset (5 to 10) of a larger set of tasks, which keeps sessions short while ensuring complete coverage of all top tasks.
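The random-subset assignment can be sketched in a few lines. This is an illustrative Python sketch, not part of any tool; the pool size and session size are invented:

```python
import random

# Hypothetical task pool; in practice these would be your site's top tasks.
task_pool = [f"task-{i}" for i in range(1, 21)]

def assign_tasks(pool, per_participant, participants, seed=42):
    """Give each participant a random subset of the pool; with enough
    participants, every task in the pool gets covered many times over."""
    rng = random.Random(seed)
    return [rng.sample(pool, per_participant) for _ in range(participants)]

plan = assign_tasks(task_pool, per_participant=5, participants=40)
```

With 40 participants doing 5 of 20 tasks each, each task is attempted roughly 10 times on average, enough to see patterns per task without overloading any one session.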

Example of the TreeJack task process



Moderated sessions provide the ‘Why’
In parallel with the collection of the tree data, you can hold moderated sessions – online of course, using remote screen sharing. In a recent study, we had the participants perform a set of TreeJack tasks using a “think aloud” protocol. That is, they talked about their choices as they made them. Then the participants performed a set of tasks (from the same pool of TreeJack tasks) but using wireframes that looked more like a home page, with links brought forward under headings. After each task, participants were asked to rate their confidence that the last link they clicked would ultimately take them to the task solution. Probing on these confidence questions provided insights into participants’ behaviours – why they chose one category and not another. These insights, together with the quantitative data, provided the necessary background to inform and recommend design changes.

Example of a Wireframe Mock-Up for Moderated Sessions



Results identify poorly-differentiated and misleading links
People move forward when links are clear and easy to differentiate from other links. By testing your current information architecture, you can identify poorly-differentiated categories at the lower levels. In a recent test, there were three categories under a particular heading, only one of which held the answer for the task. The data showed that all of the participants successfully reached the correct heading, but then spread their answers across the three categories: half of them selected a wrong category. The testing enabled us to identify where visitors’ success rate was low, and where the web team could focus their efforts for high impact. The team is currently redesigning those three categories to clarify and differentiate them.

Example of Results for Poorly-Differentiated Categories


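A split like this can be quantified directly from the selection data. A minimal sketch with invented category names and counts:

```python
from collections import Counter

# Hypothetical selections for one task: which sibling category each
# participant chose under the (correctly reached) heading.
choices = ["Permits", "Licences", "Permits", "Applications",
           "Licences", "Applications", "Permits", "Licences"]

def category_spread(choices, correct):
    """Share of selections per sibling category, plus the miss rate."""
    counts = Counter(choices)
    total = sum(counts.values())
    shares = {cat: n / total for cat, n in counts.items()}
    return shares, 1 - counts[correct] / total

shares, miss_rate = category_spread(choices, "Permits")
```

A near-even spread across siblings, as in this example, is the signature of poorly-differentiated labels: participants are effectively guessing among them.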

The test may also reveal that top level headings aren’t clear to visitors. The data may show that many try a particular path, only to return and try another. The category labels themselves can be misleading – we’ve seen a top task in which over 80% of participants selected a completely different category than the ‘correct’ category as designated by the design team. Insights from the moderated session have helped the team understand the issues underlying the poor fit, and they will be moving the task content out of that category.
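That try-and-return behaviour can be detected from recorded click paths. A sketch, assuming each result includes the ordered list of nodes a participant visited (the paths below are invented):

```python
# Hypothetical click paths: a repeated node means the participant
# backed up and tried another branch.
paths = [
    ["Home", "Services", "Permits"],                   # direct success
    ["Home", "About", "Home", "Services", "Permits"],  # backtracked
    ["Home", "About", "News"],                         # failed elsewhere
]

def is_backtracked(path):
    """A path with any revisited node involved backtracking."""
    return len(set(path)) < len(path)

def backtrack_rate(paths):
    return sum(is_backtracked(p) for p in paths) / len(paths)
```

Separating direct from backtracked successes matters: a task people complete only after backtracking still points to a heading that isn’t clear on first read.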

Improving top task success with information architecture testing
Moving people forward is about improving the success rate on top tasks. The methods we’ve described can be used during the design process to compare an existing architecture to a new design – not to show that one is better than the other, but to learn from both. We recommend incorporating the learning into a revised design, and then testing again to ensure the weaknesses are resolved.

Finally, don’t feel constrained by the online tools. Many are flexible enough to be adapted into the tool you need. In one case, we adapted an online survey tool to mimic a tree test that allowed multiple selections within the categories. We’ve also mocked up wireframes to run multi-click tests when one-click options like Chalkmark, or video tools, didn’t meet a particular research need. We would love to help you test your interaction design and information architecture. Contact us at 613-271-3001 or email us.

Related book and articles


Hear Gerry McGovern’s recent discussion with John Blackmore of IBM on how they tripled sales leads; now available in video, slide and text formats.


Quote of the month

“Using more than one user research method during a project provides a more complete and reliable picture of the entire user experience.” – Beyond the Usability Lab





© 2012-2017 Neo Insight. All Rights Reserved