As a designer in search of the perfect user experience, I’m used to employing quantitative measures to capture user patterns and preferences. Information about popular links, click-through rates, average time spent on particular pages, traffic sources, and browser specs (usually collected by Google Analytics) gives me a sense of what our users are doing and how they do it.
Last week, I got a chance to journey beyond my computer screen and help lead a focus group testing a project in development, the Peer Advising System (PAS). After some brief instructions, our potential users were let loose to explore the application. And I got to watch.
Although we tested only about 20 users, I collected a truckload of useful feedback. Watching someone interact with our program for the first time let me see firsthand how they approached it, where we could have provided more instruction, and the overall mood of the user experience.
Most of all, I enjoyed talking with the group of testers afterward. PAS is designed to help military personnel identify a friend who may be exhibiting symptoms linked to Post-Traumatic Stress Disorder (PTSD), and many of our users related stories of friends, cousins, and fathers who needed help but never got it.
It was touching to remember the purpose behind our project and to get affirmation that our application would help soldiers and marines stay healthy and safe.
We collected important quantitative data during the week too, but as Jocelyn Wyatt reminds us in an article for GOOD:
When evaluating the effectiveness of a program, quantitative data alone does not convey enough meaning, and typically leaves us with many questions. Numbers are, of course, necessary, but shouldn’t be relied on alone. Statistics should be complemented by deep stories of the impacts on an individual, family, or a community, and we should spend as much time thinking about how to effectively craft these stories as we do focusing on how to present the numbers.
There is something comforting about number-centric, black-and-white metrics. But that’s not the whole picture.
Evaluating a theory, application, or product effectively means connecting people’s unique perspectives and individual experiences with the data you collect about them.