We started with a test of the Search Resources tabbed interface, which we found is the primary reason that users visit the Libraries homepage. Four librarians interviewed ten undergraduates and ten graduate students and proposed a number of fairly simple fixes that we hope will improve the overall design. Two changes are coming soon: eliminating the Google Scholar tab and adding text to the search boxes to help users understand what they're actually searching.
Math and Physics Librarian Michael Peper and I then headed to the Bryan Center (Duke's student center) to poll users about their experience with the Research by Subject page, which Michael and I helped redesign this past summer. This study was significantly simpler than the Search Resources test -- we wrote the script, conducted the tests, analyzed our data, wrote our report and shared our results in about three weeks. Changes that resulted? Not many (our work with students affirmed that the page works fairly well) -- we added a chat widget and rewrote text to clarify the purpose of the search box.
Next up, SILS field experience student Alice Whiteside and I tested the LibGuides interface early this semester -- you may recall that I blogged about our work last month.
Finally, Head of Reference Jean Ferguson, Divinity Librarian Luba Zakharov and I decided to follow up on our Search Resources tabbed interface study by exploring further how users interact with the tab labeled "Articles" on the Libraries homepage. We observed 18 students and staff use the Articles search tool to find resources on topics of their choice and are in the process of analyzing our data. Stay tuned for the results.
So what have we learned from all of this testing and analyzing? Here are my top five lessons learned:
- Usability tests take considerable time and effort -- each requires a lead person who is committed to making the test happen in a timely fashion.
- Specialized software (we use Morae at Duke) is not always necessary -- a test can be conducted with nothing more than a laptop for participants and a laptop for taking notes (candy for participants doesn't hurt!).
- Guerrilla tests seem to work best at Duke -- we go to high-traffic areas (the student center, Perkins Library's coffee shop) and ask students to chat with us for a few minutes (a good reason to limit tests to 5-10 minutes). Scheduling tests ahead of time adds another layer of complexity -- and a significant amount of time and effort.
- Scripting is the most important part of the entire process. If you're not working with a script that truly gets at the questions you wish to ask, your test will not give you the results you want (seems obvious, but we've struck out with more than one test question...). Have several people take a look at the script, and schedule a pilot before actually beginning your testing.
- Share your results! Don't get bogged down in writing long, cumbersome reports that few will read. Write up your results in a couple of pages, append your test instrument, and get it out there so that you can begin to make changes to your interfaces and webpages. There's no point in conducting these tests if you're not going to use what you learned.