Friday, December 14, 2007
* What types of things are you typically trying to find?
* How would you go about doing exhaustive research on a topic within your area of expertise?
* How does this searching fit into your larger goal (e.g., writing your dissertation, writing a paper, teaching a class, putting together a presentation)?
* What are some sample searches you've done recently?
I sent these questions to the participants of the study ahead of time, along with the URL for the draft of the portal so they could check it out, and scheduled a "brainstorming" conference call with six of the participants to jump-start the assessment.
In the next blog post, I'll talk about whether these questions elicited the responses we were looking for.
Friday, December 07, 2007
Tuesday, August 14, 2007
For the brainstorming session, the participants will be asked to write as many stories as they can. We won't ask them to prioritize the stories while they are generating them; we'll ask them to do that later. The idea is to move from basic functionalities to high-level interactions with the portal: the participants brainstorm the things they may want to do at various points while using it. The goal is not to identify actual display fields, but to map conceptual flows and build the prototype up iteratively during the brainstorming session.
We will start with the main screen of the portal and ask the participants what they would like to do from there. They start throwing out ideas about what actions they'd like to take next. Each action is a new story (e.g., the user would like to search by a historical era, like the Depression era). Ideally, we move from story to related story. For example: the user would like to search for images → the user would like to search for images from the Depression era → the user would like to store the images in her account → the user would like to edit the images in her account → the user would like to annotate the images in her account → the user would like to share the annotated images with a colleague. The facilitator initiates a discussion of the details of each story to pass on to the developers.
As we walk through the prototype of the portal, we ask questions that help identify missing stories, such as (Cohn, Mike. User Stories Applied. Boston, MA: Addison-Wesley, 2004):
• What would the user (you) want to do next?
• What mistakes could the user make here?
• What could confuse the user at this point?
• What additional information could the user need?
The moderator keeps a list of issues to come back to and pays attention to which participant is contributing each story. If the participants include examples for acceptance testing, we include that information to pass on to the developers.
During the brainstorming session, the focus is on quantity rather than quality. There is no debate about the value of each story, so as not to discourage the participants from contributing. Open-ended questions are used to facilitate responses, such as "Tell me about how you'd like to search for an image."
Prioritizing the Stories:
If we have time within the allotted hour, we ask the users to prioritize the stories. If we run out of time, the SWG and I will ask the users to prioritize the stories after the brainstorming session. Cohn's book recommends using the MoSCoW rules:
* Must have - fundamental to the portal
* Should have - important, but we can use a workaround
* Could have - leave out if we run out of time
* Won't have this time - desired, but for a later iteration/grant
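As a rough illustration of how the MoSCoW buckets could be used to organize the output of a session, here is a minimal Python sketch. The story titles here are hypothetical examples, not items from our actual list:

```python
# A minimal sketch of grouping user stories into MoSCoW buckets.
# Story titles and their priorities below are hypothetical examples.
from collections import defaultdict

MOSCOW_ORDER = ["Must have", "Should have", "Could have", "Won't have this time"]

stories = [
    ("Search images by historical era", "Must have"),
    ("Annotate images in an account", "Should have"),
    ("Share annotated images with a colleague", "Could have"),
    ("Export annotations for offline use", "Won't have this time"),
]

def group_by_priority(stories):
    """Group (title, priority) pairs into the four MoSCoW buckets."""
    buckets = defaultdict(list)
    for title, priority in stories:
        buckets[priority].append(title)
    # Return buckets in MoSCoW order so "Must have" items come first.
    return [(p, buckets[p]) for p in MOSCOW_ORDER]

for priority, titles in group_by_priority(stories):
    print(f"{priority}: {titles}")
```

Keeping the buckets in a fixed order means the "Must have" stories always surface first when the list goes to the developers.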
Then Chick, Tom, the SWG, the TWG, and the MWG sort the stories via conference call, email, and the wiki, discussing their technical aspects. Can they be implemented with the software/algorithms/metadata we're using? Which stories impact other stories? Which stories apply to a broad base of users, and which apply to a small number? How long will each story take to implement? What are the nonfunctional needs of each story (like performance)? Which stories are too small and can be combined? Which stories need to be combined because of dependencies? For the actual deliverable to the developers, I need to ask Tom and Chick what they need so that the stories and their details are useful to them.
Every two weeks we ask the participants to test the developments on the portal and give feedback to the developers (more on this later this week).
Thursday, August 09, 2007
Since Chick and Tom are going to be developing the portal using the agile development method, I thought it would be interesting to experiment with incorporating user stories into the assessment of the portal's development. The advantage of using user stories, as opposed to use cases, is that they allow for a more iterative and flexible process.
The question I've been asked most so far is how user stories differ from use cases. A user story is smaller in scope than a use case and does not include as much detail. A user story should be simple, clear, and brief – hopefully one sentence. The detail about functionality comes from conversations about the user story, so there's a large verbal component to them.
The purpose of the user stories approach is to get user feedback right away so that it drives development, and to get user feedback on work-flow rather than display labels so that you tailor the system for them. I’ve been using “User Stories Applied: For Agile Software Development”, by Mike Cohn, to help plan the assessment.
- A user can do a basic search that searches for a word or phrase in both the author and title fields.
- A user can establish an account for storing digital objects.
- A user can edit the information stored in his account.
- A user can see what books are recommended by vetted peers on a variety of topics.
- A user can write a review of a book. She can preview the review before submitting it. The book does not have to be part of the Aquifer collections.
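To make the first story above concrete, here is one way a basic author/title search might be expressed as a SOLR-style query string. This is only a sketch: the field names `author` and `title` are assumptions for illustration, not our actual schema.

```python
def basic_search_query(phrase: str) -> str:
    """Build a SOLR-style query that looks for a word or phrase in
    both the author and title fields (field names are hypothetical)."""
    # Quote the phrase so multi-word input is treated as a unit.
    quoted = f'"{phrase}"'
    return f"author:{quoted} OR title:{quoted}"

print(basic_search_query("Dorothea Lange"))
# author:"Dorothea Lange" OR title:"Dorothea Lange"
```

The point of the user story is only the one-sentence behavior; details like whether the phrase is quoted or stemmed come out in conversation with the developers.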
…and some summaries of what user stories are:
Thursday, July 19, 2007
Our next step, and one the SWG began this week, is to draft a list of high-level display labels for the fielded search limits on an "Advanced Search" so that Chick can begin developing advanced searching capabilities. We will begin by focusing on the facet categories/fields (e.g., Subject, Date, Title): which to include, how to label/order them, etc. We plan on spending time on the facet values (the actual string in each field) once we begin getting feedback from the rapid prototyping assessment of the portal.
Between these two methods (using XPath notations and fielded search limits), we are describing how to build the indexes and how to search them. We are trying to achieve a level of indexing transparency to let the contributors know how we will index their collections with the MODS they provide. As David Reynolds puts it, "most of the guidelines would not result in an index, but rather, they would be used by data providers to check the quality and completeness of their MODS records against our standard. For example, the @encoding in the element mods/originInfo/dateIssued would not be used to create an index, but it would be a test of how well the date in question would conform to most of the records in Aquifer. Recommendations such as using @type on
In our case, this means that XSLT will transform the MODS record into a SOLR schema for inputting documents into an index (mapping MODS to how SOLR wants to look at things).
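Our actual transform is written in XSLT; as a rough Python sketch of the same mapping idea, the snippet below pulls a couple of fields out of a MODS record and emits a SOLR input `<doc>`. The MODS paths are real, but the SOLR field names (`title_t`, `date_t`) are simplified assumptions, not our schema:

```python
# A rough sketch of mapping a MODS record to a SOLR input document.
# The portal uses XSLT for this; this Python version only illustrates
# the mapping idea. SOLR field names ("title_t", "date_t") are hypothetical.
import xml.etree.ElementTree as ET

MODS_NS = {"m": "http://www.loc.gov/mods/v3"}

def mods_to_solr_doc(mods_xml: str) -> str:
    """Extract a few MODS fields and emit a SOLR <doc> element."""
    mods = ET.fromstring(mods_xml)
    doc = ET.Element("doc")
    # Map MODS paths to SOLR field names (simplified).
    mapping = {
        "m:titleInfo/m:title": "title_t",
        "m:originInfo/m:dateIssued": "date_t",
    }
    for path, solr_field in mapping.items():
        for el in mods.findall(path, MODS_NS):
            field = ET.SubElement(doc, "field", name=solr_field)
            field.text = el.text
    return ET.tostring(doc, encoding="unicode")

sample = """<mods xmlns="http://www.loc.gov/mods/v3">
  <titleInfo><title>Migrant Mother</title></titleInfo>
  <originInfo><dateIssued encoding="w3cdtf">1936</dateIssued></originInfo>
</mods>"""
print(mods_to_solr_doc(sample))
```

The mapping table is the part the indexing guidelines describe: it tells contributors exactly which MODS elements end up in which searchable index.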
Thanks to David Reynolds for taking the time to review and correct my first attempt at XPath syntax.
Thanks to Steve Toub for clarifying the difference between Facets and Facet Values.