Knowledge base content creation

Grand Valley State University knowledge base 

Project goals: 

Create content for a self-service help center

Skills applied: 

Qualitative data analysis, content strategy, content creation

Tools used:

LibApps, LibAnalytics, LibraryH3lp, & Google Analytics

Duration + type:

10 weeks, Summer 2015; internship

 

The problem:

While creating user personas, librarians at GVSU noticed that there was a need for self-service help for library users who might not otherwise seek help. They decided to build a knowledge base to serve these students. The task of content creation involved data gathering and analysis, creation of governance documents, and, of course, the creation of the content itself.

The results:

I finished the project with a full array of knowledge base entries based on user data, and a set of governance documents for the future maintenance of all knowledge base content.

The library:

Grand Valley State University Logo overlay on the knowledge base page

Grand Valley State University is a public university in western Michigan. Its library system is nationally recognized and won the 2014 State Librarian's Excellence Award for superior customer service. Most students at GVSU are split between the two main campuses in Allendale and Grand Rapids; the remainder are distance learners served by smaller satellite campuses. While at GVSU, my mentors for this project were Kristin Meyer and Matthew Reidsma.

 

Data collection:

My data collection included the GVSU libraries' desk transactions, instant messaging transcripts, email interactions, and reference interactions from April 15, 2014, to May 5, 2015, a span that includes three major exam periods.

The data was collected to guide decisions about what to include in the knowledge base: essentially, to narrow down which questions were common enough to deserve a spot in the KB. Previously identified "common questions" (e.g., printing, course reserve assistance, study room reservations) were set aside for immediate inclusion. I then sorted through the remaining data to surface high-frequency questions that had been missed. The full data set has been published on my mentor's website.
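The sorting step above can be sketched in code. This is only an illustration: the real transaction exports came from LibAnalytics and their exact format isn't described here, so the column names, sample rows, and threshold below are invented for the example.

```python
# Sketch of the frequency analysis used to flag KB candidates.
# The CSV layout and topic labels are assumptions, not the real export.
import csv
from collections import Counter
from io import StringIO

# Stand-in for a desk-transaction export (hypothetical format).
SAMPLE_CSV = """date,channel,topic
2014-09-10,desk,printing
2014-09-10,chat,course reserves
2014-09-11,email,study rooms
2014-09-11,desk,printing
2014-09-12,chat,citation help
2014-09-12,desk,printing
"""

# Topics already slated for immediate inclusion are set aside.
ALREADY_INCLUDED = {"printing", "course reserves", "study rooms"}
THRESHOLD = 1  # minimum count to flag a topic (tunable)

def candidate_topics(csv_text, already_included, threshold):
    """Count question topics and return newly flagged high-frequency ones."""
    counts = Counter(row["topic"] for row in csv.DictReader(StringIO(csv_text)))
    return {t: n for t, n in counts.items()
            if n >= threshold and t not in already_included}

print(candidate_topics(SAMPLE_CSV, ALREADY_INCLUDED, THRESHOLD))
```

With this toy data, "citation help" surfaces as a new candidate while topics already earmarked for the KB are filtered out.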

 

Content Strategy:

Once I had analyzed the data, I needed to set up rules governing the content I would create from it. The governance document covered the following areas:

  • Inclusion of content: initial creation of the KB and subsequent additions
  • Maintenance: new data analyses, review of current KB content, and criteria for updating or deleting entries
  • Voice and tone guidelines
  • Style decisions

Categorization of data:

Final design of the three categories: "how to find", "need some help?", and "using the library"

We aimed to make the content accessible both by browsing and by searching. To support browsing, we had to categorize the data so that visitors weren't faced with an overwhelming number of categories on the home page. Using an affinity-wall style of categorization, I narrowed the content down to three groups: "How to find", "Need some help?", and "Using the library".

Creating the content:

To begin formulating the content, I needed answers to the high-frequency user questions. I searched the desk interactions, spoke with faculty and staff, and combed the GVSU and library websites to find the correct answers.

Each entry was associated with a handful of tags to make the content searchable in the KB. Word choice was guided by user phrasing in desk interactions, by searches performed through Summon (the library's e-resource search), and by the results of a series of cognitive mapping tests run by Kristin Meyer and other GVSU librarians.
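The value of user-phrased tags can be shown with a toy search. This is purely illustrative: the KB platform's actual search is internal to the product, and the entries, tags, and matching rule below are invented for the example.

```python
# Minimal sketch (assumed data): tags drawn from user phrasing let an
# entry match queries whose wording differs from the entry title.
KB = [
    {"title": "Reserving a study room",
     "tags": {"study room", "group room", "book a room", "reserve"}},
    {"title": "Printing in the library",
     "tags": {"print", "printing", "printer", "copies"}},
]

def search(query, kb):
    """Return titles of entries whose tag words overlap the query words."""
    words = set(query.lower().split())
    return [entry["title"] for entry in kb
            if words & {w for tag in entry["tags"] for w in tag.split()}]

print(search("how do I book a group room", KB))
```

Because "book" and "group room" appear as tags, the study-room entry matches even though none of those words appear in its title.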

 

As my time at GVSU ended, I suggested adding a form to the desk-tracking procedures already in place at the various libraries as a way to test the content. The form would track whether the knowledge base content was effectively answering the kinds of questions library users ask.