November Update: Library Assessment Conference Debrief

April Cunningham and Carolyn Radcliff at Library Assessment Conference 2016

We were honored to sponsor the 2016 Library Assessment Conference (LAC), October 31–November 2. As sponsors we gave a lunchtime talk about the test, and we also attended the conference. Although Carolyn has been to this conference several times, most often presenting about the Standardized Assessment of Information Literacy Skills (SAILS), this was April’s first time attending LAC. The conference is a wonderful opportunity to gather with librarians from around the country and, increasingly, from around the world to learn about assessment methods and results that we can apply in our own settings. It was also a rich environment for conversations about the value of assessment data and what makes assessments meaningful.

Here are a few of the findings that stuck with us:

  • Representatives from ACRL’s Assessment in Action program shared the results of their interviews with leaders from throughout higher education, including the Lumina Foundation, Achieving the Dream, and the Association of American Colleges and Universities. They learned from those conversations that, as a profession, academic librarians already have strong data about how we affect students’ learning and which models have the most impact. The higher education leaders advised ACRL to encourage deans, directors, and frontline librarians to make better use of the data we already have by telling our stories more effectively. You can read about the assessment results and instructional models they were referring to by visiting the Assessment in Action site.
  • Alan Carbery, founding advisory board member for the Threshold Achievement Test for Information Literacy (TATIL) and incoming chair of the Value of Academic Libraries committee for ACRL, co-presented with Lynn Connaway from OCLC. They announced the results of a study to identify an updated research agenda for librarians interested in demonstrating library value. Connaway and her research assistants analyzed nearly two hundred research articles from the past five years about effects on students’ success and the role of libraries. Her key takeaway was that future research in our field should make more use of mixed methods as a way of deepening our understanding and triangulating our results to strengthen their reliability and add to their validity. The report is available on the project site.

A common theme of these presentations is that our profession has embraced assessment to such an extent that we no longer need to convince our colleagues of its value. Now we are more interested in how to hold one another accountable for making meaning from all of the data we gather about our impact on students and on the rest of the institution. Lisa Hinchliffe brought this into sharp relief during her keynote remarks. She was critical of assessment for its own sake and struck a nerve with attendees when she coined the term “decision-based evidence-making,” drawing our attention to the weaknesses of assessments done merely to reinforce a decision that is already a foregone conclusion. She brought to mind the many ongoing assessment projects at our libraries where data are collected regularly but never analyzed or used to guide our actions because we lack the time, skills, or will to close the loop. That hit close to home for April, who is finally meeting next week with her colleagues to complete the analysis of two homegrown quizzes she used last spring to assess students’ learning in one-shot research sessions.

The challenge of creating assessments that result in actionable knowledge was on our minds, then, when we met with librarians over lunch on the second day of the conference. We shared stories of librarians using the Carrick Enterprises tests (the ones now available as well as the ones in field testing) to achieve their goals. April wrote about some of these last month on the blog. We emphasized the ways that our results reports facilitate decision making through options for comparing results within and between institutions and for disaggregating results to learn more about how specific interventions by librarians are affecting students’ information literacy.

Additionally, we told attendees about the data establishing the reliability of SAILS and the steps we have taken to ensure that the test is accessible to students who use assistive technologies because of perceptual impairments or other difficulties. We also explained the contrasts between TATIL and SAILS, including the situational dispositions that we assess through the TATIL modules, and we invited attendees to participate in the current phases of field testing for all four modules.

A question from Lisa Hinchliffe invited us to explain how and why we are creating a test related to the Framework when the Framework writers themselves have opined that the Framework cannot be standardized. We explained our position: while our work is inspired by the Framework, we are not governed by it or its creators. The many librarians, faculty, administrators, and other educators who have participated in the development of TATIL demonstrate that the project brings value to libraries. We shared with attendees that we have, in fact, undertaken the same process of writing outcomes and performance indicators that the Framework Introduction suggests librarians will need to complete in order to generate learning outcomes. Rather than doing this work separately at each of our institutions, the advisory board members worked together to identify the outcomes and performance indicators they shared in common and then selected the ones most relevant to our interpretation of the Framework. You can view the Carrick Information Literacy Outcomes by visiting the TATIL website.