
This spring I had the opportunity to attend the California Academic and Research Libraries (CARL) Conference in April, the LOEX Conference in May, and the California Conference on Information Literacy (CCLI) in June.  These were excellent learning opportunities, as always, and I’m happy to share a few highlights from each.

CARL: It was wonderful to see the excellent work being done by librarians throughout California.  Elizabeth Horan and Brian Green, two of my colleagues in the community college system, reflected on the results of their recent survey of students about their study habits and preferences.  One takeaway was the high number of students at both of their colleges who reported studying in their cars.  That finding highlights the importance of mobile interfaces for library websites and article databases, since many of these students also reported accessing information on their phones or tablets.  It also suggests the importance of creating spaces for individual study, not just group study, when libraries are redesigned, and shows the value of permitting food in study spaces, when possible, so that the library is as comfortable for studying as a car.  I was also inspired by Del Williams’ presentation about hosting hip hop and spoken word performances by a student art collective in the Cal State University, Northridge library over the past year.

This semester I provided two workshops for the part-time librarians I work with, who do most of the teaching in our one-shot library/research instruction program.  Although I see them every day, we rarely carve out time to meet as a group, and getting together often depends on some librarians coming in on their time off.  But we get so much out of sharing our experiences with each other that we’re all willing to give a little extra to make it work.  At these meetings I had a chance to facilitate a discussion about the ACRL Framework for Information Literacy, which might seem a little late since it was first adopted nearly three years ago, but the timing was right for us because we recently got support from our college administrators to purchase the Credo InfoLit Modules, and they are helping us to think about the scope of our instruction in new ways.

In particular, we’ve been thinking about how to reach students beyond our one-shots.  The information literacy lessons from Credo are one way to reach students before or after we see them in the library.  With a little coordination between the librarian and the professor who’s requesting instruction, students can be introduced to concepts like the value of information or the role of iteration in planning a search strategy before coming to the library.  Or they can get step-by-step, self-paced practice with MLA citations to follow up on our in-class discussions about how they should expect to use various types of sources in their analysis or argument.


Last week I was fortunate to attend and present at LOEX 2017 in Lexington, KY.  I’m excited to have joined the LOEX Board of Trustees this year, and it was great to see familiar faces and meet new, energized librarians, too.

I presented a one-hour workshop in which I walked participants through a comparison of two common types of results reports from large-scale assessments.  We looked at an example of a rubric-based assessment report and a report from the Evaluating Process and Authority module of the Threshold Achievement Test for Information Literacy (TATIL).  We compared them on the criteria of timeliness, specificity, and actionability, and found that rubric results reports from large-scale assessments often lack the specificity needed to turn results into plans for instructional improvement.  The TATIL results report, on the other hand, offered many ways to identify areas for improvement and to inform conversations about next steps.

Several librarians from institutions that are committed to using rubrics for large-scale assessment said at the end of the session that the decision between rubrics and tests now seemed more complicated than it had before.  Another librarian commented that rubrics seem like a good fit for assessing outcomes in a course but are perhaps less useful for assessing outcomes across a program or a whole institution.  It was a rich conversation that also highlighted some confusing elements in the TATIL results report, which we look forward to addressing in the next revision.

Overall, I came away from LOEX feeling excited about the future of instruction in the IL Framework era.  While the Framework remains an enigma for some of us, presenters at LOEX this year found many ways to make practical, useful connections between their work and the six frames.

Dominique Turnbow is the Instructional Design Coordinator at the University of California, San Diego Library, and she’s been a TATIL Board member since the beginning of the project in 2014. Dominique has been instrumental in drafting and revising outcomes and performance indicators as well as writing test items. Recently Dominique and her colleague at the University of Oregon, Annie Zeidman-Karpinski, published an article titled “Don’t Use a Hammer When You Need a Screwdriver: How to Use the Right Tools to Create Assessment that Matters” in Communications in Information Literacy. The article introduces Kirkpatrick’s Model of the four levels of assessment, a foundational model in the field of instructional design that has not yet been widely used by librarians.

The article opens with advice about writing learning outcomes using the ABCD Model, which gave us a useful structure when we collaborated with Dominique to develop the performance indicators for the TATIL modules. It is a set of elements to consider when writing outcomes and indicators; the acronym stands for Audience (of learners), Behavior (expected after the intervention), Condition (under which the learners will demonstrate the behavior), and Degree (to which the learners will perform the behavior). This structure helped us to write clear and unambiguous indicators that we used to create effective test questions.
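
To see how the four elements fit together, here is a minimal sketch in Python; the outcome it builds is a hypothetical illustration, not one of the actual TATIL performance indicators.

```python
from dataclasses import dataclass

@dataclass
class ABCDOutcome:
    """A learning outcome broken into the four ABCD elements."""
    audience: str   # A: the learners the outcome is written for
    behavior: str   # B: the observable behavior expected after instruction
    condition: str  # C: the circumstances under which learners demonstrate it
    degree: str     # D: the standard the performance should meet

    def statement(self) -> str:
        """Assemble the four elements into a single outcome sentence."""
        return f"{self.condition}, {self.audience} will {self.behavior} {self.degree}."

# A hypothetical outcome, for illustration only:
outcome = ABCDOutcome(
    audience="first-year students",
    behavior="construct a database search strategy",
    condition="Given a research topic",
    degree="that retrieves at least three relevant sources",
)
print(outcome.statement())
# Given a research topic, first-year students will construct a database
# search strategy that retrieves at least three relevant sources.
```

Spelling out each element separately makes it harder to leave one out, which is part of what keeps the resulting indicator unambiguous.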

Kirkpatrick’s Model of the four levels of assessment is another useful tool for ensuring that we are operating with a shared understanding of the goals and purpose of our assessments. Dominique and Annie make a strong case for focusing classroom assessments of students’ learning during library instruction on the first two levels: Reaction and Learning. The question to ask at the first level is “How satisfied are learners with the lesson?” The question to ask at the second level is “What have learners learned?” Dominique and Annie offer examples of outcome statements and assessment instruments at both of these levels, making their article of great practical use to all librarians who teach.

They go on to explain that the third and fourth levels of assessment, according to Kirkpatrick’s Model, are Behavior and Results. The Behavior level asks what learners can apply in practice. The Results level poses the question “Are learners information literate as a result of their learning and behavior?” As Dominique and Annie point out in their article, this is what “most instructors want to know” because the evidence would support our argument that “an instruction program and our teaching efforts are producing a solid return on investment of time, energy, and resources” (2016, 155). Unfortunately, as Dominique and Annie go on to explain, this level of insight into students’ learning is not possible after only one or two instruction sessions.
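
As a quick reference, here is a small sketch in Python pairing each of the four levels with its guiding question as described above; the wording of the Behavior question is paraphrased from the post’s description of the article.

```python
# Kirkpatrick's four levels of assessment, each paired with the guiding
# question described above (the Behavior question is a paraphrase).
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "How satisfied are learners with the lesson?"),
    2: ("Learning", "What have learners learned?"),
    3: ("Behavior", "What can learners apply in practice?"),
    4: ("Results", "Are learners information literate as a result of "
                   "their learning and behavior?"),
}

for number, (name, question) in KIRKPATRICK_LEVELS.items():
    print(f"Level {number} ({name}): {question}")
```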

Determining whether students are information literate requires a comprehensive assessment that follows years of students’ experiences learning and applying information literacy skills and concepts. In addition to the projects at Carleton College and the University of Washington that Dominique and Annie highlight in their article, Dominique also sees information literacy tests like TATIL and SAILS as key tools for assessing the results of students’ exposure to information literacy throughout college. Having the right tools to achieve your assessment goals increases the power of your claims about the impact and value of your instruction, and at the same time it reduces your workload by ensuring that you’re focused on the right level of assessment.

If you’re attending ACRL, don’t miss Dominique’s contributed paper on the benefits of creating an instructional design team to meet the needs of a large academic library. She’s presenting with Amanda Roth at 4pm on Thursday, March 24.

We’re excited that this semester all four modules are available for field testing.  Modules 1 and 2 now offer students feedback when they finish the tests.  Modules 3 and 4, still in the first phase of field testing, do not yet provide immediate feedback to students.  But that doesn’t mean that students shouldn’t reflect on their experience taking the test.  When I have students take Module 3: Research & Scholarship and Module 4: The Value of Information, I create an online survey they can complete as soon as they’ve finished the last question.  Setting up the test through www.thresholdachievement.com makes that easy by providing an option for directing students to a URL at the end of the test.  You can view the brief survey that I give students.

When asking for students’ reflections on their experiences, whether for the TATIL modules or for any instructional interaction, I always rely on critical incident questionnaires as my starting point.  Stephen Brookfield, a transformative educator who is an expert in adult learning, has been promoting critical incident questionnaires since the 1990s.  Building upon Dr. Brookfield’s work, faculty have used the instrument to survey students about their experiences in face-to-face classes as well as online.  Read more about his work and the work of his colleagues here: http://www.stephenbrookfield.com/ciq/

If you would prefer to collect information about students’ perceptions of the test content rather than, or in addition to, their experience taking the test, consider survey questions like:

  • Where did you learn the skills and knowledge that you used on this test?
  • What do you think you should practice doing in order to improve your performance on this test in the future?
  • What were you asked about on this test that surprised you?

By surveying students at the end of the test, you lay the groundwork for class discussions about the challenges the test presented, areas of consensus among your students, and misconceptions that you may want to address.  The test gives students a chance to focus on their information literacy knowledge and beliefs, something they do not always have the time or structure to do.  Writing briefly about their experience taking the test while it is still fresh in their minds will help students identify the insights they have gained about their information literacy through the process of engaging with the test.