Today we talk with Joseph Aubele, Librarian at California State University Long Beach in California. Joseph joined the TATIL Advisory Board in 2015 and has been instrumental in making the new test come to life. Learn how his approach to teaching has evolved from feeling like an imposter to handing over control to students. Read his perspective on using assessment results, the library patron as customer, and more!

Q: Please tell us about your job. What do you do? What do you like about your job?

Joseph: At the most basic level I am a reference and instructional librarian -- and almost anyone reading this will have some idea of what that entails. Beyond the obvious, as a tenure track librarian, I engage in research/writing. I also have an administrative assignment as Internship Coordinator for our library which has me meeting with graduate students who are interested in participating in our semester-long experience and then mentoring them once they’re here (and beyond!).

I spent many years in the private sector before coming to librarianship, working too many hours, doing work that -- while rewarding in its way -- lacked the intellectual stimulation that is so much a part of what I do now. So, while I hate it when others say this, I have to say that there is not any single part of my work that is absolutely my favorite. Instead, I enjoy each aspect -- assisting students and faculty, teaching, research, and contributing to the preparation of those who are joining our profession -- and the satisfaction I feel is actually greater than the sum of the parts.

Q: Why did you join the Advisory Board for the Threshold Achievement Test for Information Literacy (TATIL)?

Joseph: A great deal of library assessment measures everything BUT information literacy, and that is understandable -- measuring a student’s ability to recognize when information is needed, or the ability to evaluate information, especially in the context of a one-shot session, is daunting. The work that TATIL is doing enables educators of all stripes to assess where students are when they arrive on campus and how far they progress during their time in college. Colleges and universities talk a lot about helping students become critical thinkers, but the only regular assessments are the grades they earn in their classes. The assessments TATIL has developed focus on something much more fundamental to the individual, and being able to make a very small contribution to that effort is as exciting as it is rewarding.

Q: Please tell us about a project you are currently working on.

...continue reading "Meet the TATIL Advisory Board: Joseph Aubele"

Lyda Fontes McCartin
Lyda Fontes McCartin, Professor, Head of Information Literacy & Undergraduate Support, University of Northern Colorado, Greeley, Colorado, USA

In 2014, my library Curriculum Committee started work on developing new student learning outcomes for our 100-level LIB courses. We teach five distinct credit courses; four are 100-level courses and one is a 200-level course. The learning outcomes had not been revisited in years and we had added new courses since that time. With the debut of the Framework, we took the opportunity to update our learning outcomes. It was at this time we began considering all of our 100-level courses as one “program.” An overview of the process we used to create the outcomes is provided in a C&RL News article titled “Be critical, but be flexible: Using the Framework to facilitate student learning outcome development.” The 100-level student learning outcomes are:

  1. Students will be able to develop a research process
  2. Students will be able to implement effective search strategies
  3. Students will be able to evaluate information
  4. Students will be able to develop an argument supported by evidence

Since 2015, I’ve been guiding the library Curriculum Committee through the creation of signature assignments to assess our credit courses so that we can look at student learning across 100-level sections. A signature assignment is a course-embedded assignment, activity, project, or exam that is collaboratively created by faculty to collect evidence for a specific learning outcome. Most of the time you hear about signature assignments in relation to program level assessment, but they can also be used to assess at the course level and are especially useful if you want to assess a course that has many sections taught by multiple instructors (hint – this model can be used for one-shot instruction as well).

I like signature assignments because ...continue reading "Assessing Credit Courses with Signature Assignments"

This semester I provided two workshops for the part-time librarians I work with who do most of the teaching in our one-shot library/research instruction program.  Although I see them every day, it’s rare that we carve out time to meet as a group and getting together even depends on some librarians coming in on their time off.  But we get so much out of sharing our experiences with each other that we’re all willing to give a little extra to make it work.  At these meetings I had a chance to facilitate discussion about the Framework, which might seem a little late since it was first adopted nearly three years ago, but it was good timing for us because we recently got support from our college administrators to purchase the Credo InfoLit Modules and it’s helping us to think about the scope of our instruction in new ways.

In particular, we’ve been thinking about how to reach beyond our one-shots in new ways.  The information literacy lessons from Credo are one way to reach students before or after we see them in the library.  With a little coordination between the librarian and the professor who’s requesting instruction, students can be introduced to concepts like the value of information or the role of iteration in planning a search strategy before coming to the library.  Or they can get step-by-step, self-paced practice with MLA citations to follow up on our in-class discussions about how they should expect to use various types of sources in their analysis or argument.

...continue reading "Resources for One-Shots"

Sometime around 1996 I attended a conference on communication studies. I was working on a master’s degree in Comm Studies and this was my first conference in an area outside of librarianship. I was happy to discover a presentation on research related to libraries, specifically nonverbal behaviors of reference librarians. As the researcher described her findings and quoted from student statements about their interactions with librarians, I experienced a range of emotions. Interest and pride soon gave way to embarrassment and frustration. The way I remember it now, there were a host of examples of poor interactions. “The librarian looked at me like I was from Mars,” that sort of thing. Most memorable to me was one of the comments/questions from an audience member. “Librarians need to fix this. What are they going to do about it?,” as though this study had uncovered a heretofore invisible problem that we should urgently address. (Did I mention feeling defensive, too?) I didn’t dispute the findings. What I struggled with was the sense that the people in the room thought that we librarians didn’t already know about the importance of effective communication and that we weren’t working on it. Was there room for improvement? For sure! But it wasn’t news to us.

I thought about that presentation again recently after viewing a webinar by Lisa Hinchliffe about her research project, Predictable Misunderstandings in Information Literacy: Anticipating Student Misconceptions To Improve Instruction. Using data from a survey of librarians who provide information literacy instruction to first year students, Lisa and her team provisionally identified nine misconceptions that lead to errors in information literacy practice. For example, first year students “believe ...continue reading "We’re Working On It: Taking Pride in Continuous Instructional Improvement"

Dominique Turnbow is the Instructional Design Coordinator at the University of California, San Diego Library, and she’s been a TATIL Board member since the beginning of the project in 2014. Dominique has been instrumental in drafting and revising outcomes and performance indicators as well as writing test items. Recently Dominique and her colleague at the University of Oregon, Annie Zeidman-Karpinski, published an article titled “Don’t Use a Hammer When You Need a Screwdriver: How to Use the Right Tools to Create Assessment that Matters” in Communications in Information Literacy. The article introduces Kirkpatrick’s Model of the four levels of assessment, a foundational model in the field of instructional design that has not yet been widely used by librarians.

The article opens with advice about writing learning outcomes using the ABCD Model. Through our collaboration with Dominique, the ABCD Model provided us with a useful structure when we were developing the performance indicators for the TATIL modules. It is a set of elements to consider when writing outcomes and indicators and the acronym stands for Audience (of learners), Behavior (expected after the intervention), Condition (under which the learners will demonstrate the behavior), and Degree (to which the learners will perform the behavior). This structure helped us to write clear and unambiguous indicators that we used to create effective test questions.

Kirkpatrick’s Model of the four levels of assessment is another useful tool for ensuring that we are operating with a shared understanding of the goals and purpose of our assessments. Dominique and Annie make a strong case for focusing classroom assessments of students’ learning during library instruction on the first two levels: Reaction and Learning. The question to ask at the first level is “How satisfied are learners with the lesson?” The question to ask at the second level is “What have learners learned?” Dominique and Annie offer examples of outcomes statements and assessment instruments at both of these levels, making their article of great practical use to all librarians who teach.

They go on to explain that the third and fourth levels of assessment, according to Kirkpatrick’s Model, are Behavior and Results. Behavior includes what learners can apply in practice. The Results level poses the question “Are learners information literate as a result of their learning and behavior?” As Dominique and Annie point out in their article, this is what “most instructors want to know” because the evidence would support our argument that “an instruction program and our teaching efforts are producing a solid return on investment of time, energy, and resources” (2016, 155). Unfortunately, as Dominique and Annie go on to explain, this level of insight into students’ learning is not possible after one or two instruction sessions.  

Determining whether students are information literate requires a comprehensive assessment following years of students’ experiences learning and applying information literacy skills and concepts. In addition to the projects at Carleton College and the University of Washington that Dominique and Annie highlight in their article, Dominique also sees information literacy tests like TATIL and SAILS as key tools for assessing the results of students’ exposure to information literacy throughout college. Having the right tools to achieve your assessment goals increases the power of your claims about the impact and value of your instruction, at the same time that it reduces your workload by ensuring you’re focused on the right level of assessment.

If you’re attending ACRL, don’t miss Dominique’s contributed paper on the benefits of creating an instructional design team to meet the needs of a large academic library. She’s presenting with Amanda Roth at 4pm on Thursday, March 24.