
Download sample student report

This semester Carolyn Radcliff and I had the opportunity to discuss the test and the students’ results reports with our own classes and with students in our colleagues’ classes. You can see an example of students’ personalized results reports by clicking the thumbnail to the right. These reports are currently available for the field-testing versions of modules 1 and 2 and will be available for the field-testing versions of modules 3 and 4 in 2017.

Students’ Responses to Their Personalized Results

Our conversations with students gave us a new perspective on the test.   As with any test results, some students were disappointed by their results and others disagreed with the evaluation of their performance, but overall students found value in the reports.  Here are some samples of reflective responses from students:

  • I felt most engaged when the results said that I ‘have the habit of challenging (my) own assumptions.’ That’s something I definitely do and I was surprised that the test was able to detect that.
  • I was most surprised that the report said that I defer to particular kinds of authority a bit more than others; I will be sure to keep the recommendations in mind.
  • It was surprising that I wasn’t as proficient as I thought but I felt most engaged by the results when I learned that most college students are also at my level.
  • It was surprising that the results reminded me to seek out additional perspectives and not only ones that support my claim or topic.
  • The chart of my score was interesting.
  • I felt most engaged at the beginning [of the results report] when they analyzed my results directly by using [the pronoun] ‘you.’
  • The test was beneficial by making me think about the use of different sources.
  • Nothing was surprising, but I did agree with the recommendations to strengthen my writing/reading abilities, which I found very helpful.

Students appreciate receiving their results immediately. In one class we promised students their results, but an error on my part during test set-up delayed the reports; students expressed disappointment and were relieved to learn that they would still get their personalized reports later. Nevertheless, we know that not every testing situation is intended to give students direct feedback, so the student reports are an optional feature that you can turn on or off each time you set up the test.

...continue reading "December Update: How Students Experience the Test"

April Cunningham and Carolyn Radcliff at Library Assessment Conference 2016

We were honored to sponsor the 2016 Library Assessment Conference (LAC), October 31-November 2. As sponsors we gave a lunch-time talk about the test and we also attended the conference. Although Carolyn has been to this conference several times, most often presenting about the Standardized Assessment of Information Literacy Skills (SAILS), this was April’s first time attending LAC. The conference is a wonderful opportunity to gather with librarians from around the country and, increasingly, from around the world to learn about assessment methods and results that we can apply in our own settings. It was also a rich environment for engaging in conversations about the value of assessment data and what makes assessments meaningful.

Here are a few of the findings that stuck with us:

  • Representatives from ACRL’s Assessment in Action program shared the results of their interviews with leaders from throughout higher education including the Lumina Foundation, Achieving the Dream, and the Association of American Colleges and Universities. They learned from those conversations that as a profession, academic librarians already have strong data about how we affect students’ learning and which models have the most impact. The higher education leaders advised ACRL to encourage deans, directors, and front line librarians to make better use of the data we already have by telling our stories more effectively. You can read about the assessment results and instructional models they were referring to by visiting the Assessment in Action site.
  • Alan Carbery, founding advisory board member for the Threshold Achievement Test for Information Literacy (TATIL) and incoming chair of ACRL’s Value of Academic Libraries committee, co-presented with Lynn Connaway from OCLC. They announced the results of a study to identify an updated research agenda for librarians interested in demonstrating library value. Connaway and her research assistants analyzed nearly two hundred research articles from the past five years on libraries’ role in student success. Her key takeaway was that future research in our field should make more use of mixed methods as a way of deepening our understanding and triangulating our results to strengthen their reliability and add to their validity. The report is available on the project site.

...continue reading "November Update: Library Assessment Conference Debrief"

We’ve finished usability testing of the items for Module 4: The Value of Information with a diverse group of undergraduates at a variety of institutions. Soon we’ll have a version of the module ready for field testing. At that point, all four of the modules will be available for you to try out with your students.

We’re also preparing for our lunch-time presentation at the ARL Library Assessment Conference on Tuesday, November 1, so I’ve been thinking a lot about how TATIL can support many different kinds of assessment needs. Because of accreditation, we all need assessments that can compare students at different institutions, compare students over time, and compare students’ performance to selected standards or locally defined outcomes. We also know that for assessment results to improve teaching and learning, they need to be specific, immediate, and actionable. It can be hard to find assessments that serve all of these purposes, so we’ve paid close attention to making TATIL versatile, just like SAILS.

...continue reading "October Update: TATIL’s Versatility"

Thanks to the help of librarians from throughout southern California, we took a big step forward with test modules 1 and 2 this summer. Because TATIL is a criterion-referenced test (rather than a norm-referenced test like SAILS), we rely on the expertise of librarians and other educators to set performance standards so that we can report more than a raw score when students take the test. By setting standards, we can make and test claims about what students’ scores indicate about their exposure to and mastery of information literacy. This standard setting process is iterative and will continue throughout the life of the test. By completing the first step in that ongoing effort, we now have two module results reports that provide constructive feedback to students and educators.

Standard setting plays an important role in enhancing the quality of the test. For more detailed information about a standard setting method like the one we used, I recommend these slides from the Oregon Department of Education. The essence of this approach is that we used students’ responses from the first round of field testing to calculate the difficulty of each test item. Then we printed the test items in order of how difficult they were for students. Expert panelists went through these ordered item sets, using their knowledge of student learning to identify points in the continuum of items where the knowledge or ability required to answer correctly seemed to cross a threshold. These thresholds mark the boundaries between beginning, intermediate, and expert performance. We then used the difficulty levels of the items at the thresholds to calculate the cut scores.
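To make the arithmetic concrete, here is a minimal sketch of that bookmark-style calculation. This is not our scoring code: the function names, the response data, and the bookmark positions are all hypothetical, and it uses simple proportion-correct values for item difficulty where operational standard setting would rely on an IRT model and expert judgment.

```python
# Hedged sketch of bookmark-style standard setting.
# All names and data here are hypothetical illustrations.

def item_difficulties(responses):
    """Proportion of students answering each item correctly.

    `responses` is a list of per-student lists of 0/1 item scores.
    Higher values mean easier items.
    """
    n_students = len(responses)
    n_items = len(responses[0])
    return [
        sum(student[i] for student in responses) / n_students
        for i in range(n_items)
    ]

def ordered_booklet(difficulties):
    """Item indices from easiest to hardest, the order panelists review."""
    return sorted(range(len(difficulties)), key=lambda i: -difficulties[i])

def cut_scores(difficulties, bookmarks):
    """Turn panelists' bookmark positions into cut scores.

    Each bookmark is the position in the ordered booklet where the
    required knowledge or ability seemed to cross a threshold. The
    cut score here is the expected number-correct score for a student
    at that threshold: the sum of the probabilities of answering each
    of the easier items correctly. (Operational methods instead map
    the bookmarked item's IRT difficulty onto an ability scale.)
    """
    booklet = ordered_booklet(difficulties)
    return [
        round(sum(difficulties[i] for i in booklet[:position]), 1)
        for position in bookmarks
    ]

# Hypothetical data: six students, eight items, two thresholds
# (beginning/intermediate and intermediate/expert).
responses = [
    [1, 1, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 0, 1, 0, 1],
]
print(cut_scores(item_difficulties(responses), bookmarks=[3, 6]))
```

Running the example prints one cut score per bookmark, expressed as an expected number-correct score for a student performing exactly at that threshold.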

...continue reading "September Update: Our Standard Setting Process"

I was fortunate to attend ALA in Orlando. When I’m at ALA, I always make sure to attend the ACRL Instruction Section panel. This year I was especially interested because the panel took on Authority Is Constructed and Contextual, a very rich concept in the Framework that we’ve discussed many times as we’ve worked on the first module of the test: Evaluating Process and Authority.

The panelists described how they have engaged with the concept of authority in their own teaching and how the Framework has inspired them to think about it in new ways. Though the panel itself raised many interesting questions, a comment from the audience particularly piqued my interest. Jessica Critten, from the University of West Georgia, highlighted the gap in librarians’ discourse about what constitutes evidence and how students are taught to understand what they’re doing with the information sources we ask them to evaluate. She clearly identified the implication of the Authority Is Constructed and Contextual frame: we evaluate authority for a purpose, and librarians need to engage in more meaningful discussion about those purposes if we are going to do more than leave students with the sense that everything is relative. Jessica has been thinking about these issues for a while. She co-authored a chapter called “Logical Fallacies and Sleight of Mind: Rhetorical Analysis as a Tool for Teaching Critical Thinking” in Not Just Where to Click: Teaching Students How to Think about Information.

Jessica’s remarks showed me a connection that we need to continue to strengthen between our work in libraries and our colleagues’ work in composition studies and rhetoric.  Especially at a time of increasing polarization in public discourse, the meaning of concepts like authority, facts, and evidence cannot be taken for granted as neutral constructions that we all define the same way.  When I got back from Orlando, I sat down with our Rhetoric and Composition consultant, Richard Hannon, to ask him to elaborate on the connection between the Framework and how he gets students to think critically about facts, evidence, and information sources.