At the Library Assessment Conference in August, Rick and I met Kathy Clarke, librarian and associate professor at James Madison University Libraries. Kathy oversees JMU's information literacy assessment and described the program in a lightning talk titled "The Role of a Required Information Literacy Competency Exam in the First College Year: What Test Data Can, Cannot, and Might Reveal." We asked Kathy for her thoughts about standardized assessments. Here's what she shared with us:
What’s the future of an information literacy test?
Kathy Clarke
James Madison University has been a pioneer of multiple-choice information literacy testing of student skills. Many of you are probably pretty tired of hearing that, but it is hard to know how to begin pieces like this without that statement/disclaimer. It’s a professional blessing/curse.
The JMU General Education program adopted the original ACRL Information Literacy Competency Standards for Higher Education as learning outcomes shortly after their adoption in 2001. As such, all JMU students have had to meet an information literacy competency requirement since that time.
Whenever we talk about a competency requirement, we mean that all students will achieve a certain standard and that we will be able to demonstrate they have met it. At JMU this is accomplished via a required test, Madison Research Essentials, which all of our incoming students (n=4500) must complete before the end of their first year.
For a group that large, under a strict reporting mandate, we realistically have one option: a multi-form, fixed-answer multiple-choice test with two set scores (proficient and advanced). Our first-year class is too large for an information literacy course (too many students and too few librarian-faculty), and our GenEd program is distributed such that departments offer specific courses, none of which could meet or absorb the IL Standards/learning outcomes.
At the August Library Assessment Conference (and in the library literature recently) there was and is much talk of rubrics, but scant attention to tests. One might go so far as to say that tests are starting to seem passé in favor of rubrics. It might surprise many to learn that rubrics play an important role in information literacy assessment even at JMU.
But not to the tune of an n=4500 reported out every single year.
As I have become more familiar with assessment issues and concerns, my assessment colleagues have taught me that you assess what you need to know about what students are able to do, but that you do it strategically to find out (each question is illustrated in the sketch after this list):
- Is there a difference?
  - Students who complete the XYZ program perform better on the ABC instrument than students who did not complete the program.
- Is there a relationship?
  - Students who earn an A in the Basic Communication course get better grades on presentations in downstream courses.
- Is there change or growth over time?
  - Student writing improves from first-year assessment day to sophomore assessment day.
- Do you need all students to meet a set standard or competency?
  - All first-year students will pass the University's research competency exam by a given day.
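To make the first three questions concrete, here is a minimal sketch in Python of how each might be checked with simple statistics. The scores, group labels, and courses are invented for illustration; they are not JMU data, instruments, or methods (the competency question is sketched after the next paragraph).

```python
# Hypothetical sketch: all data and group labels below are invented for
# illustration; they are not JMU results or instruments.
from statistics import mean
from scipy import stats  # assumes SciPy is available

# 1. Is there a difference? Compare scores of students who completed a
#    program with scores of students who did not (two-sample t-test).
completed = [78, 85, 90, 72, 88, 81]
did_not = [70, 75, 79, 68, 74, 77]
t, p = stats.ttest_ind(completed, did_not)
print(f"difference: {mean(completed):.1f} vs {mean(did_not):.1f}, p = {p:.3f}")

# 2. Is there a relationship? Correlate grades in a basic course with
#    presentation scores in downstream courses.
course_grades = [2.7, 3.0, 3.3, 3.7, 3.7, 4.0]
presentation_scores = [75, 80, 84, 88, 90, 93]
r, p = stats.pearsonr(course_grades, presentation_scores)
print(f"relationship: r = {r:.2f}, p = {p:.3f}")

# 3. Is there change or growth over time? Pair each student's first-year
#    score with that student's sophomore score (paired t-test).
first_year = [70, 74, 68, 80, 77]
sophomore = [76, 79, 71, 85, 83]
t, p = stats.ttest_rel(first_year, sophomore)
print(f"growth: mean change = {mean(sophomore) - mean(first_year):.1f}, p = {p:.3f}")
```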
When we test our first-year students, we are doing so to set a competency, and in all honesty, it works well. They all have to pass by a certain date, and we report out the results (for example, 97.9% of first-year JMU students demonstrate information literacy competence) by the end of their first year. The other three questions (difference, relationship, and change) could certainly be measured with a rubric, but also with a well-configured pre/post test design. So it depends on what you want to know, but it also depends on how many students you have to assess, how often, why, and for whom.
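As a companion to the sketch above, here is a minimal, hypothetical illustration of the competency question: classifying raw scores against two set cut scores and reporting the share of a cohort that demonstrates competence. The thresholds and scores are assumptions made up for this example; they do not reflect MREST's actual forms or cut scores.

```python
# Hypothetical sketch of cut-score reporting for a fixed-answer,
# multiple-choice competency test; thresholds and scores are invented.
PROFICIENT_CUT = 53  # assumed raw-score threshold for "proficient"
ADVANCED_CUT = 60    # assumed raw-score threshold for "advanced"

def classify(raw_score: int) -> str:
    """Map a raw test score to a reporting category."""
    if raw_score >= ADVANCED_CUT:
        return "advanced"
    if raw_score >= PROFICIENT_CUT:
        return "proficient"
    return "not yet competent"

cohort = [58, 62, 49, 55, 61, 54, 64, 51]  # hypothetical first-year scores
competent = sum(classify(s) != "not yet competent" for s in cohort)
print(f"{competent / len(cohort):.1%} of students demonstrate competence")
```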
Assessing information literacy, whether against the old Standards or with new metrics built for the new Framework, is fairly new to most librarians and downright foreign to many. Fixed-answer multiple-choice instruments, like SAILS, our locally grown Madison Research Essentials Skills Test (MREST), or Madison Assessment's Information Literacy Test (ILT), do one kind of assessment. But they do that kind of assessment efficiently and quickly, and for a large population they may be a good option, or even the only doable one.
____________________________________________________________
You can read Kathy's Library Assessment Conference presentation here:
http://libraryassessment.org/bm~doc/8clarkelightningtalk.pdf