
DIY or Purchase? Comparing InfoLit Tests

At the Library Assessment Conference in Houston, Texas, earlier this month, Kathy Clarke and I had an opportunity to talk with attendees about the use of information literacy tests. We focused on comparing locally created tests and commercially developed tests. Here’s a recap of our 13-minute presentation.

Information literacy tests are one viable option for measuring information literacy. Testing offers specific strengths, including familiarity to students, ease of administration, and efficiency for large-scale assessment. Tests can simplify comparing groups or conducting longitudinal studies, and they can suggest improvements to instruction programs. Tests can also yield quantitative data that can be interpreted for individual students and for groups.

Testing has its challenges, too. Information literacy tests are often low-stakes because a student’s final score on the IL test is typically not included in their course grades or used for other purposes that matter to the student. It is therefore challenging to motivate students to try their best on the test, meaning that results may not reflect what a student actually knows. Testing also assumes that what a student knows is reflected in their actions. It may feel inauthentic, too, in that we are asking students to answer questions or address scenarios that we have created rather than examining products that students create themselves. Further, tests have associated costs, usually some combination of time, expertise, and money.

Locally developed: James Madison University’s Information Literacy Tests

At James Madison University, six information literacy outcomes are part of the General Education Curriculum, and students go through a tutorial-test model during their first year. The test, the Madison Research Essential Skills Test (MREST), requires the involvement of several people and offices across campus. The test was created and is updated by Kathy Clarke, a JMU librarian. As she writes new test questions, they are evaluated to make sure they meet acceptable standards for item performance. Another person created and maintains the testing delivery system, and Kathy works closely with JMU’s Center for Assessment & Research Studies to analyze results.

Approximately 6,000 first-year and transfer students complete the test annually. Students must demonstrate competency on the test before they can progress to their second year at JMU. They may take the test multiple times to achieve this, and practically everyone does. Data from the test are analyzed for trends in student achievement and areas for improvement. Kathy uses MREST scores and subscales to determine which objectives students are struggling with and to make tutorial changes accordingly.

A shorter version of the MREST, InfoCore, is given to incoming first-year students as part of their orientation prior to classes and again after they complete three semesters. This pre-test / post-test model shows the gains JMU students are making in this learning domain over the course of three semesters. Their scores consistently go up across all six objectives.

A team spends about a month analyzing data from the MREST each summer. They have worked hard to make their reports meaningful and easy to digest. That work has paid off; the reports are being used as a model for assessment reports from other departments on campus.

One recent development is that the State Council of Higher Education for Virginia has issued a reporting mandate for public institutions that requires them to report on four standard learning domains plus optional domains. Schools are being encouraged to make their assessment results easy to understand and easy to find in order to increase transparency for multiple constituencies, including students and parents. JMU has chosen to include information literacy as one of its optional learning domains, and it is well prepared with its exemplary information literacy testing and reporting.

Commercially developed: Threshold Achievement Test for Information Literacy from Carrick Enterprises

The Threshold Achievement Test for Information Literacy, also known as TATIL, was inspired by the ACRL Framework for Information Literacy for Higher Education. Like the JMU test development process, the creation of TATIL has been a group effort. We began with a project lead, April Cunningham, and an advisory board of librarians and other educators. The group worked to create learning outcomes and performance indicators (with the final version available in the ACRL Framework sandbox), draft and review test questions, and carry out cognitive interviews with students. Once the test questions were viable, we conducted large-scale field testing with the valuable participation and support of librarians and other educators at institutions across the country. Throughout this iterative process, test questions were revised and enhanced. I wrote about the question-development process in more detail in an earlier blog post.

TATIL has four modules that collectively address the full scope of the information literacy construct. Each module is a standalone test that addresses relevant knowledge outcomes, performance indicators, and information literacy dispositions. Extensive reports for each module provide a full analysis of results, including data files plus:

  • Personalized reports for each student
  • Institutional results for knowledge and disposition dimensions
  • Performance indicator rankings of students' strengths and weaknesses
  • Performance level indicators ranging from “conditionally ready” to “college ready” to “research ready”
  • Breakouts for subgroups such as first-year students or transfer students
  • Cross-institutional comparisons with peer institutions and other groupings

Comparing Approaches

Locally developed tests have the distinct advantage of aligning with local outcomes and objectives. They require little or no out-of-pocket expenditure but do typically call for local expertise and time for development and updating. When done successfully, local tests can serve as a model for other campus units, highlight library contributions to information literacy and campus assessment priorities, and offer justification for resources allocated to information literacy efforts.

Commercially developed tests typically reflect professional standards and practices, which may align with local outcomes and objectives. They require little local testing expertise and limited time to implement, but they do charge fees. These tests may be able to provide context and comparison through benchmarking with other institutions, and they often have a community of users.

Deciding whether to use an information literacy test, and choosing among the options, is best done with one’s assessment needs and resources in mind.

  • What are your specific assessment goals? What questions are you trying to answer?
  • What information do you need in order to answer those questions?
  • What will you do with the results?
  • How will this assessment project benefit a class, a program, or the institution?
  • Who are your potential campus partners?
  • What resources are available? Consider local expertise in test development and evaluation, staff time, and funding.
  • Given answers to these questions, which assessment tools come the closest to meeting your needs?
  • How can you account for any gaps between what the tools provide and what you need?