By Nicole Eva, Rumi Graham, and Sandra Cowan
University of Lethbridge, Lethbridge, Alberta, Canada
Have you ever wondered what the benefits and drawbacks of using a SAILS Cohort Test vs a Build-Your-Own Test (BYOT) might be? Wonder no more – we’re here to share our experiences in using both kinds of tests to help you make an informed decision that fits your own institution’s needs and purposes.



When we first looked into using a SAILS information literacy (IL) test in 2015, the only one available to us as a Canadian institution was the International Cohort Test. We hoped to gather reliable, objective data on the IL levels of first-year undergraduates before and after librarian-created instruction. Our key questions were: What level of IL do incoming first-year students possess? And is there a significant improvement in students’ IL abilities after receiving IL instruction? We aimed to explore potential answers by looking for correlations between students’ IL attainment levels and their year of study, as well as the amount or format of IL instruction they received.

The results of our 2015 study were less than conclusive, as we discovered that the Cohort Test was likely not the best fit for a pre-/post-test study design. The results did indicate whether our students as a whole improved their IL abilities over the semester. As the only international participant in our particular cohort, however, we became part of our own benchmark, comparing ourselves to ourselves (pre- vs post-test). We also found the cohort results were somewhat skewed by larger institutions (for example, one institution accounted for 43% of the test takers). Because we could not track individual results from pre- to post-test, we couldn’t tell whether any individual student improved their score over the semester. Finally, we could not choose which questions students received, a limitation since not all of the concepts had been taught to all of our students who wrote the test.
No sooner had we finished analyzing the results from our 2015 attempt than we saw that the BYOT option had become available to us as an international institution. This piqued our curiosity, as we wondered how the BYOT might alter our testing results. For those not familiar with the BYOT, it is a customizable version of the SAILS “individual scores test”: scores are tracked for each test taker, and you can hand-pick the test questions up to a maximum of 50 (with no minimum). We liked the flexibility of this option because we could ensure that only concepts actually taught were included in the tests we used, and we could make the tests shorter (we had found the original Cohort Test on the long side).
We therefore used essentially the same protocols to run our study again in 2016, this time using the BYOT, and we were pleased with this approach. Overall, student scores improved from pre- to post-test. In some classes the improvement was more apparent than in others; interestingly, it did not hold for students in third year and above. This may suggest some pragmatism on the part of more experienced students, who perhaps just ‘took the test,’ realizing that effort was not required to earn the bonus marks offered to incentivize participation in the study. The largest class showed the most improvement and also had the most statistically significant results.
Things continue to change, of course, as today the standardized IL testing options are even greater than before. For example, some of us were recently involved in field-testing the Threshold Achievement Test for Information Literacy (TATIL) and have also used a production version of TATIL in a pre-/post-test study. So perhaps you’ll see another analysis of the differences and benefits of that approach in a future post.
In summary, here are our thoughts on the BYOT vs the Cohort tests:
BYOT advantages
- You determine which questions are included and overall test length
- Permits focusing solely on your own students’ test results
- Permits tracking individual students’ scores over time
- Affords wide range of statistical analyses
Cohort Test advantages
- Easier to prepare for (no need to select questions)
- Useful for institutions committed to large-scale, longitudinal testing
- No data analysis! (just interpretation)
- Gives overall results plus breakouts by skill sets
We hope this brief analysis of the two test types helps you choose the one that’s right for your own institutional context. For more complete discussions of the two tests and our study protocols, please see the following:
Cowan, S., Graham, R. & Eva, N. (2016). How information literate are they? A SAILS study of (mostly) first-year students at the U of L. Light on Teaching, 2016-17, 17-20. Retrieved from https://bit.ly/2ExEnCZ
Graham, R. Y., Eva, N., & Cowan, S. (2018). SAILS, take 2: An exploration of the “Build Your Own Test” standardized IL testing option for Canadian institutions. Communications in Information Literacy, 12(1), 19-35. Retrieved from https://bit.ly/2GymkPN