Each module had its own challenge. Here are some examples. In Module 1, Evaluating
Process & Authority, the challenge was to create realistic scenarios that present an
audience need and ask a student to address that need. We needed to make complex
scenarios readable. For Module 2, Strategic Searching, we had to create environments
similar to what students would see in a database. Module 3, Research & Scholarship,
required us to find the core research practices that are common across different
disciplines, adding questions that were somewhat discipline-specific but at a level
that undergraduates could reasonably be expected to handle. For Module 4, The Value
of Information, the trick was to create questions that get at new ways students create
information.
Luckily we had very creative and technically skilled professionals addressing these
challenges. From custom programming to item development by teams to item review,
cognitive interviews, and data analysis, we are set up to identify items that work,
revise items that need it, and eliminate items that do not meet our standards for
inclusion.
SLILC: Are the questions always asked in the same order or is the order randomized?
RW: There are several different sequences of the questions for a module. When a
student begins the test, one of these sequences is randomly selected for their use. Each
module also includes Decision Task Item Sets, composed of three or four items, where
the questions build upon each other and are always asked in the same order.
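To make that mechanism concrete, here is a minimal sketch of the selection logic as described above. All names and data are hypothetical illustrations, not Carrick Enterprises' actual code: a module offers several pre-authored sequences, one is drawn at random when the student begins, and any Decision Task Item Set keeps its fixed internal order.

```python
import random

# Illustrative sketch only. A "sequence" is a list of units: either a single
# item id, or a tuple representing a Decision Task Item Set, whose internal
# order is always preserved regardless of which sequence is chosen.
MODULE_SEQUENCES = [
    ["q1", ("dts_a", "dts_b", "dts_c"), "q2", "q3"],
    ["q3", "q1", ("dts_a", "dts_b", "dts_c"), "q2"],
    ["q2", "q3", ("dts_a", "dts_b", "dts_c"), "q1"],
]

def select_sequence(sequences):
    """Randomly pick one pre-authored sequence for a student and flatten it
    into the ordered list of items they will see."""
    chosen = random.choice(sequences)
    items = []
    for unit in chosen:
        if isinstance(unit, tuple):  # Decision Task Item Set: keep fixed order
            items.extend(unit)
        else:
            items.append(unit)
    return items

print(select_sequence(MODULE_SEQUENCES))
```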
SLILC: What skill/ability are the questions that ask test takers to rate strategies on
the "usefulness" scale meant to identify?
AC: These are information literacy disposition items, which investigate metacognitive
awareness of strategies that align with information literacy dispositions. From the
beginning, we’ve known that we wanted to address dispositions, as well as
knowledge, in any new instrument we created. And in reading the literature, especially
in education, we found a way to do that with scenario-based problem solving items.
Through our rhetorical analysis of the Framework we identified four information
literacy dispositions: mindful self-reflection, productive persistence, responsibility to
community, and toleration for ambiguity. Each TATIL module evaluates one or more
dispositions with items that present a scenario describing an ill-defined information
literacy challenge related to the content of the module. Students evaluate the
usefulness of various strategies for addressing the challenge. Their choices allow us to
determine how strongly inclined a student is toward the relevant disposition. These
items do not have correct answers and they are scored and reported separately from
the knowledge items.
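As a purely illustrative sketch of how such ratings might be aggregated (the interview does not describe TATIL's actual scoring model, so the keying scheme and simple averaging below are assumptions), one could reverse-score ratings on strategies that run counter to the disposition and average the result:

```python
from statistics import mean

# Hypothetical example: students rate each strategy's usefulness on a 1-4
# scale; strategies keyed True align with the disposition, False ones do not.
ratings = {"strategy_a": 4, "strategy_b": 1, "strategy_c": 3}
aligned = {"strategy_a": True, "strategy_b": False, "strategy_c": True}

def disposition_score(ratings, aligned, max_rating=4):
    """Higher ratings on aligned strategies and lower ratings on misaligned
    ones indicate a stronger inclination toward the disposition."""
    adjusted = [
        r if aligned[s] else (max_rating + 1 - r)  # reverse-score misaligned
        for s, r in ratings.items()
    ]
    return mean(adjusted)

print(disposition_score(ratings, aligned))  # about 3.67 on the 1-4 scale
```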
You can read more about information literacy dispositions in these essays by me and
another Advisory Board member, Hal Hannon.
Getting at the Dispositions - http://blog.informationliteracyassessment.com/?p=739
Dispositions and Training Transfer - http://blog.informationliteracyassessment.com/?p=760
SLILC: There is a question in the Strategic Searching module about search tools,
where the learner matches the names of tools to a description of what they contain.
When one of our testers got to this question, there were several tools included that
they had never used before because their library doesn't subscribe to them. Can
questions like this one be customized to reflect the tools a library subscribes to
and/or teaches/promotes through LibGuides, etc.? It would seem that without the ability
to customize test questions like this one, the assessment results will not accurately
reflect students' learning at that particular library, which seems problematic. Can
you comment on this?
AC: It is important to us to ensure that the assessment is not specific to any one
library or set of libraries. With that in mind, we generally did not refer to existing
resources in the test questions. (One exception is Google, which, although not
universally used, is well-known as an Internet search engine.) We invented source
titles, authors, excerpts, quotations, abstracts, database names, and more. In the test
question mentioned, the tools do not exist, except for Google. That said, there is no
option for customizing the questions.
RW: This brings up a bigger point about the validity and reliability of the test
questions. A critical component of the test development process is to ensure that every
test question functions properly. This is why, even after all the expert review of draft
questions, we conduct field testing and bring in a psychometrician to perform
analyses of the questions based on responses from hundreds of students. We are
checking to see if the questions perform as expected. Are there any questions that are
exceptionally difficult, even for students who answered most questions correctly?
Does each item help to differentiate novices from more advanced test-takers? Items
with problems are either fixed through revision or eliminated from the test. All the
questions in the finished modules, Evaluating Process & Authority and Strategic
Searching, meet established standards for effectiveness.
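To give a flavor of the kind of checks RW describes, here is a rough sketch of two classical item statistics: difficulty (the proportion of students answering correctly) and a discrimination index (the point-biserial correlation between an item score and the rest-of-test total). The response data is made up, and the actual psychometric analyses behind TATIL are certainly more sophisticated than this.

```python
from statistics import mean, pstdev

# Made-up data: rows = students, columns = items (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]

def item_stats(responses, item):
    """Return (difficulty, discrimination) for one item."""
    scores = [row[item] for row in responses]
    rest = [sum(row) - row[item] for row in responses]  # rest-of-test totals
    difficulty = mean(scores)
    # Point-biserial correlation: Pearson r with a dichotomous item score.
    sx, sy = pstdev(scores), pstdev(rest)
    if sx == 0 or sy == 0:
        return difficulty, 0.0
    cov = mean(s * r for s, r in zip(scores, rest)) - mean(scores) * mean(rest)
    return difficulty, cov / (sx * sy)

for i in range(4):
    diff, disc = item_stats(responses, i)
    print(f"item {i}: difficulty={diff:.2f}, discrimination={disc:.2f}")
```

In such an analysis, an item that nearly everyone misses, or one whose discrimination index is near zero or negative, would be flagged for revision or removal, which is the process RW describes.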
SLILC: Would you like to share any other thoughts with our audience about your
future endeavors or the future of the TATIL?
CR: Now that we're nearing completion of the second half of the assessment, we've
been able to redirect our efforts toward listening more to our customers. We're
interviewing people about their use of TATIL modules, seeking ways to improve the
test management process, the testing experience, and the reports in order to meet
their needs. We love hearing about their information literacy programs and talking
with them about how assessments can be leveraged to improve information literacy
on their campuses.
This concludes our interview with Carrick Enterprises. If you have questions
or are interested in speaking with Carolyn Radcliff or other members of their team
about the Threshold Achievement Test, visit their website:
https://thresholdachievement.com/.