
READING LEXILE LEVELS VS. UNIT TEST SCORES

Reading Lexile Levels vs. Unit Test Scores


Todd Underwood
Coastal Carolina University


Table of Contents
I. Problem Investigated
   A. Purpose of the Study
   B. Justification of the Study
   C. Research Question and Hypothesis
   D. Definition of Terms
   E. Brief Overview of the Study
II. Background and Review of Related Literature
   A. Theory
   B. Studies Directly Related
   C. Studies Tangentially Related
III. Procedures
   A. Description of the Research Design
   B. Description of the Sample
   C. Description of Instruments
   D. Description of the Procedures
   E. Discussion of Internal Validity
   F. Discussion of External Validity
   G. Description and Justification of the Statistical Techniques
IV. Findings
V. Summary and Conclusions
   A. Brief Summary of the Research Question
   B. Discussion of the Implications of the Findings
   C. Limitations
   D. Suggestions for Further Research
References
Appendices


I. Problem Investigated
A. Purpose of the study
The purpose of the study was to determine whether students' Reading Lexile levels correlate with their scores on End-of-Unit Tests in Honors Chemistry.
B. Justification of the study
I believe that students with lower Lexile reading levels may be less successful on tests written at higher Lexile levels, and that they would benefit from the same content being presented on tests written at Lexile levels that more closely align with their measured reading levels.
C. Research question and hypothesis
1. Question: Do the scores of 11th grade students in an Honors Chemistry course, on tests created using test banks provided by the state-adopted book publisher at given Lexile levels, correlate with the students' Lexile scores as determined through standardized MAP testing?
2. Hypothesis: There will be a direct correlation between student Lexile scores and scores on tests built from the publisher-provided test banks at given Lexile levels: the higher the student's Lexile score, the higher the test score, and the lower the Lexile score, the lower the test score.

D. Definition of terms
Lexile Level - A unit of measurement used when determining the difficulty of text and the reading level of readers. A Lexile is equivalent to 1/1000th of the difference between the comprehensibility of basal primers (the midpoint of first-grade text) and the comprehensibility of an electronic encyclopedia (the midpoint of workplace text).
MAP testing - Measures of Academic Progress, a series of tests developed by the Northwest Evaluation Association (NWEA) that measure children's general knowledge in reading, math, and science.
E. Brief overview of the study
Students in South Carolina high schools have been required to complete End-of-Course tests in various content areas as part of their requirements for graduation. As a teacher who may one day be evaluated on the basis of student performance on such tests, I have been concerned about the relationship between students' reading abilities and their performance on these tests. The state-adopted textbooks for various courses claim to be written to the state standards, and they come with teacher resources that include a testing software program called ExamView Pro. The research question I wanted to ask for the 11th grade Honors Chemistry classes that I teach is: What is the correlation between students' scores on tests created from the publisher-provided test banks at given Lexile levels and their Lexile scores as determined through standardized MAP testing, regardless of content standard? The hypothesis is that there will be a direct correlation between student Lexile scores and test scores: the higher the student's Lexile score, the higher the test score, and the lower the Lexile score, the lower the test score.


II. Background and Review of Related Literature
A. Theory
In descriptive learning theory, Benjamin Bloom devised what is today called Bloom's taxonomy. This taxonomy describes three educational objective domains that progress through increasing levels of dependence on previously acquired knowledge. It can be said that as children grow and learn, their depth of knowledge increases; there is a natural progression toward higher-order thinking as a child grows, and higher-order thinking skills can also be deliberately developed. As students increase their reading ability, their comprehension and evaluation of written content increase as well. Therefore, it may be reasonable to assume that students with higher reading levels can understand and comprehend the content they read to a greater degree than students with lower reading levels.

B. Studies directly related
Little empirical evidence suggested that the independent reading abilities of students enrolled in biology predicted their performance on the Biology I Graduation End-of-Course Assessment (ECA). An archival study was conducted at one urban public high school in Indianapolis, Indiana, by examining existing educational assessment data to test whether a relationship existed between reading proficiency and student performance on the Biology I ECA. The Pearson product-moment correlation coefficient was r = 0.712 (p < 0.01); this strong positive relationship between Biology I ECA and Lexile reading scores accounted for 50.7% of the variance. The results suggested that any measure to increase reading levels would increase standardized biology assessment scores (Allen, 2014, p. 247). Although this study was trying to predict performance on an End-of-Course Assessment, it does not rule out a relationship between reading level and reading comprehension of the material in these courses, and there may be other factors that affected student performance on the assessment. The method used for acquiring students' reading levels for the present study will be MAP testing scores. The MAP (Measures of Academic Progress), offered by the Northwest Evaluation Association (NWEA), is based on the Killion Model, which is a "student-focused, data-driven, multi-step process" (Bach, Thul, & Foord, 2004, p. 28).
C. Studies tangentially related
Assessment of the Lexile Framework: The panel affirmed the value of both sentence length and word frequency as overall measures of semantic and syntactic complexity, although participants diverged on whether these constructs were best viewed as proxies or as direct measures; they can in fact be viewed as both. Although most efforts to establish readability measures use similar indicators for semantic and syntactic complexity, the Lexile Framework (LF), in the words of one panel member's report, appears "exceptional in the psychometric care with which it has been developed; the extent of its formal validation with different populations of texts, tests, and children; in its automation; and in its developers' continual quest to improve it."
Researchers in psychology, education, and reading have carried out extensive research to identify the factors that underlie reading comprehension and difficulty. This research has not, however, led to indicators that improve substantially upon those in the LF in power or practical utility. A number of concerns remain, or areas in which further research is needed. However, some potential areas of application for the LF, with regard to student assessments of interest to the Center, can be contemplated.
In an article by Hiebert (2012, p. 13), she explains that "quantitative systems can contribute to our understandings of what makes a text complex. As with any data analysis, however, human beings need to interpret the results. Data need to be viewed from lenses such as the purpose of reading, the type of text, and the nature of readers and their backgrounds." I think it is important that teachers read, analyze, and interpret Reading Lexile results in order to better meet the needs of our students. In another article, Marino (2010, p. 40) explains that struggling readers' comprehension of scientific concepts and vocabulary could be increased when readability levels were altered using a technology-based science curriculum. The readability levels were altered through enhancements such as pictorial representations, video captions, and interactive tutorials. His findings indicated that students with severe reading difficulties actually scored similarly to students with higher reading ability.

III. Procedures
A. Description of the research design
The question for my study is: Do the scores of 11th grade students in an Honors Chemistry course, on tests created using test banks provided by the state-adopted book publisher at given Lexile levels, correlate with the students' Lexile scores as determined through standardized MAP testing? Data will be collected from the administration of the Measures of Academic Progress (MAP) and from the Unit Tests prepared using the teacher resources from the text publisher for the course in question. I will collect the winter MAP RIT-to-Reading-Lexile scores as a means of comparing reading levels with the assessments made by the Unit Tests.
This study will be conducted using my own eleventh grade students enrolled in Honors Chemistry. I will use the MAP RIT-to-Reading-Lexile scores administered in the winter of 2014 and the scores from Unit Tests prepared using the publisher-provided software program ExamView Pro and the test banks associated with the state-adopted textbook. First, I will record the winter RIT-to-Reading-Lexile scores, which will allow me to look at my students' reading levels. Then, for my analysis, I will compare the reading levels to the scores earned on one of the Unit Tests built from publisher-provided questions.
The student problems that may threaten the validity of the test results are as follows. I am concerned with students missing instruction, because I already have two test subjects who are in danger of missing too many days of school. Also, some students may have had some chemistry in a physical science class, while others may have skipped Physical Science, since it is not mandatory. Another problem is that some students may already have either a liking or a dislike for science. The final issue I foresee is that student learning occurs on a daily basis: students who are not tested on the same day as others will have more time to be instructed on, digest, and understand the material being tested.
B. Description of the sample
My intended sample consists of twenty-three honors students at Myrtle Beach High School. The sample is a convenience sample, taken from the students already in place in my Honors Chemistry classroom: the students who come to this class during their ninety-minute third block, five days a week, for the entire semester, which is half of the 2014-2015 school year. No outside resources are needed to obtain this sample data.
The demographics are as follows:
1. Age range: 15-17 years old
2. Sex distribution: three boys, twenty girls
3. Ethnic breakdown: two African American, two Asian, one Hispanic, and eighteen Caucasian
4. Location: Honors Chemistry classroom
C. Description of instruments
The instruments I plan to use to measure my variables are standardized achievement tests and tests provided by the book publisher for Honors Chemistry. I plan to use existing instruments: the MAP (Measures of Academic Progress) RIT-to-Reading scores, and tests created using the ExamView test banks provided with Holt, Rinehart, and Winston's Modern Chemistry. Both of these instruments are self-grading. The MAP test is a proven reliable and valid test used in education to measure academic progress for students at every grade level. The ExamView test bank questions are based on the chapter content of the Modern Chemistry textbook, which is aligned to the South Carolina Science Standards, which in turn are correlated to the National Science Standards.

D. Description of the procedures
All twenty-three of my honors chemistry students take the Measures of Academic Progress (MAP) test each year. I will collect the scores of students who have been administered the Unit Test created using the state-approved textbook correlated to the state standards. I will then compare the students' RIT-to-Reading-Lexile scores to the scores received on the Unit Test and construct a graph that compares the two data sets.
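The graph-and-trendline comparison described above amounts to a least-squares fit of test scores against Lexile levels. A minimal sketch follows; the function name is my own, and the four (Lexile, test score) pairs are illustrative values in the ranges reported in Table 1, not the full data set.

```python
def least_squares(xs, ys):
    """Slope and intercept of the best-fit line y = m*x + b
    through the paired observations (xs[i], ys[i])."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Sums of cross-deviations and squared x-deviations
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    m = sxy / sxx
    b = mean_y - m * mean_x
    return m, b

# Illustrative (Reading Lexile, Unit Test score) pairs:
pairs = [(943, 79), (1177, 76), (1357, 81), (1645, 90)]
m, b = least_squares([p[0] for p in pairs], [p[1] for p in pairs])
```

A positive slope m on the full data set is what produces the upward trendline discussed in the Findings.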

E. Discussion of internal validity

The MAP RIT-to-Reading scores, which provide Reading Lexiles, are my independent variable, and the achievement scores on the tests prepared from the publisher-provided test banks are my dependent variable. Both variables are quantitative data. My question for this research paper is: Do the scores of 11th grade students in an Honors Chemistry course, on tests created using test banks provided by the state-adopted book publisher at given Lexile levels, correlate with the students' Lexile scores as determined through standardized MAP testing? My concerns about this study's internal validity have to do with mortality, history, attitude, and maturation. I am concerned with mortality because I already have two test subjects who are in danger of missing too many days of school; if their absences do not exceed the allowed number of days for the class, the effect will be moderate, and students failing to meet the attendance requirements will be dropped from the data. My concern with history is that some students may have had some chemistry in a physical science class, while others may have skipped Physical Science, since it is not mandatory; I will control for this by dropping from the data analysis any student who has not taken Physical Science. My concern with attitude is that some students may already have either a liking or a dislike for science, which may affect performance; I will control for this by having test subjects complete a survey on their attitudes toward science to see whether there are strong feelings either way. My concern with maturation is that student learning occurs on a daily basis: students who are not tested on the same day as others will have more time to be instructed on, digest, and understand the material being tested. I will control for this by making sure that each student is tested on the same day, so that no one has the advantage of taking the test later than other students.
F. Discussion of external validity
The external validity can be ensured through further investigation by comparing
Honors Chemistry students in other schools and districts that collect data on
Reading Lexile Levels. The target population could be expanded to include the
other Honors Chemistry Students within Myrtle Beach High School.

G. Description and justification of the statistical techniques
The relationship I am hypothesizing is that the students' scores on the textbook-provided test questions depend on the Reading Lexile scores of the 11th grade students in an honors chemistry course. These are quantitative data whose relationship can be described with a correlation coefficient. The appropriate inference technique for my study is a t-test for r, which tests whether a correlation coefficient calculated on sample data is statistically significant. I would use this parametric technique because I will be conducting a statistical analysis of two sets of quantitative data. I would perform a significance test because I want to determine whether the relationship is statistically significant and to rule out the possibility that it occurred by chance. I would not calculate a confidence interval because I am not collecting data on all 11th graders, only the ones in my classroom. The sample used in my study is a non-randomly chosen convenience sample. The sample size is adequate for a correlational study and is made up of all of the current students who fit the test criteria. This type of sample places the following limitations on my use of inferential statistics: the results may not be generalizable to all 11th grade honors chemistry students in this state, the country, or the world. Also, the mean for each class may differ, and combining two different classes, each with its own biases and sample differences, could create overall differences that limit the inferential statistics.
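A minimal sketch of the t-test for r mentioned above, using the standard formula t = r·sqrt(n − 2)/sqrt(1 − r²); the helper name and the example r value are illustrative, not the study's actual analysis.

```python
import math

def t_test_for_r(r, n):
    """t statistic and degrees of freedom for testing whether a
    sample correlation r from n paired observations differs from zero."""
    df = n - 2
    t = r * math.sqrt(df) / math.sqrt(1 - r * r)
    return t, df

# Illustrative call with the study's sample size (n = 23):
t, df = t_test_for_r(0.5, 23)
```

The computed t is then compared against the critical value from a t table at df = n − 2; a |t| beyond the critical value means the correlation is unlikely to have arisen by chance.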
IV. Findings
The upward slope of the trendline in Figure 1 (Unit 1 Test Scores vs. Reading Lexile Levels) suggests a positive correlation between the Reading Lexile scores and the Unit Test scores. However, the coefficient of determination, R² = 0.1391, does not demonstrate a significant correlation between the Reading Lexiles and the Test Scores.
In Figure 2 (t-Test: Two-Sample Assuming Unequal Variances), the t Stat exceeds the two-tail t Critical value. According to this hypothesis test, the null hypothesis can be rejected, and the alternate hypothesis must be considered, which says there is a difference between the two instruments, Reading Lexiles and Test Scores (no correlation).
More study will need to be done, since a random sample was not used.
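As a rough consistency check (not part of the original analysis): the reported R² = 0.1391 with n = 23 implies r ≈ 0.373, and applying the t-test for r to those values, with the standard two-tailed critical value of about 2.080 for df = 21 at α = .05, also fails to reach significance.

```python
import math

r2 = 0.1391   # coefficient of determination reported in Figure 1
n = 23        # paired observations in the sample

r = math.sqrt(r2)                  # implied Pearson r (about 0.373)
df = n - 2
t = r * math.sqrt(df) / math.sqrt(1 - r2)
t_crit = 2.080                     # t table, df = 21, alpha = .05, two-tailed
significant = abs(t) > t_crit      # False: r does not reach significance
```

This agrees with the conclusion above that the data do not demonstrate a significant correlation.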

V. Summary and Conclusions
A. Brief summary of the research question

In this study I wanted to know whether the scores of students in an 11th grade Honors Chemistry course, on tests created using test banks provided by the book publisher at given Lexile levels, correlate with the students' Lexile scores as determined through standardized MAP testing, regardless of content. My concern was that the publisher-provided test questions I was administering might exceed the reading level of some of the students taking these tests, negatively affecting their test scores. The independent variables are the Lexile levels of the students and the textbook-provided test questions; the dependent variable is the students' scores on those test questions. The Lexile scores were obtained using the Enrich program, which provides the results of student MAP testing.

B. Discussion of the implications of the findings
In the graph comparing the Reading Lexiles and Unit Test scores, the upward slope of the trendline suggests a positive correlation between the Reading Lexile scores and the Unit Test scores. The trend is slight and there are many outliers, but it does indicate that further investigation may be warranted. If reading ability gives students an advantage when taking standardized tests, then greater emphasis should be placed on goals for, and improvement of, reading levels. Also, test questions should be analyzed so that they do not discriminate against slower readers or students with lower reading levels.

C. Limitations

Some of the limitations of this study include the small sample size; universally generalized statements concerning student reading levels and test success cannot be extrapolated from these data, and a much larger data set would need to be analyzed. Also, one factor I did not take into account was academic integrity. I did not consider that some students may have cheated on the test, which would have produced false data. If lower-level readers felt helpless or desperate enough to cheat, their scores may have been higher than they actually should have been.
D. Suggestions for Further Research
Further research could include expanding the sample to a greater number of students. Further investigation could also analyze the actual test questions for their written Lexile level. Finally, a comparison between honors and non-honors students would expand the data set and help determine whether the observed trends hold only for honors students or for general education students as well.


References
Allen, D. A. (2014). A Test of the Relationship between Reading Ability & Standardized
Biology Assessment Scores. American Biology Teacher (University Of California
Press), 76(4), 247-251. doi:10.1525/abt.2014.76.4.6
Bach, J. V., Thul, N. G., & Foord, K. A. (2004). Tests that Inform. Principal Leadership,
4(6), 28-31.
Chadwick, R., & Walters, H. (1975). Reading relationship study: Success/failure rate of first semester college students identified as poor readers. Sacramento, CA: Cosumnes River College.
Hiebert, E. H. (2012). The Common Core State Standards and Text Complexity. Teacher
Librarian, 39(5), 13.
Marino, M., Coyne, M., & Dunn, M. (2010). The Effect of Technology-based Altered
Readability Levels On Struggling Readers' Science Comprehension. Journal Of
Computers In Mathematics & Science Teaching, 29(1), 31-49.


Appendices

[Figure 1: Unit 1 Test Scores vs. Reading Lexile Levels. Scatter plot of Unit 1 Test Scores (y-axis, 0-120) against Reading Lexile Levels (x-axis, 0-1800), with trendline y = 0.0209x + 55.268, R² = 0.1391.]


Figure 2: t-Test: Two-Sample Assuming Unequal Variances

                                Variable 1    Variable 2
Mean                            1248.217      81.34783
Variance                        30322.81      95.14625
Observations                    23            23
Hypothesized Mean Difference
df                              22
t Stat                          32.08641
P(T<=t) one-tail                2.85E-20
t Critical one-tail             1.717144
P(T<=t) two-tail                5.71E-20
t Critical two-tail             2.073873


TABLE 1: Full Data Set

Last Name     First Name    MAP RIT to Reading (Lexile)   MAP Math RIT   Unit 1 Test
McElheny      John          1645                          255            90
Good          Veronica      1555                          269            93
Garris        Rebecca       1501                          254            99
McKnight      Amelia        1393                          245            87
Minor         Dominique     1375                          243            79
Rowan         Allison       1357                          245            81
Tabib         Yuval         1357                          240            70
Wylie         Kathryn       1303                          253            84
Lewis         Mary          1285                          251            90
Wideman       Shelby        1285                          263            73
Buchanan      Angus         1267                          239            76
Mitchell      Taylor        1249                          238            61
Godbold       Emily-Anne    1195                          256            96
Sincavitch    Carly         1177                          259            96
Slabinski     Magdalene     1177                          241            76
Knox          Shantelle     1159                          255            76
Hinde         Riley         1123                          243            70
Pham          Angel         1087                          250            81
Sessions      Joyce         1087                          246            76
Garcia        Jarely        1069                          257            87
Thurman       Scarlett      1069                          242            70
Dodds         Tatum         1051                          253            81
Hood          Ethan         943                           238            79
Heredia

        MAP RIT to Reading (Lexile)   MAP Math RIT
High    1645                          269
Low     943                           238
Avg     1138                          249

R = 0.3730