
Summative Evaluation Report (SER)

Takiela Langley, CIED 7603, Fall 2017

Part 1: Online Classroom Impact Overview

The title of my learning module was Reading Comprehension Strategies and Practice.
The student participants were a group of eight 7- to 8-year-old 2nd grade students from a Title 1
elementary school, Altama Elementary School. The participants were 87.5% African American
and 12.5% Hispanic. The students were identified by their teacher as reading below grade level
or displaying reading comprehension deficiencies. The goal of the learning module was for the
students to be able to accurately answer such questions as who, what, where, when, why, and
how (when texts were read aloud). Out of 17 activities (excluding chat and discussion forums)
built into the module, 14 were assessment activities. Each of the assessment activities included
the YouTube version of a Pete the Cat book for students to view in order to provide
responses to the questions. The YouTube videos lasted no more than five minutes, and there
were no more than six questions on any of the assessments. Each assessment took the
participant 10-15 minutes on average to complete.

I honestly cannot provide evidence that I effectively assessed my students' learning.
There are two reasons for this. The first reason is a flaw on my behalf in the way that I collected
my data. I did not include a name field on my assessment forms, so I had to resort to matching
up the timestamps from my module platform, Moodle, to my Google assessment forms to
identify the participants. Because of that omission, the data that I collected from my
pre/post-tests was essentially useless. I could not determine which pre-
tests matched to which post-tests. The other reason that I cannot provide evidence of effective
assessment is that my mentor did not consistently put all eight students on my learning module
as agreed upon. Some students did some assessments, while others did not. Some students
took the pre-test, but not the post-test. In an attempt to move as many of the eight
students through my course as possible, I started to bring them into my classroom and have
them login to the learning module. This created another issue because some students took the
same assessment more than once. Out of the 12 assessments (not including the pre/post tests),
6 had a participant average score of less than 80%. The inconsistency issues
made my test scores and data unreliable.
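
For illustration, the timestamp-matching fallback described above amounts to pairing each Google Forms response with the Moodle log entry closest to it in time. The sketch below shows the idea in Python; the file names, column names, and timestamp formats are placeholders, not my actual exports:

```python
import csv
from datetime import datetime, timedelta

# Assumed window between a form submission and its Moodle log entry.
TOLERANCE = timedelta(minutes=5)

def load_rows(path, time_col, fmt):
    """Read a CSV export and parse its timestamp column into datetimes."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["_ts"] = datetime.strptime(row[time_col], fmt)
    return rows

# Hypothetical exports and column names -- not my actual files.
moodle = load_rows("moodle_log.csv", "Time", "%m/%d/%y %H:%M")
forms = load_rows("form_responses.csv", "Timestamp", "%m/%d/%Y %H:%M:%S")

for response in forms:
    # Pair each form response with the Moodle log entry closest in time.
    nearest = min(moodle, key=lambda e: abs(e["_ts"] - response["_ts"]))
    if abs(nearest["_ts"] - response["_ts"]) <= TOLERANCE:
        print(response["Timestamp"], "->", nearest.get("User full name", "?"))
    else:
        print(response["Timestamp"], "-> no confident match")
```

Even with a tolerance window, this approach can only identify who was logged in near a submission time; it cannot recover which pre-test belongs with which post-test, which is why the missing name field mattered so much.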

During the planning phase of my PPLM, I consulted my mentor teacher. I told her that I
wanted to create a reading comprehension course, because I knew that a large segment of our
student population struggled with reading comprehension. She suggested eight of her students
that she thought would benefit from a learning module on reading comprehension. I obtained
the students' current reading levels and incorporated a series of children's books (Pete the Cat)
already introduced to their grade level. I also made accommodations for the students by
including voice-overs for all texts, in case they could not fluently read them. I knew that the
eight students had already been exposed to key ideas and story elements because they were
using the same reading curriculum that my kindergarten students were using, Lucy Calkins'
Readers Workshop. I used all of those factors to construct and plan for my learning module.
That is the evidence that I effectively planned and implemented instruction that
incorporated knowledge of the students' skills, concepts, ability levels, and prior
experiences.

I included a total of 14 assessments over the 3 course sections of my learning module:
pre/post assessments, assessments following the introduction of a learning
strategy/concept, and assessments at the end of each course section.

I implemented the following evidence-based practices in my online learning
module: screencasts, practice, and pre-assessment. I used a screencast to show
students how to navigate the online learning module. This was important because what
was apparent to me as the instructional designer may not have been apparent to
the students. I gave the students extensive practice over a span of 3 course sections. I
introduced a strategy/concept and then gave the students practice using the
strategy/concept before they were summatively assessed. I provided a pre-assessment
for the students via a Google Forms document. According to the articles referenced
below, I implemented 3 evidence-based/best practices.

Killian, S. (2013, November 21). Top 10 Evidence Based Teaching Strategies.
Retrieved from http://www.evidencebasedteaching.org.au/evidence-based-teaching-
strategies/

McTighe, J., & O'Connor, K. (2005, November). Seven Practices for Effective Learning.
Educational Leadership, 63. Retrieved from
http://www.ascd.org/publications/educational-leadership.aspx

Ruffini, M. (2012, October 31). Screencasting to Engage Learning. Retrieved from
https://er.educause.edu/articles/2012/11/screencasting-to-engage-learning

Part 2: Learner Assessments

All of my assessments were Google Forms. The assessments are hyperlinked below. I set
access so that anyone with these links would be able to view the forms. Please let me know if,
for some reason, you cannot view a form.
1. Pre/Post Tests
2. Who are the characters (Pete the Cat: I Love My White Shoes)?
3. Who are the characters (Pete the Cat's Train Trip)?
4. Who are the characters (Pete's Big Lunch)?
5. What is the setting (Pete the Cat: I Love My White Shoes)?
6. What is the setting (Pete the Cat's Train Trip)?
7. What is the setting (Pete's Big Lunch)?
8. What is the main idea (Pete the Cat: I Love My White Shoes)?
9. What is the main idea (Pete the Cat's Train Trip)?
10. What is the main idea (Pete's Big Lunch)?
11. Key Ideas & Details Questions (Pete the Cat: I Love My White Shoes)
12. Key Ideas & Details Questions (Pete the Cat's Train Trip)
13. Key Ideas & Details Questions (Pete's Big Lunch)

One point was given for each correct answer. On the questions that asked students to name
the characters, half a point was given if the student named at least one of the multiple
characters from the story. Google Forms automatically scored all questions except the
short-answer questions, which I scored myself, assigning the appropriate point values.
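
To make the rubric concrete, the scoring logic amounts to the small sketch below (a simplified illustration in Python; the question data and answers are made up, not items from my actual forms):

```python
def score_item(question_type, correct_answers, student_answer):
    """Score one assessment item under the rubric above: 1 point for a
    correct answer; on name-the-characters items, 0.5 points when at
    least one (but not all) of the characters is named."""
    named = {a.strip().lower() for a in student_answer.split(",")}
    correct = {a.lower() for a in correct_answers}
    if question_type == "characters":
        if correct <= named:   # every character named: full credit
            return 1.0
        if correct & named:    # at least one character named: half credit
            return 0.5
        return 0.0
    # All other auto-scored items: an exact match earns the point.
    return 1.0 if student_answer.strip().lower() in correct else 0.0

# Hypothetical example: a characters item with two correct answers.
print(score_item("characters", ["Pete", "Grandma"], "Pete"))           # 0.5
print(score_item("characters", ["Pete", "Grandma"], "Pete, Grandma"))  # 1.0
```

The half-point branch mirrors the partial credit described above for naming at least one of a story's multiple characters.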

Part 3: Pre-test and Post-test Data

Table 1: Assessment Overview

Name of Assessment/Topic of the P-12 Practicum Learning Module: Reading Comprehension Strategies and Practice
Name of School where the online P-12 Practicum Learning Module was implemented: Altama Elementary School
Student Demographic: 2nd grade reading students
No. of students taught: 8
Length of Unit of Instruction: 3 weeks
Pre/Post Assessment Used: Google form for both the written pretest and written posttest

The information above applies to each of the assessments in the module:

1. Pre/Post Tests
2. Who are the characters (Pete the Cat: I Love My White Shoes)?
3. Who are the characters (Pete the Cat's Train Trip)?
4. Who are the characters (Pete's Big Lunch)?
5. What is the setting (Pete the Cat: I Love My White Shoes)?
6. What is the setting (Pete the Cat's Train Trip)?
7. What is the setting (Pete's Big Lunch)?
8. What is the main idea (Pete the Cat: I Love My White Shoes)?
9. What is the main idea (Pete the Cat's Train Trip)?
10. What is the main idea (Pete's Big Lunch)?
11. Key Ideas & Details Questions (Pete the Cat: I Love My White Shoes)
12. Key Ideas & Details Questions (Pete the Cat's Train Trip)
13. Key Ideas & Details Questions (Pete's Big Lunch)

Table 2: Assessment Individual Data

Student Identifier (no names)    Pretest Score (%)    Posttest Score (%)
Student JB                       ---                  66.67
Student JD                       ---                  33.33
Student MH                       ---                  66.67
Student LK                       ---                  ---
Student JL                       ---                  66.67
Student PR                       ---                  50.00
Student AT                       ---                  66.67
Student AW                       ---                  ---

Table 3: Assessment Group Data


Group Pretest Mean Score (%)    Group Posttest Mean Score (%)    Percent Change (+ or - %)
---                             58.30                            ---

Part 4: Analysis and Interpretation of Data

I created all of the assessments using Google Forms. I planned for the assessments to be
scored against the answer key that I created, with me manually assigning scores to the
short-answer items. I embedded the assessments within each module assessment
section so that the students never had to leave the learning module. I planned to compare the
pre-test to the post-test scores. I used a Google Sheets add-on called Flubaroo, which has the
capability to create a spreadsheet of test responses while itemizing the percent correct for each
test item. The criterion that I used to determine the success of the instructional unit was the
assessment scores, excluding the pre/post tests.
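
For readers unfamiliar with Flubaroo, the two summaries I relied on can be reproduced by hand. The sketch below is an approximation, assuming the graded responses are exported as a CSV grid holding only item columns of 1/0 scores (the layout and column names are hypothetical); it is not Flubaroo's actual implementation:

```python
import csv

def item_percent_correct(path):
    """Percent of students answering each item correctly, from a CSV
    that holds only item columns of 1/0 scores (hypothetical layout)."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return {item: 100 * sum(int(r[item]) for r in rows) / len(rows)
            for item in rows[0]}

def percent_change(pre_mean, post_mean):
    """Percent change from the pretest mean to the posttest mean, as
    reported in the last column of Table 3."""
    return 100 * (post_mean - pre_mean) / pre_mean

# Illustrative numbers only: my actual pretest mean was unrecoverable.
print(percent_change(50.0, 58.3))  # about +16.6
```

With a usable pretest mean, the percent_change calculation would have filled the last column of Table 3.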

Part 5: Recommendations for Revisions

During the analysis phase, I realized that I made a major mistake. I used the same form
for my pre/post tests, and I did not include a field for the student's name. I should have
duplicated the pre/post tests, designated one as the pre-test and the other as the post-test,
and included a name field on all assessments. This mistake left me with two sets of pre/post
tests that I could not match up. I didn't know which pre-tests belonged with which post-tests,
so I couldn't measure any possible growth. After I discovered this blunder, I went back into the
Google form and added a field for the name, but it was too late. There was no time for students
to retake the assessments. There was also no way for me to add the students' names to each
form, nor was there any way to know which name would have needed to be added to which
form. This was a major oversight on my behalf.

Another major issue that affected my data was student participation inconsistencies. My
mentor teacher did not fulfill her commitment to put her students on my learning
module as we had agreed upon in the beginning. The students did not begin the module on
time, so the original timeframe had to be extended by one week. Once I analyzed the activity
log from my module platform for the cohort of students, I realized that some students had not
logged into the module at all, and some had not logged in in quite some time. At that point, I
took a more direct approach in getting the students to log on. I began to gather the students
from my mentor teacher's class and bring them into my own classroom to log on. I had the
students cover as much content as was possible in the short timeframe, but none of the
students were able to complete the learning module. I was not able to further extend the
learning module's timeframe because doing so would have put me behind in my coursework.
The inconsistencies in student participation led to inconsistencies in my data, which made the
data unreliable.

I included course objectives and standards on the front page of the learning module, but
I should have restated related objectives, in student-friendly language, at the beginning of each
module section. This would have reinforced and made clear the reason that the learning
strategies and practice were important for students to learn. I also should have created the
Google assessments such that specific feedback was provided for all assessments immediately
following students' response submissions, except for the pre/post tests and key ideas and
details questions. That would have also made my module less assessment-heavy and more
instructionally beneficial to the students. I also regret not bringing the students into my
classroom from the beginning. I feel as though I would have been better able to adhere to the
original timeline and make sure that they were moving through the course activities as
designed. In hindsight, making all of these revisions would have made my module more
successful and beneficial to the students.
