
NEW NORTHERN MINDANAO COLLEGES, INC.

Cabadbaran City, Agusan del Norte



GRADUATE SCHOOL



Report


in



EDUC 100
(EDUCATIONAL RESEARCH)




THE INTERPRETATION OF DATA





Submitted by:


GLEMAR S. MONSALES





Submitted to:



DR. VICTORIA B. PABIA




INTRODUCTION



The process of interpretation is essentially one of stating what the findings show.
What do they mean? What is their significance? What is the answer to the original
problem? This process calls for a critical examination of the results of one's analysis
in the light of his previous analyses concerning the gathering of data. That is, all of
the limitations of his data-gathering must enter into and become a part of his
conclusions, which grow out of his interpretations of the results. This step is as
important as any other step in the research study. It is almost purely subjective, and
many errors are made at this point. If, however, one is careful and critical of his own
thinking, he should be able to make satisfactory interpretations.


- Good, Barr, and Scates, 1941


Data analysis entails that the analyst break down data into constituent parts to
obtain answers to research questions and to test hypotheses. The analysis of research
data does not, on its own, provide the answers to research questions.

The purpose of interpreting data is to reduce them to an intelligible and interpretable
form so that the relations within research problems can be studied and tested, and
conclusions drawn. On the other hand, when the researcher interprets the research
results, he or she studies them for their meaning and implications.













THE INTERPRETATION OF DATA


Interpretation means

An adequate exposition of the true meaning of the material presented in terms of
the purposes of the study.
It is also a process of assigning meaning to the collected information and
determining the conclusions, significance, and implications of the findings. The
steps involved in data analysis are a function of the type of information collected;
however, returning to the purpose of the assessment and the assessment
questions will provide a structure for the organization of the data and a focus for
the analysis.


Reasons why we interpret data

It throws light on the real significance of the material in its context.
It gives a wider implication of the data.
It constitutes conclusive data that show the greatest values and important
generalizations.
It sets up objectives that include meanings and conclusions.

Phases of Interpretation

Deductive Phase - the researcher applies the process of deduction.
Inductive Phase - the formulation of generalizations or principles that may
substantiate hypotheses.


Three Modes in Presenting Data
Before the data can be interpreted, they must first be presented in one of three modes:

Textual mode - embraces the discussion and analysis of the data.
Tabular mode - used to present the data of the study through tables.
Graphic mode - data are presented through graphs, charts, and other devices.



Classification of Data
- In analyzing the characteristics of a large group, data may be classified as:

1. Homogeneous - no breakdown into subgroups
2. Dichotomous - twofold categories


Sorting and Tabulating Data

Tabulation - the process of transferring data from the data-gathering instrument to
the tabular form in which they may be systematically evaluated.

Hand Sorting, Hand Recording, and Hand Tabulation - this method is recommended in
tabulation to save time and to ensure greater accuracy.


Tables and Figures
- Help the researcher to see the similarities and relationships of the data.

Table - a systematic method of presenting statistical data in vertical columns and
horizontal rows, according to some classification of subject matter.



Rules for the Handling of Tables
- Good tables are relatively simple, concentrating on a limited number of ideas.
- Text references should identify tables by number.
- Tables should not exceed the page size.
- The word TABLE is centered between the page margins and typed in capital
letters, followed by the table number in capital Roman numerals or in Arabic numerals.
- The top of the table is placed three spaces below the last line of the title.
- Numerical data are usually arranged in descending order.
- Decimal points should always be aligned in a column.

Figure - a device that presents statistical data in graphic form. Figures include graphs,
charts, drawings, diagrams, maps, photographs, blueprints, and other computer
printouts.



Characteristics of Good Figures

- The title should clearly describe the nature of the data.
- Figures should be simple enough to convey a clear idea.
- Numerical data should be presented.
- Data should be presented carefully and correctly.
- Figures should be used sparingly.
- Figures are numbered with Arabic numerals.
- The title of the figure is placed below it.




STATISTICAL TECHNIQUES USED IN ANALYZING DATA

Statistics - defined as a body of mathematical techniques or processes for gathering,
organizing, analyzing, and interpreting numerical data.


Descriptive Statistics
- Summarize the data collected from a sample. Whether a research study is a survey
or an experiment, it is essential to describe what was observed, so we need a means
of summarizing some of the basic characteristics of the observations.


Three Aspects of Descriptive Statistics

A. Frequency Distribution
- Any listing of a set of classes (e.g., test scores) and the frequency of occurrence
in each class (e.g., the number of students who made that score).
- A first step in summarizing and describing data. It removes the names of the
subjects and provides a way of grouping the measurements. Whether the class
category is a measurement of intelligence, anxiety, or reaction time, we can
create a frequency distribution to show how many observations fall in each
class.
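As an illustration, a frequency distribution can be built with a few lines of Python; the test scores below are hypothetical values chosen only for the example.

```python
from collections import Counter

# Hypothetical test scores for a class of 12 students
scores = [85, 90, 85, 70, 90, 85, 60, 70, 85, 90, 60, 70]

# Map each class (score) to its frequency (number of students with that score)
freq = Counter(scores)

# Print the distribution with the scores in descending order,
# using a row of asterisks as a crude text bar graph
for score in sorted(freq, reverse=True):
    print(score, "*" * freq[score])
```

Each row shows one class and how many observations fall in it, which is exactly the grouping a frequency distribution provides.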


Ways in presenting frequency distribution


1. Histogram or Bar Graph - a graphical representation of the distribution of the
data.

















2. Frequency Polygon or curve - A graph made by joining the middle-top points
of the columns of a frequency histogram.












The frequency distribution can be described as symmetric or skewed. In a
symmetric distribution, the mean and median are identical. In a skewed
distribution, one tail of the distribution is disproportionately longer than the
other. In a positively skewed distribution, most of the scores are at the low end of
the scale, and few scores are at the high end. In a negatively skewed distribution,
most of the scores are at the high end of the scale, and few scores are at the low end.
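The effect of skewness on the mean and median can be checked directly with Python's statistics module; the scores below are invented so that the distribution is positively skewed.

```python
import statistics

# Hypothetical positively skewed scores: most values at the low end,
# one value far out in the long right tail
skewed = [2, 3, 3, 4, 4, 5, 20]

mean = statistics.mean(skewed)      # pulled toward the long tail
median = statistics.median(skewed)  # unaffected by the extreme value

print(mean, median)  # in a positively skewed distribution the mean exceeds the median
```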

B. Measures of Central Tendency
- One way of identifying the typical, most likely value in a group of scores.
- Measures of central tendency do not summarize a distribution of scores completely.

1. Mode - is the value that appears most often in a set of data
2. Median - is the numerical value separating the higher half of a data sample, a
population, or a probability distribution, from the lower half.
3. Mean - is the sum of a collection of numbers divided by the number of
numbers in the collection. The collection is often a set of results of an
experiment, or a set of results from a survey.
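All three measures are available in Python's standard statistics module, as this small sketch with made-up scores shows.

```python
import statistics

# Hypothetical set of seven test scores
scores = [70, 75, 80, 80, 85, 90, 95]

print(statistics.mode(scores))    # 80: the value that appears most often
print(statistics.median(scores))  # 80: the middle score of the sorted list
print(statistics.mean(scores))    # the sum of the scores divided by their count
```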

C. Measures of Variability
- Describe whether the scores cluster together or spread across a wide range of the
distribution.

Methods of describing variability in a distribution

1. Range
- A crude measure of variability.
- the distance between the highest and the lowest score.

2. Standard Deviation
- A stable measure of variability.
- Based on the distance of every score from the mean: the larger the standard
deviation, the greater the average distance of the scores from the mean.
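Both measures can be computed in a few lines of Python; the scores are hypothetical, and statistics.stdev gives the sample standard deviation.

```python
import statistics

# Hypothetical scores from one group of examinees
scores = [60, 70, 75, 80, 95]

# Range: the crude measure, the distance between highest and lowest score
value_range = max(scores) - min(scores)

# Standard deviation: the stable measure, based on every score's
# distance from the mean (sample standard deviation here)
std_dev = statistics.stdev(scores)

print(value_range)        # 35
print(round(std_dev, 2))  # 12.94
```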


Correlation Coefficient
- Assesses the strength of a correlation (the degree and direction of the relation
between two variables).
- A number ranging from -1, which indicates a perfect negative correlation
between the two variables, through 0, which indicates no correlation, to +1,
which indicates a perfect positive correlation.
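The coefficient described above is the Pearson r, which can be computed from its definition using only the standard library; the paired values below are invented to show the two extremes.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term and the two spread terms from the definition
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # +1.0: perfect positive correlation
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))  # -1.0: perfect negative correlation
```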


Inferential Statistics
Used to determine whether the relationships between variables stated in the
hypotheses are supported by the findings of the research.

Probability
- The basic tool of inferential statistics.
- Developed as researchers and mathematicians learned to estimate reasonably
and accurately the chances that a particular event will occur.

Statistical Significance
- Allows researchers to determine exactly how small the probability is that their
results came about by chance.
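As a concrete sketch of this idea, the snippet below computes the exact chance probability of a simple outcome (at least 8 heads in 10 tosses of a fair coin); the 0.05 threshold is a conventional choice for illustration, not something fixed by the source.

```python
from math import comb

# Probability that a fair coin shows at least 8 heads in 10 tosses,
# i.e. how likely such a result is to come about by chance alone
n, k = 10, 8
p = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(round(p, 4))  # 0.0547
print(p < 0.05)     # False: not significant at the conventional 0.05 level
```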



LIMITATIONS AND SOURCES OF ERROR IN THE ANALYSIS AND
INTERPRETATION OF DATA

1. Confusing statements with facts. A common fault is the acceptance of statements
as facts. What individuals report may be a sincere expression of what they believe
to be the facts in a case, but these statements are not necessarily true.
2. Failure to recognize limitations. The very nature of research implies certain
restrictions or limitations about the group or situation described: its size, its
representativeness, and its distinctive composition. Failure to recognize these
limitations may lead to the formulation of generalizations that are not warranted
by the data collected.
3. Careless or incompetent tabulation. When one is confronted with a mass of data, it
is easy to make simple mechanical errors. Placing a tally in the wrong cell or
incorrectly totaling a set of scores can easily invalidate carefully gathered data.
4. Inappropriate statistical procedures. The application of the wrong statistical
treatment may lead to invalid conclusions. This error may result from a lack of
understanding of statistics or of the limitations inherent in a particular statistical
operation.
5. Computational errors. Since the statistical manipulation of data often involves
large numbers and many separate operations, there are many opportunities for
error. There is no way to eliminate human error completely, but the use of
mechanical or electronic tabulating devices will help to reduce it.

6. Faulty logic. This rather inclusive category may embrace a number of sources
of error in the thought processes of the researcher: invalid assumptions,
inappropriate analogies, inversion of cause and effect, confusion of a simple
relationship with causation, failure to recognize that group phenomena may not
be used indiscriminately to predict individual occurrences, failure to realize that
the whole may be greater than the sum of its parts, belief that frequency of
appearance is always a measure of importance, and other errors that lead to
inaccurate interpretation.
7. The researcher's unconscious bias. Although objectivity is the ideal of research,
few individuals achieve it completely. There is a great temptation to omit evidence
unfavorable to the hypotheses and to overemphasize favorable data.
8. Lack of imagination. The quality of creative imagination distinguishes the true
researcher from the mere compiler. Knowledge of the field of inquiry, skill in
research procedures, experience, and skill in logical thinking are qualities that
enable the resourceful researcher to see relationships leading to possible
generalizations. It is this ability to see all the implications in the data that
produces significant discoveries.


ANALYSIS OF DATA

1. The researcher must analyze his research problem carefully to see what is
necessary to provide a solution to it. He must assure himself, and be able to
satisfy those to whom he reports his study, that his method of attacking the
problem provides a crucial approach.
2. He must see that the factors he chooses for study will satisfy the conditions of the
problem.
3. He must examine his sources of data carefully to see that the factors in which he
is interested will have an opportunity to demonstrate themselves.
4. He must examine the means he expects to employ in gathering data, to see that
these means are capable of registering variations of appropriate magnitude, with
simplicity and, at the same time, with the necessary complexity.



COMPUTER RESEARCH

The electronic digital computer is a versatile and ingenious development of the
technological age which has contributed significantly to the development of complex
modern institutions such as business, finance, and government. The use of the
computer in research has a number of applications, which include the search of
related literature and the analysis of complex data. Equipped with the capability to
perform calculations at the speed of light, the computer has made complicated
research designs both possible and practicable, and has become one of the most
useful tools of research in the humanities and in the physical and behavioral sciences.


BIBLIOGRAPHY


https://oira.syr.edu/assessment/assesspp/Analyze.htm
http://en.wikipedia.org/wiki/Histogram
http://www.mathsisfun.com/definitions/frequency-polygon.html
http://en.wikipedia.org/wiki/Mode_%28statistics%29
https://en.wikipedia.org/wiki/Median
http://en.wikipedia.org/wiki/Mean
https://en.wikipedia.org/wiki/Arithmetic_mean
https://en.wikipedia.org/wiki/Statistical_inference
https://en.wikipedia.org/wiki/Descriptive_statistics

Aquino. Educational Research.
