A field of applied statistics, survey methodology studies the sampling of individual units from a population and the associated survey data collection techniques, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys.
Statistical surveys are undertaken with a view towards
making statistical inferences about the population being studied, and
this depends strongly on the survey questions used. Polls about public
opinion, public health surveys, market research surveys, government
surveys and censuses are all examples of quantitative research that use
contemporary survey methodology to answer questions about a
population. Although censuses do not include a "sample", they do
include other aspects of survey methodology, like questionnaires,
interviewers, and nonresponse follow-up techniques. Surveys provide important information for many fields of public information and research, e.g., marketing research, psychology, health care, and sociology.
Overview
A single survey is made of at least a sample (or full population in the
case of a census), a method of data collection (e.g., a questionnaire)
and individual questions or items that become data that can be
analyzed statistically. A single survey may focus on different types of
topics such as preferences (e.g., for a presidential candidate), opinions
(e.g., should abortion be legal?), behavior (smoking and alcohol use),
or factual information (e.g., income), depending on its purpose.
Since survey research is almost always based on a sample of the
population, the success of the research is dependent on the
representativeness of the sample with respect to a target population of
interest to the researcher. That target population can range from the
general population of a given country to specific groups of people
within that country, to a membership list of a professional
organization, or list of students enrolled in a school system (see
also sampling (statistics) and survey sampling).
Survey methodology as a scientific field seeks to identify principles about the sample design, data collection instruments, statistical adjustment of data, data processing, and final data analysis that can create systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost; the trade-off is sometimes framed as improving quality within cost constraints, or alternatively, reducing costs for a fixed level of quality. Survey
methodology is both a scientific field and a profession, meaning that
some professionals in the field focus on survey errors empirically and
others design surveys to reduce them. For survey designers, the task
involves making a large set of decisions about thousands of individual
features of a survey in order to improve it.[2]
Survey methodology topics
The most important methodological challenges of a survey methodologist include making decisions on how to:[2]
Identify and select potential sample members.
Contact sampled individuals and collect data from those who
are hard to reach (or reluctant to respond).
Evaluate and test questions.
Select the mode for posing questions and collecting responses.
Train and supervise interviewers (if they are involved).
Check data files for accuracy and internal consistency.
Adjust survey estimates to correct for identified errors.
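One common form of the last task, adjusting survey estimates to correct for identified errors, is post-stratification weighting: respondents in under-represented groups receive weights above 1 so the weighted sample matches known population shares. The following is a minimal sketch; the age groups, counts, and population shares are hypothetical illustration values, not from the text.

```python
# Post-stratification weighting sketch: each group's weight is the ratio of
# its known population share to its observed share in the sample.
# All group names and numbers below are hypothetical.

def poststratification_weights(sample_counts, population_shares):
    """Return a per-group weight: population share / sample share."""
    n = sum(sample_counts.values())
    return {g: population_shares[g] / (sample_counts[g] / n)
            for g in sample_counts}

sample_counts = {"18-34": 100, "35-64": 250, "65+": 150}          # respondents per group
population_shares = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}   # e.g. census shares

weights = poststratification_weights(sample_counts, population_shares)
# 18-34 respondents make up 20% of the sample but 30% of the population,
# so they are weighted up (1.5); 65+ are over-represented and weighted down.
```

Applying these weights when computing means or proportions makes the estimate behave as if the sample had the population's group composition, at the cost of increased variance for heavily up-weighted groups.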
Selecting samples
Main article: Survey sampling
Survey samples can be broadly divided into two types: probability
samples and non-probability samples. Stratified sampling is a method
of probability sampling such that sub-populations within an overall
population are identified and included in the sample selected in a
balanced way.
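Stratified sampling with proportional allocation can be sketched as follows. The sampling frame, stratum variable, and the 80/20 split below are hypothetical illustration values; real designs may also allocate disproportionately and reweight later.

```python
import random

# Stratified probability sampling sketch: partition the frame into strata,
# then draw from each stratum in proportion to its size, so every
# sub-population is represented in a balanced way.

def stratified_sample(frame, strata_key, total_n, seed=0):
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(unit[strata_key], []).append(unit)
    sample = []
    for members in strata.values():
        # proportional allocation: stratum's share of the frame times total_n
        k = round(total_n * len(members) / len(frame))
        sample.extend(rng.sample(members, k))  # without replacement
    return sample

# Hypothetical frame: 800 urban and 200 rural units
frame = ([{"id": i, "region": "urban"} for i in range(800)] +
         [{"id": i, "region": "rural"} for i in range(800, 1000)])
s = stratified_sample(frame, "region", total_n=100)
# yields 80 urban and 20 rural units, mirroring the 80/20 frame split
```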
Modes of data collection
Main article: Survey data collection
There are several ways of administering a survey. The choice between
administration modes is influenced by several factors, including
1. costs,
2. coverage of the target population,
3. flexibility of asking questions,
4. respondents' willingness to participate and
5. response accuracy.
Different methods create mode effects that change how respondents
answer, and different methods have different advantages. The most
common modes of administration can be summarized as:[3]
Telephone
Mail (post)
Online surveys
Personal in-home surveys
Personal mall or street intercept survey
Hybrids of the above.
Cross-sectional and longitudinal surveys
There is a distinction between one-time (cross-sectional) surveys,
which involve a single questionnaire or interview administered to each
sample member, and surveys which repeatedly collect information
from the same people over time. The latter are known as longitudinal
surveys. Longitudinal surveys have considerable analytical advantages
but they are also challenging to implement successfully.
Consequently, specialist methods have been developed to select
longitudinal samples, to collect data repeatedly, to keep track of
sample members over time, to keep respondents motivated to
participate, and to process and analyse longitudinal survey data.[4]
Response formats
Usually, a survey consists of a number of questions that the respondent
has to answer in a set format. A distinction is made between open-
ended and closed-ended questions. An open-ended question asks the
respondent to formulate his or her own answer, whereas a closed-
ended question has the respondent pick an answer from a given
number of options. The response options for a closed-ended question
should be exhaustive and mutually exclusive. Four types of response
scales for closed-ended questions are distinguished:
Dichotomous, where the respondent has two options
Nominal-polytomous, where the respondent has more than two
unordered options
Ordinal-polytomous, where the respondent has more than two
ordered options
(Bounded) continuous, where the respondent is presented with
a continuous scale
A respondent's answer to an open-ended question can be coded into a response scale afterwards,[3] or analysed using more qualitative methods.
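The four closed-ended response formats, and the numeric coding of an ordinal scale for analysis, can be sketched as data. The question options below are hypothetical examples, not from the text.

```python
# The four closed-ended response formats as data (options are hypothetical):
dichotomous = ["yes", "no"]                                           # two options
nominal_polytomous = ["car", "bicycle", "public transport", "other"]  # >2, unordered
ordinal_polytomous = ["never", "sometimes", "often", "always"]        # >2, ordered
continuous_bounds = (0.0, 10.0)                                       # bounded continuous scale

def code_ordinal(answer, scale):
    """Map an ordered option to its rank (1..k) so it can be analyzed numerically."""
    return scale.index(answer) + 1

responses = ["often", "never", "always", "sometimes"]
codes = [code_ordinal(r, ordinal_polytomous) for r in responses]
# codes == [3, 1, 4, 2]
```

Note that ranks from an ordinal scale support order-based statistics (medians, rank correlations) but not arithmetic that assumes equal spacing between options.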
Nonresponse reduction
The following ways have been recommended for reducing nonresponse[5] in telephone and face-to-face surveys:[6]
Advance letter. A short letter is sent in advance to inform the sampled respondents about the upcoming survey. The style of the letter should be personalized but not overdone. First, it announces that a phone call will be made, or that an interviewer wants to make an appointment to do the survey face-to-face. Second, it describes the research topic. Last, it allows both an expression of the surveyor's appreciation of cooperation and an opening to ask questions about the survey.
Training. The interviewers are thoroughly trained in how to ask respondents questions, how to work with computers, and how to schedule callbacks to respondents who were not reached.
Short introduction. The interviewer should always start with a short introduction of him- or herself: name, the institute he or she works for, the length of the interview, and the goal of the interview. It can also be useful to make clear that the interviewer is not selling anything: this has been shown to lead to a slightly higher response rate.[7]
Respondent-friendly survey questionnaire. The questions asked
must be clear, non-offensive and easy to respond to for the
subjects under study.
Brevity is also often cited as increasing response rate. A 1996 literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important.[8] A 2010 study by SurveyMonkey, looking at 100,000 of the online surveys they host, found that response rate dropped by about 3% at 10 questions and about 6% at 20 questions, with the dropoff slowing thereafter (for example, only a 10% reduction at 40 questions).[9] Other studies showed that the quality of responses degraded toward the end of long surveys.[10]

Interviewer effects
Survey methodologists have devoted much effort to determining the extent to which interviewee responses are affected by physical characteristics of the interviewer. The main interviewer traits that have been demonstrated to influence survey responses are race,[11] gender[12] and relative body weight (BMI).[13] These interviewer effects are particularly operant when questions are related to the interviewer trait. Hence, race of interviewer has been shown to affect responses to measures regarding racial attitudes,[14] interviewer sex responses to questions involving gender issues,[15] and interviewer BMI answers to eating- and dieting-related questions.[16] While interviewer effects have been investigated mainly for face-to-face surveys, they have also been shown to exist for interview modes with no visual contact, such as telephone surveys and video-enhanced web surveys. The explanation typically provided for interviewer effects is social desirability: survey participants may attempt to project a positive self-image in an effort to conform to the norms they attribute to the interviewer asking the questions.
Exploratory research
As the term suggests, exploratory research is often conducted because a problem has not yet been clearly defined, or its real scope is still unclear. It allows the researcher to become familiar with the problem or concept to be studied, and perhaps to generate hypotheses to be tested. It is the initial research, before more conclusive research is undertaken. Exploratory research helps determine the best research design, data collection method and selection of subjects, and sometimes it even concludes that the problem does not exist.
Another common reason for conducting exploratory research is to test
concepts before they are put in the marketplace, always a very costly
endeavour. In concept testing, consumers are provided either with a
written concept or a prototype for a new, revised or repositioned
product, service or strategy.
Exploratory research can be quite informal, relying on secondary research such as reviewing available literature and/or data, or qualitative approaches such as informal discussions with consumers, employees, management or competitors, and more formal approaches through in-depth interviews, focus groups, projective methods, case studies or pilot studies.
The results of exploratory research are not usually useful for decision-
making by themselves, but they can provide significant insight into a
given situation. Although the results of qualitative research can give
some indication as to the "why", "how" and "when" something occurs,
it cannot tell us "how often" or "how many". In other words, the results cannot be generalized; they are not representative of the whole population being studied.
Exploratory research is research conducted for a problem that has not been clearly defined. It often occurs before we know enough to make conceptual distinctions or posit an explanatory relationship.[1] Exploratory research helps determine the best research design, data collection method and selection of subjects.
It should draw definitive conclusions only with extreme caution. Given
its fundamental nature, exploratory research often concludes that a
perceived problem does not actually exist.
The Internet allows for research methods that are more
interactive in nature. For example, RSS feeds efficiently supply
researchers with up-to-date information; major search engine search
results may be sent by email to researchers by services such as Google
Alerts; comprehensive search results are tracked over lengthy periods
of time by services such as Google Trends; and websites may be
created to attract worldwide feedback on any subject.
When the purpose of research is to gain familiarity with a phenomenon or acquire new insight into it in order to formulate a more precise problem or develop a hypothesis, exploratory studies (also known as formulative research) come in handy. If the theory happens to be too general or too specific, a hypothesis cannot be formulated. Therefore, exploratory research is needed to gain experience that will be helpful in formulating relevant hypotheses for more definite investigation.[2]

Exploratory research is not typically generalizable to the population at
large.
Social exploratory research "seeks to find out how people get along in
the setting under question, what meanings they give to their actions,
and what issues concern them. The goal is to learn 'what is going on
here?' and to investigate social phenomena without explicit
expectations." (Russell K. Schutt, "Investigating the Social World,"
5th ed.). This methodology is also at times referred to as a grounded
theory approach to qualitative research or interpretive research, and is
an attempt to unearth a theory from the data itself rather than from a
predisposed hypothesis.
Earl Babbie identifies three purposes of social science research: exploratory, descriptive and explanatory. Exploratory research is used when problems are in a preliminary stage.[3] It is used when the topic or issue is new and when data is difficult to collect. Exploratory research is flexible and can address research questions of all types (what, why, how), and it is often used to generate formal hypotheses. Shields and Tajalli link exploratory research with the conceptual framework working hypothesis.[4]

Skeptics, however, have questioned the usefulness and necessity of
exploratory research in situations where prior analysis could be
conducted instead.
Research Diagnostic Criteria
The Research Diagnostic Criteria (RDC) are a collection of influential psychiatric diagnostic criteria published in the late 1970s.[1] As psychiatric diagnoses varied widely, especially between the USA and Europe, the purpose of the criteria was to allow diagnoses to be consistent in psychiatric research.
Some of the criteria were based on the earlier Feighner Criteria,
although many new disorders were included; "The historical record
shows that the small group of individuals who created the Feighner
criteria instigated a paradigm shift that has had profound effects on the
course of American and, ultimately, world psychiatry."[2]
The RDC is important in the history of psychiatric diagnostic criteria
as the DSM-III was based on many of the RDC descriptions.
Action research
Action research is research initiated to solve an immediate problem,
or a reflective process of progressive problem solving led by
individuals working with others in teams or as part of a "community of
practice" to improve the way they address issues and solve problems.
There are two types of action research: participatory action research,
and practical action research. This is supported by Denscombe (2010,
p.6) who states that an action research strategy's purpose is to solve a
particular problem and to produce guidelines for best practice.
Action research involves the process of actively participating in an
organization change situation whilst conducting research. Action
research can also be undertaken by larger organizations or institutions,
assisted or guided by professional researchers, with the aim of
improving their strategies, practices and knowledge of the
environments within which they practice. As designers and
stakeholders, researchers work with others to propose a new course of
action to help their community improve its work practices.
Kurt Lewin, then a professor at MIT, first coined the term action
research in 1944. In his 1946 paper Action Research and Minority
Problems, he described action research as comparative research on
the conditions and effects of various forms of social action and
research leading to social action that uses a spiral of steps, each of
which is composed of a circle of planning, action and fact-finding
about the result of the action.