
32-QUESTION JOURNAL ARTICLE EVALUATION

1. Is the journal considered reputable? Is the journal appropriate to find an article relating to this particular subject?
(A journal is considered reputable if it is peer reviewed.)
2. Do the researchers appear to have the appropriate qualifications for undertaking the study? Was the research performed in an appropriate medical facility?
(Appropriate qualifications often include some sort of research background or an author on the team with a statistics background. Do the researchers have expertise in the area of study?)
3. What was the source of financial support for the study?
(Often at the end of the article there is information on funding. NIH funding and the like is often considered unbiased. Questions arise if the company marketing the product sponsors the study. This does not necessarily mean the study is bad, just that bias is a concern.)
4. Do the authors give sufficient background information for the study? Did they demonstrate that the study was important and ethical?
(Sufficient background information would include a good, timely review of the drug, disease state, or research topic. Was the background concise but comprehensive? Did the authors indicate why they thought this was important or why they needed to know?)
5. Are the purpose and the objectives clearly stated and free from bias?
(The purpose is the reason for doing a study; the objectives are how the authors are going to accomplish the purpose. Very few studies that mention the purpose actually outline the objectives. Some journals now require the authors to state the objective in the abstract.)
6. Was the study approved by an investigational review board?
(An investigational review board (IRB) is also known as an institutional review board, human subjects committee, etc. The authors should indicate this approval over and beyond discussing informed consent.)
7. Does the investigator state the null hypothesis? Is the alternative hypothesis stated?
(The null hypothesis should be clearly stated as the hypothesis of no difference, with the alternative being the hypothesis of difference. Many times the research question is stated but not in the form of a hypothesis. Ask yourself what the null hypothesis would or should be based on the information given in the article. This formulation of the null hypothesis by the reader will be helpful later for establishing type I and type II error, as well as for obtaining information related to external validity.)
8. Is the sample size large enough? Is the sample representative of the population?
This question is directed at knowing whether the sample is large enough to statistically prove differences between groups or statistically identify trends in the data. It is also directed at knowing whether the sample is large enough to truly represent the overall population being studied. Good research will identify how the authors arrived at their sample size. This usually involves a calculation that takes into consideration factors such as type I and type II error (often you will see power used here instead), standard deviation, and the clinical difference to be detected.
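The sample-size calculation described above can be sketched in code. This is a minimal illustration of the standard normal-approximation formula for comparing two means, not a method taken from any particular article; the function name and the defaults of α = 0.05 and 80% power are assumptions for the example.

```python
import math
from statistics import NormalDist

def sample_size_per_group(sd, delta, alpha=0.05, power=0.80):
    """Approximate subjects needed per group to detect a difference
    `delta` between two means, given standard deviation `sd`,
    two-sided type I error `alpha`, and power (1 - type II error)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    n = 2 * ((z_alpha + z_beta) ** 2) * sd ** 2 / delta ** 2
    return math.ceil(n)  # round up to a whole subject

# e.g., SD of 10 units, clinically meaningful difference of 5 units
print(sample_size_per_group(sd=10, delta=5))  # -> 63 per group
```

Note how shrinking the clinical difference to be detected, or raising the desired power, drives the required sample size up, which is exactly why these values must be chosen before the study begins.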
9. Are the inclusion and exclusion criteria clearly stated, and are they appropriate?
Inclusion criteria define who is included in the study, and exclusion criteria define who is eliminated from it. Exclusion criteria need to make sense and not be so restrictive that they exclude important or good data. Inclusion criteria need to be specific enough that all of the researchers understand who really belongs; definitions of the inclusion criteria are often helpful. For example: the patients must have a fasting blood glucose < 120 mg/dL.
10. Was the study randomized correctly? Even if the study is adequately randomized, are the groups (treatment and control) equivalent?
(Did they randomize the study? How did they do it? Random number tables or names pulled from a hat are legitimate ways to do this. Did they provide a table or chart comparing the demographic information between groups? Does it look as though the groups are relatively equal, or characteristically (demographically) similar? There are other ways to randomize besides simple random samples, and these can be legitimate ways to allocate subjects; research design textbooks elaborate on these other methods.)
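Simple 1:1 randomization, the most basic of the allocation methods mentioned above, can be sketched as follows (an illustration only; the function name and seed are arbitrary choices for the example):

```python
import random

def simple_randomize(subject_ids, seed=None):
    """Shuffle the subject list and split it into two equal arms."""
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)  # reproducible when a seed is given
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

groups = simple_randomize(range(1, 21), seed=1)
print(len(groups["treatment"]), len(groups["control"]))  # -> 10 10
```

Even with a correct procedure like this, chance can still produce demographically unbalanced arms, which is why the baseline comparison table remains a separate question to ask of any trial.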
11. What is the study design? Is it appropriate?
Common study designs include:
the clinical trial (experimental design comparing therapies between groups),
cohort studies (long-term studies observing disease patterns related to risk factor exposures),
case-control studies (comparison of cases who have a condition with controls without the condition to determine whether a risk factor could have caused the differences),
intention-to-treat (a type of clinical trial that often controls for subjects dropping out of studies prematurely),
meta-analysis (statistical combination of previous studies' data to determine whether the conclusions would be different).
Does the type of design the authors chose make sense? Would a different study design have been better to answer the proposed hypothesis?
12. Was the study adequately controlled? Were the controls adequate
and appropriate?
13. Was the study adequately blinded?
14. Were appropriate doses and regimens used for the disease state
under study?
15. Was the length of the study adequate to observe outcomes?
16. If the study is a crossover study, was the washout period adequate?
17. Were operational definitions given?
18. Were appropriate statistical tests chosen to assess the data? Were the levels of α and β error chosen before the data were gathered? Were multiple statistical tests applied until a significant result was achieved?
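The multiple-testing concern in question 18 can be illustrated with a Bonferroni correction, the simplest adjustment for running many tests (a hypothetical example with made-up p-values, not drawn from the text):

```python
def bonferroni_significant(p_values, alpha=0.05):
    """A p-value counts as significant only if it beats alpha divided
    by the number of tests performed."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Three tests: only the first survives the corrected threshold of ~0.0167
print(bonferroni_significant([0.001, 0.02, 0.04]))  # -> [True, False, False]
```

A p-value of 0.02 or 0.04 would look "significant" in isolation at α = 0.05, which is exactly how repeated testing until something crosses the line inflates the type I error rate.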
19. Was patient compliance monitored?
20. If multiple observers were collecting data, did the authors describe how variations in measurements were avoided?
21. Did the authors justify the instrumentation used in the study?
22. Were measurements or assessments of effects made at the
appropriate times and frequency?
23. Are the data presented in an appropriate, understandable format?
24. Are standard deviations or confidence intervals shown along with
mean values?
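A normal-approximation confidence interval around a mean, of the kind question 24 asks for, can be computed as sketched below (an illustration with made-up data; for small samples an interval based on the t-distribution would be more appropriate):

```python
from statistics import mean, stdev, NormalDist

def mean_ci(data, confidence=0.95):
    """Confidence interval for the sample mean, normal approximation."""
    m = mean(data)
    se = stdev(data) / len(data) ** 0.5           # standard error of the mean
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return m - z * se, m + z * se

low, high = mean_ci([10, 12, 11, 13, 9, 11, 12, 10])
```

Reporting the interval, rather than the mean alone, tells the reader how precisely the study pinned the value down.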
25. Are there any problems with type I (α) or type II (β) errors?
26. Are there any potential problems with internal validity or external validity? Internal validity types include history, maturation, instrumentation, selection, morbidity, and mortality.
27. Are adverse reactions reported in sufficient detail?
28. Are the conclusions supported by the data? Is some factor other than the study treatment responsible for the outcomes?
29. Are the results both statistically and clinically significant?
30. Do the authors discuss study limitations in their conclusions?
31. Were appropriate references used? Are the references timely and reputable? Have any of the studies been disproven or updated? Do the references cited represent a complete background?
32. Would this article change clinical practice or a recommendation that you would give to a patient or health-care professional?
Pharmacists and pharmacy students are often required to do professional writing. This may come in different formats, including drug information responses, case presentations, meeting abstracts, research papers, drug monographs, journal clubs, and newsletters.

PROFESSIONAL WRITING
Establish an outline for the paper that is appropriate to the format required for the exercise. Do all research before establishing the outline. Check to make sure that you have primary literature to support the document when appropriate.
The skills of drug information, drug literature evaluation, and professional communication are essential components of professional pharmacy practice. Information is always a guiding principle in sustaining the professional's knowledge while opening the door for a better-educated patient. One cannot be expected to have all of the answers stored away in one's brain, but one should be able to use one's skills to find the answer.

SUMMARY
