To cite this article: Keithia L. Wilson, Alf Lizzio & Paul Ramsden (1997) The development,
validation and application of the Course Experience Questionnaire, Studies in Higher Education,
22:1, 33-53, DOI: 10.1080/03075079712331381121
PAUL RAMSDEN
Griffith Institute for Higher Education, Griffith University, Australia
Introduction
The widespread demands from both governments and consumers for greater accountability
in higher education have been widely documented. These, in conjunction with the quality
improvement movement, have resulted in a need for valid, reliable and comparable perform-
ance data on teaching quality. The Course Experience Questionnaire (CEQ) (Ramsden,
1991) was designed as a performance indicator (PI) of teaching effectiveness, at the level of
whole course or degree, in higher education institutions and is a development of work
originally carried out at Lancaster University in the 1980s. The CEQ is based on a theory of
university teaching and learning in which students' perceptions of curriculum, instruction
and assessment are regarded as key determinants of their approaches to learning and the
quality of their learning outcomes (Marton & Säljö, 1976; Entwistle & Ramsden, 1983;
Ramsden, 1992). The instrument was designed to measure differences in the quality of
teaching between comparable academic organisational units in those important aspects of
teaching about which students have direct experience and are therefore validly able to
comment (viz. quality of teaching, clear goals and standards, workload, assessment, emphasis
on independence). Student evaluations have been established as valid, reliable and useful
indicators of teaching quality (see Marsh [1987] for a review) and have the added value and
appealing benefit of being a direct measure of consumer satisfaction with higher education
(Ramsden, 1991).
results of studies with different sample characteristics, and in drawing conclusions regarding
the structure of the CEQ. One aim of the present study was to overcome this sampling
problem by also applying a series of analyses to each individual sample. Thus, a convergent
methodology was employed with both exploratory and confirmatory factor analytic proce-
dures applied to analysing the structure of each sample.
The specific aims of the present study were:
1. to provide further data on the construct validity of both the long (CEQ36 =
CEQ30 + Generic Skills scale) and short (CEQ23) forms of the CEQ, using exploratory
factor analysis with a number of large cross-disciplinary samples of students and graduates;
2. to provide, using confirmatory factor analysis, a more stringent assessment of the validity
of both the long and short forms of the CEQ;
3. to establish the reliability and validity of the new Generic Skills scale;
4. to provide further data on the criterion validity of the CEQ as a measure of teaching
quality by establishing the relationship between scale scores and levels of approaches to
learning, overall course satisfaction and academic achievement; and
5. to provide discriminant validity data on the capacity of the CEQ to discriminate between
pedagogically distinct programmes in a field of study (e.g. to discriminate between
problem-based and traditionally taught courses).
Method
Materials
Questionnaires included two versions of the CEQ (Ramsden, 1991). The full form (36 items;
CEQ36) containing all six scales was administered to the 1992 graduate and 1994 student
samples. The 23-item shorter form (CEQ23) containing the full version of the Generic Skills
scale, shortened versions of the Good Teaching, Appropriate Workload, Appropriate Assess-
ment and Clear Goals and Standards scales, and excluding the Emphasis on Independence
scale was administered to the 1993 student sample (see appendix for the items relevant to all
forms of the CEQ).
In order to establish the relationship between students' evaluations of teaching as
measured by the CEQ and approaches to learning, two short scales (6 items each) were
constructed, using items from the Approaches to Studying Inventory (Entwistle et al., 1979)
to represent deep and surface approaches. Both the reliability and validity of these short
scales were established using data from the 1992 sample. Both scales evidenced moderate
TABLE I. Coefficient alpha values from Ramsden's (1991), Richardson's (1994) and the present study
levels of internal consistency (Cronbach's alpha coefficients of 0.67 for deep approach and
0.69 for surface approach), which are in excess of those typically reported for these subscales
(Entwistle et al., 1979; Entwistle & Kozeki, 1985), and close to the limit of 0.71 suggested
by Comrey (1973) as indicating acceptable internal consistency. Construct validity was
confirmed by means of a principal components factor analysis with a varimax rotation on the
12-item set. Two distinct factors accounting for 40% of the variance (factor 1 = 24%; factor
2 = 16%) were clearly identifiable, with all six deep items and all six surface items loading in
excess of 0.30 on the first and second factors respectively. A small but significant negative
correlation (-0.21, p < 0.001) between the two scales further supports their construct
validity as measures of deep and surface approaches to learning.
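Coefficient alpha, reported above for the deep and surface subscales, can be computed directly from an item-by-respondent score matrix. The sketch below is illustrative only: the three-item, five-respondent Likert data are invented, not taken from the study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of per-item score lists,
    one list per item, all covering the same respondents."""
    k = len(items)                        # number of items
    n = len(items[0])                     # number of respondents

    def var(xs):                          # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(var(item) for item in items)          # sum of item variances
    totals = [sum(item[i] for item in items) for i in range(n)]  # scale totals
    return (k / (k - 1)) * (1 - item_var / var(totals))

# Invented 1-5 Likert responses: three items, five respondents.
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(scores), 2))  # 0.86 for these invented data
```

Values near the 0.71 threshold cited from Comrey (1973) are conventionally read as acceptable internal consistency for short scales.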
Academic achievement was measured by averaging each student's grade point average
(GPA), measured on a scale from 1 to 7, from the commencement of their degree to the point
at which the survey was conducted. These data (available only for the 1992 sample) were
obtained from official university records.
Results and Discussion
ment. Similarly, the single item from the Independence scale which focused on student-
teacher interaction (item 30: 'We discuss with staff how we are going to learn') loaded more
strongly on the Good Teaching (0.44, 0.51) than its nominated Emphasis on Independence
factor (0.30). It should be noted that these cross-loading items are those eliminated from the
original version to create the 23-item short form of the CEQ.
Short form of the questionnaire--CEQ23. The results for the short form were very similar to
those for the long form of the CEQ. A five factor solution accounted for 57% of the variance:
factor 1, Good Teaching (27%); factor 2, Appropriate Workload (11%); factor 3, Generic
Skills (7%); factor 4, Appropriate Assessment (7%); and factor 5, Clear Goals and
Standards (5%) (see Table III). The pattern of factor loadings provided clear identification of all
five CEQ scales. All 23 of the items loaded on their nominated scales. Two items evidenced
cross-loading on to a second scale. While item 35 (clear expectations) loaded most strongly
on its designated scale of Clear Goals and Standards, it also loaded moderately on Good
Teaching. Similarly, while item 11 (development of skills as a team member) loaded most
strongly on to Generic Skills, it also loaded moderately on to Appropriate Assessment. This
may reflect the increasing trend towards team- or group-based assessment practices. This
23-item shorter form of the CEQ offers a stable factor structure equal to that of the 36-item
full form, with the advantage of cleaner relationships between items and scales.
Item factor analysis. The results of the item confirmatory analyses revealed a moderate
overall fit of the data to the model for the long form of the CEQ (chi-square = 3865.72,
p < 0.001; CFI = 0.85; normed fit index = 0.84; non-normed fit index = 0.83; based on 579
degrees of freedom; RMS = 0.04) and a good fit for the short form (chi-square = 1221.16,
p < 0.001; CFI = 0.90; normed fit index = 0.89; non-normed fit index = 0.89; based on 220
degrees of freedom; RMS = 0.04). The inflated chi-square values are clearly attributable to
the large sample sizes. Not surprisingly, the short form model (CEQ23) offers a better fit than
the long form model (CEQ36). The difference in fit between the two forms can be accounted
for largely by the Emphasis on Independence factor on the long form where a number of
items evidencing low factor loadings (range of 0.25 to 0.60) and high structural coefficients
for the error term (range of 0.80 to 0.97) detract from the overall goodness of fit of the
model. This demonstrated improvement in fit is consistent with Bentler & Bonett's (1980)
proposition that models with overall fit indices of less than 0.9 can usually be improved
substantially.
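The normed, non-normed and comparative fit indices quoted here are all simple functions of the model chi-square and a baseline (independence-model) chi-square. The sketch below uses the short form's reported model values but an invented baseline, since the paper does not report one; it is meant only to show how the indices relate, following the standard definitions (NFI from Bentler & Bonett, 1980).

```python
def fit_indices(chi_m, df_m, chi_b, df_b):
    """Standard incremental fit indices from model (m) and
    baseline/independence (b) chi-square values."""
    nfi = (chi_b - chi_m) / chi_b                     # normed fit index
    d_m = max(chi_m - df_m, 0.0)                      # model noncentrality
    d_b = max(chi_b - df_b, d_m)                      # baseline noncentrality
    cfi = 1.0 - d_m / d_b                             # comparative fit index
    nnfi = ((chi_b / df_b) - (chi_m / df_m)) / ((chi_b / df_b) - 1.0)
    return nfi, cfi, nnfi

# Short-form model values from the text; baseline is hypothetical.
nfi, cfi, nnfi = fit_indices(1221.16, 220, 10000.0, 253)
print(f"NFI={nfi:.2f}  CFI={cfi:.2f}  NNFI={nnfi:.2f}")
# -> NFI=0.88  CFI=0.90  NNFI=0.88 (with this invented baseline)
```

With a suitably chosen baseline the reported CFI of 0.90 falls out directly; the point is that all three indices improve together as the model chi-square shrinks relative to the baseline.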
All factors or scales on both the long and short forms were confirmed. For the short
form, the factor loadings for each item ranged from 0.70 to 0.79 for the Good Teaching scale,
0.61 to 0.70 for the Clear Goals and Standards scale, 0.56 to 0.85 for the Appropriate
Assessment scale, 0.48 to 0.71 for the Appropriate Workload scale and 0.35 to 0.71 for the
Generic Skills scale. The structural coefficients for the error terms were generally modest.
The only exceptions were one item each from the Generic Skills (item 11), Appropriate
Workload (item 19) and Appropriate Assessment (item 10) scales, which combined lower
factor loadings and higher structural coefficients, indicating the potential for further improve-
ment of these scales.
Given the widespread use of the CEQ23 (i.e. the short form), it was considered
important to confirm further its structure using samples of graduates from other universities.
Confirmatory factor analyses of two large multi-disciplinary samples, one from a traditional
university (n = 3561), and one from a newer university with a more applied focus (n = 3080),
confirmed the item to scale CEQ23 model, viz. traditional university--chi-square = 4175.79,
p < 0.001; CFI = 0.87 (normed fit index = 0.87; non-normed fit index = 0.85) based on 220
degrees of freedom; RMS = 0.05; and applied university--chi-square = 2576.99, p < 0.001;
CFI = 0.89 (normed fit index = 0.89; non-normed fit index = 0.88) based on 220 degrees of
freedom; RMS = 0.04.
Scale factor analysis. Can the item scores on the CEQ be aggregated to yield a single global
score of teaching quality? Trigwell & Prosser's (1991) factor analysis of scale scores produced
a single higher order factor. Richardson's (1994) analysis of the CEQ30 structure produced
a solution which he describes as 'a monarchical hierarchy' (Cattell, 1965): correlated but
non-overlapping first order factors dominated by a single second order factor. Richardson
(1994) concludes that a single global measure of teaching quality can be derived from the
CEQ. However, Ramsden (cited in Richardson, 1994), suggests that the CEQ may comprise
two higher order factors, one reflecting good teaching and assessment, and the other
reflecting an appropriate workload. This proposition is consistent with the Appropriate
Workload scale consistently demonstrating lower factor loadings on a single higher order
factor relative to the other scales (Trigwell & Prosser, 1991; Richardson, 1994). Three higher
order models were tested by means of higher order path analysis--a one-factor (all scales), a
two-factor (Appropriate Workload and all other scales), and a three-factor (Appropriate
Workload, Generic Skills and all other scales) higher order scale structure.
Results indicated a moderate fit for the one-factor, and a good fit for the two- and
three-factor solutions on both the long (CEQ36) and short (CEQ23) forms (see Table III).
Whereas the separation of Appropriate Workload and the other scales (in the two factor
model) offers an improvement in fit over a one-factor solution, the separation of Generic
Skills (in the three-factor model) does not offer an improvement of the two-factor model.
Thus, it is suggested that the higher order structure of the CEQ can be most usefully
TABLE III. Goodness-of-fit indicators for higher order path analysis on two samples
Presage to process criteria--relationship between CEQ ratings and deep and surface approaches to
learning. Correlational analyses, using data from the 1992 sample, were conducted between
students' perceptions of the learning environment (measured by the scales of the CEQ) and
reported approaches to learning (measured by deep and surface subscales of the Approaches
to Studying Inventory). All the CEQ scales evidenced significant positive correlations with a
deep approach, and significant negative correlations with a surface approach to learning (see
Table IV). A deep approach to student learning (emphasis on understanding and deriving
meaning) was related most strongly to good teaching, appropriate assessment and indepen-
dence in learning. In contrast, a surface approach to student learning (emphasis on reproduc-
ing facts) was most closely related to heavy workload and inappropriate assessment. While a
number of the correlations were small, but statistically significant owing to the large sample
sizes, these results are nevertheless consistent with a pattern of previous findings at both an
individual and course level in secondary and tertiary contexts using the CEQ, its forerunner
the Course Perceptions Questionnaire, and the School Experiences Questionnaire (Entwistle
& Ramsden, 1983; Kember & Gow, 1989; Meyer & Parsons, 1989; Ramsden et al., 1989;
Entwistle & Tait, 1990; Ramsden, 1991; Trigwell & Prosser, 1991; Eley, 1992; Gibbs &
Lucas, 1995). Clearly, then, the CEQ is measuring aspects of the teaching environment
which are systematically associated with students' reported learning processes.
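The correlational analyses behind Table IV reduce to Pearson product-moment coefficients between scale scores. A self-contained sketch with invented scores (the variable names and data are illustrative, not the study's):

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-student scale means: Good Teaching vs deep approach.
good_teaching = [3.2, 4.1, 2.5, 3.8, 4.5, 2.9]
deep_approach = [3.0, 3.9, 2.8, 3.5, 4.2, 2.6]
print(round(pearson_r(good_teaching, deep_approach), 2))  # 0.96 here
```

In the study the coefficients are far smaller (many under 0.3) yet still significant, because significance scales with sample size; the sketch simply shows the statistic being computed.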
Presage to product criteria--relationship between CEQ ratings and course outcomes. Based on the
proposition that more effective courses will produce greater student satisfaction, higher
academic achievement and higher generic skills development, correlational analyses were
conducted to investigate the relationship between the CEQ scale scores and the external
TABLE IV. Correlations between deep and surface approaches to studying and CEQ scales for Trigwell & Prosser (cited in Ramsden, 1991) and the 1993 sample in the present study
criteria of (a) students' overall satisfaction with their courses for all three samples; (b) generic
skills development for all three samples; and (c) academic achievement for the 1993 student
sample. Significant positive correlations were found on all samples between all scales of the
CEQ and overall course satisfaction, academic achievement, and generic skills (see Table V).
The correlations with course satisfaction were consistent with, and stronger than, those in
Ramsden's (1991) original study. The Good Teaching and Clear Goals and Standards scales
correlated most strongly with satisfaction and academic achievement, and the Good Teach-
ing and Emphasis on Independence in Learning scales with Generic Skills. The Appropriate
Workload scale demonstrated the lowest correlations with satisfaction, academic achievement
and generic skills. The positive association between scores of the CEQ and the measures of
learning outcome--satisfaction, academic achievement and generic skills--further strength-
ens the instrument's validity as a measure of teaching quality.
Discriminant Validity of the CEQ--can the CEQ discriminate between courses with different
teaching philosophies and methods?
Richardson (1994) proposed that a further test of the CEQ would be an assessment of its
capacity to discriminate between courses in terms of their explicit objectives. In order to test
its discriminant validity, CEQ profiles of universities participating in the 1993 and 1994
national surveys of graduates were compared in two fields of study (medicine and psy-
chology) where there were clear examples of programmes with distinct course objectives and
teaching philosophies. One of the 10 medical programmes in the 1993 and 1994 surveys was
known to be conducted along problem-based lines, and one of the 34 psychology programmes
in the 1993 survey, and 31 psychology programmes in the 1994 survey, was known
to be conducted along experiential and action learning lines. Only those departments with a
sample size in excess of 15 (taken as sufficient to justify the assumption of normality) were included for analysis.
Analyses of variance were conducted on CEQ scale scores for each of the 10 medical
programmes for 1993 and 1994. While there was no significant difference between the
programmes on the Good Teaching scale, the pattern of differences on other scales was
consistent with those expected when comparing problem-based and traditionally taught
courses. CEQ scores on the Clear Goals and Standards scale were significantly lower for the
problem-based courses than all other programmes in both the 1993 and 1994 surveys.
Generic Skills scale scores for the problem-based programme were the highest in 1993 (and
significantly higher than eight other programmes) and significantly higher than all other
programmes in 1994. The problem-based programme achieved second highest and highest
scores on the Appropriate Assessment scale in the 1993 and 1994 surveys respectively, which
were significantly higher than four other programmes in 1993 and eight in 1994.
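The programme comparisons above rest on one-way analyses of variance over CEQ scale scores. A minimal sketch of the F statistic, with invented groups standing in for programmes (the real analyses compared ten medical programmes and used post hoc contrasts not shown here):

```python
def one_way_f(groups):
    """One-way ANOVA F statistic over a list of score groups."""
    all_scores = [x for g in groups for x in g]
    n, k = len(all_scores), len(groups)
    grand = sum(all_scores) / n
    # between-groups sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-groups sum of squares
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented Generic Skills scale scores for three hypothetical programmes.
groups = [
    [3.1, 3.4, 2.9, 3.2],   # a traditionally taught programme
    [3.0, 3.3, 3.1, 2.8],   # another traditional programme
    [4.2, 4.5, 4.0, 4.3],   # a problem-based programme
]
print(round(one_way_f(groups), 1))  # ~40.9: large F, clear group separation
```

A large F here plays the same role as in the text: it flags that at least one programme's mean differs reliably from the others, after which pairwise contrasts identify which.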
Comparisons of the CEQ scale scores across 34 psychology programmes in 1993 and 31
in 1994 again demonstrated the instrument's capacity to discriminate pedagogically distinct
courses. The experiential psychology programme achieved the highest scores on Good
Teaching in 1993 (significantly higher than the other programmes), and the second highest in
1994; the highest scores on Appropriate Assessment in 1993 (again significantly so) and the
third highest in 1994; and the highest scores on
Generic Skills in both 1993 and 1994.
TABLE V. Correlations between the CEQ scales and course satisfaction, generic skills and academic achievement (GPA) for the present study
Additionally, the CEQ would appear to measure constructs directly relevant to students'
reported approaches to, satisfaction with, and outcomes of, their learning in university
contexts. The CEQ's sensitivity to differences, along theoretically predictable lines, between
traditional and problem-based and experiential programmes would suggest its useful appli-
cation in research studies seeking to establish the comparative educational efficacy of learning
environments. The CEQ can thus be regarded as a valid, reliable and stable instrument.
Practical Applications
Use of the CEQ
Given its demonstrated validity as a performance indicator of perceived teaching quality,
how is the CEQ being used currently in the Australian higher education system? The CEQ
is widely used as part of a national strategy of providing universities with system-wide
information which they can use to make informed judgements about the quality of the
courses they are offering (Ainley & Long, 1994). Since 1992, it has been distributed to all
Australian university graduates within a few months of course completion, as part of the
GCCA Graduate Destination Survey. The use of the CEQ as a standard national instrument
to measure graduates' perceptions of teaching quality has significant advantages for address-
ing the challenge of improving teaching quality. A nationally administered CEQ provides
comparative data to enable factually-based dialogue between institutions regarding what
constitutes 'best practice' in teaching. Individual institutions are able to work with a
data-based estimate of their relative performance, in comparison to similar institutions, both
as a whole institution and within a field of study (e.g. law, medicine). Additionally, general
national trends within a field of study (e.g. graduates' perceptions of law education across all
Australian universities) can be identified for field of study investigations and reviews by professional
bodies and government committees. Perhaps most importantly, the use of the CEQ as a
standard national instrument over a number of years will allow the accumulation of time
series data and the monitoring of change over time at various levels of academic organis-
ation--individual degree programme, institution, field of study and, indeed, the whole
national system.
Given the increasingly competitive higher education market-place, an understandable
temptation exists for university administrators to engage in simplistic cross-institutional
rankings of relative merit ('our CEQ scores are better than theirs'). However, greater
potential for long-term improvement and system-wide learning would appear to lie in a
co-operative strategy of inter-institutional benchmarking for best practice in fields of study.
The Australian Vice Chancellors' Committee (AVCC) is currently inviting universities to
nominate courses which they consider to be exemplars of excellent teaching, as the basis of
a national symposium for the dissemination of best practice.
A standard system for assessing teaching effectiveness can not only aid managerial, but
also consumer judgements of quality. CEQ national survey data are now also available in a
form that can be readily used by the consumers of higher education (i.e. potential students)
in making choices about what and where to study. The Good Universities Guide to Australian
Universities (Ashenden & Milligan, 1995), an annual evaluative and independent guide to the
quality of Australian universities, their campuses and courses, now includes course ratings by
recent graduates. Fields of study are compared across some CEQ scales using a five star
system (the more stars the better) based on scale averages. Degree programmes at individual
institutions are described as 'better', 'average' or 'worse' on teaching quality, workload and
overall satisfaction depending on whether recent graduates' ratings of that course were
significantly more or less favourable than graduates who had completed similar courses at
other institutions. The Good Universities Guide is careful not to endorse the credulous use of
such ratings. After presenting the possible limitations of such global comparisons, it advises
potential students that: 'the information in these ratings is significant and valuable, but must
be treated as one, and only one, among many pieces of information about courses and
campuses, and certainly must not be regarded as precise or infallible' (p. 6).
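The five-star and 'better/average/worse' bandings described above amount to positioning a course's scale average within a national distribution. The sketch below is a hypothetical illustration of that idea only: the quantile banding, cut points and data are all invented, and the Guide's actual method additionally rests on tests of statistical significance.

```python
def stars(scale_mean, national_means):
    """Band a field-of-study scale mean into a 1-5 star rating by its
    position in the national distribution (invented banding rule)."""
    ranked = sorted(national_means)
    below = sum(1 for m in ranked if m < scale_mean)
    quantile = below / len(ranked)          # position in 0..1
    return 1 + min(4, int(quantile * 5))    # quintile -> 1-5 stars

# Invented national distribution of Good Teaching scale means.
national = [2.8, 3.0, 3.1, 3.2, 3.3, 3.4, 3.5, 3.7, 3.9, 4.1]
print(stars(3.8, national))  # a mean in the top quintile earns 5 stars
```

However the banding is defined, the Guide's own caveat applies: such a rating is one summary among many and should not be read as precise.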
Publishing independent course ratings, with institutional approval, in the higher edu-
cation market-place would seem to indicate an increased willingness for Australian universi-
ties to regard their students as clients and consumers, and to be responsive and accountable
to their perceptions of course quality. Such data will become increasingly important if the
'user pays' principle is used to fund the expansion of postgraduate education in Australia.
TABLE VI. Uses and misuses of the CEQ in the measurement of teaching quality
organisations. Part of the difficulty is that organisational reports often just 'present infor-
mation' rather than create a context which facilitates action.
Bate et al. (1995) have developed a promising reporting format for the dissemination of
CEQ survey data within an institution. Firstly, a working context is established for the user
in relation to the process of the CEQ survey--addressing questions regarding the validity of
the instrument, the purpose of the national strategy, guidelines for responsible interpretation,
and strategies for benchmarking best practice. Once the process of the survey has been
legitimated, the content of the CEQ results for a particular academic unit is presented in
context. Results are presented firstly for the institution as a whole, then profiles for individual
degree programmes, and finally comparative data are presented for degree programmes
offered by other institutions in the same field of study. Importantly, efforts are made to
differentiate which CEQ results may be national field of study effects (patterns common to
similar institutions in the same field of study), and which are specific institutional effects. The
CEQ data for an individual degree programme are then contextualised in relation to other
qualitative indicators (graduates' written comments on the programme) and quantitative
indicators (rates of completion, retention and progression, and employer satisfaction).
Finally, recommendations are made for further investigation and action.
Concluding Comments
This paper has sought to confirm further the validity and usefulness of the CEQ and to
outline conditions for its effective application as a performance indicator of university
teaching quality. In terms of its wider use, the CEQ's genesis in the UK higher education
system, its refined theoretical base, and its demonstrated measurement qualities indicate that
it could be generally adopted as a performance indicator for UK universities. Its capacity to
provide crucial information, at a remarkably low cost, about course quality for funding
agencies, universities, prospective students and employers of graduates is a particularly
attractive characteristic.
Acknowledgements
We would like gratefully to acknowledge the Graduate Careers Council of Australia for
providing some of the data used in the study, and Roland Simons for his patient and
persistent analysis.
Correspondence: Keithia Wilson, School of Applied Psychology, Faculty of Health and Be-
havioural Sciences, Griffith University, Nathan, Queensland 4111, Australia.
REFERENCES
AINLEY, J. & LONG, M. (1994) The Course Experience Survey: 1992 graduates (Canberra, Australian Government Publishing Service).
ASHENDEN, D. & MILLIGAN, S. (1995) Good Universities Guide 1996 to Australian Universities (Victoria, Reed Reference).
AUSTRALIAN VICE CHANCELLORS' COMMITTEE, STANDING COMMITTEE ON STATISTICS (1995) A review of the Graduate Careers Council of Australia (GCCA) Graduate Destination Survey and Course Experience Questionnaire (Canberra, AVCC).
BARRETT, P.T. & KLINE, P. (1981) The observation to variable ratio in factor analysis, Personality Study & Group Behaviour, 1, pp. 23-33.
BATE, F., BARRETT, J. & MOIR, F. (1995) Results of the Course Experience Questionnaire: 1993 graduates (Adelaide, Murdoch University, Academic Services Unit and the Planning Section).
BENTLER, P.M. (1989) EQS: structural equations program manual (Los Angeles, BMDP Statistical Software).
BENTLER, P.M. & BONETT, D.G. (1980) Significance tests and goodness of fit in the analysis of covariance structures, Psychological Bulletin, 88, pp. 588-606.
BERNSTEIN, I.H. & TENG, G. (1989) Factoring items and factoring scales are different: spurious evidence for multidimensionality due to item categorization, Psychological Bulletin, 105, pp. 467-477.
BIGGS, J.B. (1979) Individual differences in study process and the quality of learning outcomes, Higher Education, 8, pp. 381-394.
BIGGS, J.B. (1985) The role of metalearning in study processes, British Journal of Educational Psychology, 55, pp. 185-212.
CATTELL, R.B. (1965) Higher order factor structures and reticular-vs-hierarchical formulae for their interpretation, in: C. BANKS & P.L. BROADHURST (Eds) Studies in Psychology Presented to Cyril Burt, pp. 223-266 (London, University of London Press).
CATTELL, R.B. (1966) The scree test for the number of factors, Multivariate Behavioral Research, 1, pp. 245-276.
COLE, D.A. (1987) Utility of confirmatory factor analysis in test validation research, Journal of Consulting & Clinical Psychology, 55, pp. 584-594.
COMREY, A.L. (1973) A First Course in Factor Analysis (New York, Academic Press).
CRONBACH, L.J. (1951) Coefficient alpha and the internal structure of tests, Psychometrika, 16, pp. 297-334.
ELEY, M.G. (1992) Differential adoption of study approaches within individual students, Higher Education, 23, pp. 231-254.
ENTWISTLE, N.J. & KOZEKI, B. (1985) Relationships between school motivation, approaches for studying and attainment among British and Hungarian adolescents, British Journal of Educational Psychology, 55, pp. 124-137.
ENTWISTLE, N.J. & RAMSDEN, P. (1983) Understanding Student Learning (London, Croom Helm).
ENTWISTLE, N.J. & TAIT, H. (1990) Approaches to learning, evaluations of teaching and preferences for contrasting academic environments, Higher Education, 19, pp. 169-194.
ENTWISTLE, N.J., HANLEY, M. & HOUNSELL, D. (1979) Identifying distinctive approaches to studying, Higher Education, 8, pp. 365-380.
GIBBS, G. & LUCAS, L. (1995) Using research to improve student learning in large classes, paper presented at the 3rd International Improving Student Learning Symposium, Exeter (mimeo).
GUADAGNOLI, E. & VELICER, W.F. (1988) Relation of sample size to the stability of component patterns, Psychological Bulletin, 103, pp. 265-275.
HIGHER EDUCATION COUNCIL (1990) Higher Education: the challenges ahead (Canberra, Australian Government Publishing Service).
HIGHER EDUCATION COUNCIL/NBEET (1992) Higher Education: achieving quality (Canberra, Australian Government Publishing Service).
KAISER, H.F. (1974) An index of factorial simplicity, Psychometrika, 39, pp. 31-36.
KEMBER, D. & GOW, L. (1989) A model of student approaches to learning encompassing ways to influence and change approaches, Instructional Science, 18, pp. 263-288.
KERLINGER, F.N. & PEDHAZUR, E.J. (1973) Multiple Regression in Behavioral Research (New York, Holt, Rinehart, & Winston).
LINKE, R.D. (1991) Report of the Research Group on Performance Indicators in Higher Education (Canberra, Australian Government Publishing Service).
MCINNIS, C. (1995) Enhancing the first year experience, report prepared by the Centre for the Study of Higher Education, University of Melbourne (mimeo).
MACNAIR, G. (1990) The British Enterprise in Higher Education Initiative, Higher Education Management, 2, pp. 60-71.
MARSH, H.W. (1987) Students' evaluations of university teaching: research findings, methodological issues, and directions for future research, International Journal of Educational Research, 11, pp. 255-388.
MARTON, F. & SÄLJÖ, R. (1976) On qualitative differences in learning II--Outcome as a function of the learner's conception of the task, British Journal of Educational Psychology, 46, pp. 115-127.
MEYER, J.H.F. & PARSONS, P. (1989) Approaches to studying and course perceptions using the Lancaster Inventory, Studies in Higher Education, 14, pp. 137-155.
NORUSIS, M. (1993) SPSS for Windows: professional statistics release 6.0 (Chicago, IL, SPSS Inc.).
PEDHAZUR, E.J. & PEDHAZUR SCHMELKIN, L. (1991) Measurement, Design and Analysis: an integrated approach (London, Lawrence Erlbaum Associates).
RAMSDEN, P. (1991) A performance indicator of teaching quality in higher education: the course experience questionnaire, Studies in Higher Education, 16, pp. 129-150.
RAMSDEN, P. (1992) Learning to Teach in Higher Education (London, Routledge).
RAMSDEN, P. (1995) Analysis of the Student Opinion Survey for Griffith University, unpublished manuscript.
RAMSDEN, P. & ENTWISTLE, N.J. (1981) Effects of academic departments on students' approaches to studying, British Journal of Educational Psychology, 51, pp. 368-383.
RAMSDEN, P., MARTIN, E. & BOWDEN, J. (1989) School environment and sixth form pupils' approaches to learning, British Journal of Educational Psychology, 59, pp. 129-142.
RICHARDSON, J.T.E. (1994) A British evaluation of the course experience questionnaire, Studies in Higher Education, 19, pp. 59-68.
SARIS, W.E. & STRONKHORST, L.H. (1984) Causal Modelling in Non-experimental Research: an introduction to the LISREL approach (Amsterdam, The Netherlands Sociometric Research Foundation).
TABACHNICK, B.G. & FIDELL, L.S. (1989) Using Multivariate Statistics, 2nd edn (New York, Harper Collins).
TRIGWELL, K. & PROSSER, M. (1991) Improving the quality of student learning: the influence of learning context and student approaches to learning on learning outcomes, Higher Education, 22, pp. 251-266.
VELICER, W.F., PEACOCK, A.C. & JACKSON, D.N. (1982) A comparison of component and factor patterns: a Monte Carlo approach, Multivariate Behavioural Research, 17, pp. 293-304.
WEIL, S. (1992) Creating capability for change in higher education: the RSA initiative, in: R. BARNETT (Ed.) Learning to Effect, pp. 186-203 (Milton Keynes, Society for Research into Higher Education & Open University Press).
WHEATON, B. (1987) Assessment of fit in overidentified models with latent variables, Sociological Methods and Research, 16, pp. 118-154.
YORKE, M. (1995) Siamese twins? Performance indicators in the service of accountability and enhancement, Quality in Higher Education, 1, pp. 13-30.
Instructions
In answering this questionnaire, please think about the course as a whole rather than identifying individual subjects, topics or lecturers. The questions relate to general issues about your course, based on comments that students have often made about their experiences of university teaching and studying. Your responses are strictly confidential and will not be seen by teaching staff.
Scoring
Items are scored on a scale from 1 to 5, where 1 means 'definitely disagree' and 5 means 'definitely agree', save for those printed in italics, which are scored in the opposite direction.
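As an illustrative sketch only (not part of the original instrument documentation), the scoring rule above can be expressed in Python. The helper name `score_item` is a hypothetical choice; the reversal simply maps 1 to 5, 2 to 4, and so on.

```python
# Sketch of CEQ item scoring, assuming the 1-5 Likert response format
# described above. Negatively worded items (italicised in the original)
# are reverse-scored: 1 <-> 5, 2 <-> 4, 3 unchanged.

def score_item(response: int, reverse: bool = False) -> int:
    """Return the scored value for a single CEQ response (1-5)."""
    if not 1 <= response <= 5:
        raise ValueError("responses must lie between 1 and 5")
    return 6 - response if reverse else response
```

For example, a 'definitely agree' (5) on a reverse-scored item counts as 1: `score_item(5, reverse=True)` returns 1.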
Items
* 1 It's always easy here to know the standard of work expected CG
* 2 This course has helped me to develop my problem-solving skills GS
# 3 There are few opportunities to choose the particular areas you want to study IN
#* 4 The teaching staff of this course motivate students to do their best work GT
#* 5 The workload is too heavy AW
* 6 This course has sharpened my analytic skills GS
# 7 Lecturers here frequently give the impression they have nothing to learn from students AA
#* 8 You usually have a clear idea of where you're going and what's expected of you CG
#* 9 Staff here put a lot of time into commenting on students' work GT
#*10 To do well on this course all you really need is a good memory AA
*11 This course has helped develop my ability to work as a team member GS
*12 As a result of doing this course, I feel more confident about tackling unfamiliar problems GS
*13 This course has improved my written communication skills GS
# 14 It seems to me that the syllabus tries to cover too many topics AW
# 15 The course has encouraged me to develop my own academic interests as far as possible IN
# 16 Students have a great deal of choice over how they are going to learn in this course IN
#*17 Staff seem more interested in testing what you've memorised than what you've understood AA
#*18 It's often hard to discover what's expected of you in this course CG
#*19 We are generally given enough time to understand the things we have to learn AW
#*20 The staff make a real effort to understand difficulties students may be having with their work GT
# 21 Students here are given a lot of choice in the work they have to do IN
#*22 Teaching staff here normally give helpful feedback on how you are going GT
#*23 Our lecturers are extremely good at explaining things to us GT
# 24 The aims and objectives of this course are NOT made very clear CG
#*25 Teaching staff here work hard to make subjects interesting GT
#*26 Too many staff ask us questions just about facts AA
#*27 There's a lot of pressure on you as a student here AW
*28 This course has helped me develop the ability to plan my own work GS
# 29 Feedback on student work is usually provided ONLY in the form of marks and grades AA
# 30 We often discuss with our lecturers or tutors how we are going to learn in this course IN
# 31 Staff here show no real interest in what students have to say GT
# 32 It would be possible to get through this course just by working hard around exam times AA
# 33 This course really tries to get the best out of all its students GT
# 34 There's very little choice in this course in the ways you are assessed IN
#*35 The staff here make it clear right from the start what they expect from students CG
#*36 The sheer volume of work to be got through in this course means you can't comprehend it all thoroughly AW
37 Overall, I am satisfied with the quality of this course
Note: GT = Good Teaching scale; CG = Clear Goals and Standards scale; GS = Generic Skills scale; AA = Appropriate Assessment scale; AW = Appropriate Workload scale; IN = Emphasis on Independence scale. Items 1-36 = CEQ36; # items = CEQ30 (used in Ramsden, 1991; Richardson, 1994); * items = CEQ23.
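As a further sketch (again, not part of the original instrument documentation), the two-letter codes above can be turned into scale scores mechanically. The key below is transcribed from the item list; the function name `scale_means` is a hypothetical helper, and responses are assumed to have already been reverse-coded where required.

```python
# Sketch: averaging scored CEQ responses into the six scale scores.
# The item-to-scale key is transcribed from the item list above;
# item 37 (overall satisfaction) belongs to no scale.

SCALE_KEY = {
    "GT": [4, 9, 20, 22, 23, 25, 31, 33],  # Good Teaching
    "CG": [1, 8, 18, 24, 35],              # Clear Goals and Standards
    "GS": [2, 6, 11, 12, 13, 28],          # Generic Skills
    "AA": [7, 10, 17, 26, 29, 32],         # Appropriate Assessment
    "AW": [5, 14, 19, 27, 36],             # Appropriate Workload
    "IN": [3, 15, 16, 21, 30, 34],         # Emphasis on Independence
}

def scale_means(scored):
    """Mean scored (already reverse-coded) response per scale,
    skipping scales for which no items were answered."""
    means = {}
    for scale, items in SCALE_KEY.items():
        values = [scored[i] for i in items if i in scored]
        if values:
            means[scale] = sum(values) / len(values)
    return means
```

For instance, responses to items 1, 8 and 18 alone would yield only a Clear Goals and Standards mean, since all three carry the CG code.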