
The Use of Assessment Strategies to Develop Critical Thinking Skills in Science

Dr Megan H. Hargreaves* (Presenting Author), Queensland University of Technology, m.hargreaves@qut.edu.au
Dr Al T. Grenfell, Queensland University of Technology, a.grenfell@qut.edu.au

Abstract: Critical thinking skills are considered to be invaluable generic skills in science education, particularly at university level. Unfortunately, they are frequently not taught explicitly, the assumption being that students will learn from the implicit values buried deep within our teaching philosophies. In recent years some universities have been attempting to bring the teaching of such generic skills to the forefront, and so improve student learning of these most basic and important skills. This study was part of a larger project undertaken by the Science Faculty at Queensland University of Technology, designed to develop assessment methods that would encourage and develop particular critical thinking skills in university science students. Appropriate skills were identified and categorised into hierarchical skill levels according to the SOLO taxonomy. The taxonomy was applied to various specific disciplines and to levels of teaching within those disciplines. The small project described in this paper consisted of a research inquiry method used to teach and assess critical thinking skills with third (final) year Microbiology students studying Environmental Microbiology. Students worked in groups to identify an environmental problem, research the background, carry out investigations, and present a report in the form of a scientific paper suitable for publication, plus a seminar. Feedback was provided at all stages of the project by assigned tutors. The project has been underway for three years, and the results are very promising. Students' abilities to develop hypotheses, critically assess data, solve problems, draw conclusions and investigate alternatives were demonstrated at high SOLO levels.
Keywords: Critical thinking, Microbiology, Research-based learning, SOLO taxonomy, Assessment

Introduction

Despite recognition by universities over the past few years of the importance of generic skills, and the need to include such skills in lists of graduate outcomes, there is still a pervading culture amongst academics which maintains that the learning of skills such as oral and written communication, problem solving, information literacy and critical thinking should be restricted to an implicit, rather than explicit, curriculum. There seems to be a widely held belief that students will learn by example (Paul, Elder & Bartell, 1997), and will be able to discern, via overt teaching of content, the philosophy and principles that underpin our belief systems and the generic skills that are essential to build, maintain and communicate that content. As student-staff contact hours decrease, the chances of students being able to perform this learning feat intuitively become increasingly remote. It is rapidly becoming imperative that the vital generic skills be transformed into an explicit component of the curriculum.

This is particularly important in science disciplines, which are traditionally content-driven. Academics tend to bemoan the fact that they no longer have time to cover all the facts in their courses, and are loath to waste precious teaching time on such side-issues as generic skills. While many dedicated people are working tirelessly to transform this attitude, it is also possible to strategically address the very genuine concerns of teaching academics, particularly with respect to a perceived lowering of academic standards. The use of assessment as a learning motivator is one way of teaching generic skills without jeopardizing the learning of discipline skills.

The project reported in this paper was part of the Assessment for Critical Thinking in Science (ACTS) project, and was based on a number of aspects of teaching and learning research, which are briefly described in the following section: assessment for learning; critical thinking in science; the SOLO taxonomy; and research-based inquiry learning. Following this background, the study itself will be described in detail, followed by the study results and evaluation.

Background

1. Assessment for Learning

Assessment practice, more than any other practice in higher education, communicates to students the type of learning required of them (Biggs, 1992). Numerous researchers have found that assessment practices impact strongly upon what students learn and the approach they adopt toward study: students alter their approach to learning in line with the perceived requirements of the learning context (Ramsden, 1992; Trigwell & Prosser, 1991). Indeed, Elton and Laurillard (1979) write of something approaching a law of learning behaviour for students, namely that the quickest way to change student learning is to change the assessment system (p. 100).
Biggs (1995) describes this concept as "backwash", wherein assessment drives not only the curriculum, but also teaching methods and students' approaches to learning. Entwistle and Entwistle (1991) found that, while lecturers may claim high quality learning outcomes such as conceptual understanding, critical analysis and independent interpretation, which require students to adopt a deep approach to learning, the assessment practices adopted often seem to encourage much more limited goals, namely the accurate reproduction of course content.

One aspect of deep learning approaches is the development of generic skills such as problem solving, and thinking critically and making judgements. These are two of the eight clusters of abilities identified by Nightingale et al. (1995), which they considered essential for university students to develop regardless of their discipline area of study (i.e. generic skills). They pointed out that there was a need to assess a broader range of learning outcomes, a need which has arisen due to the change in conceptions of the goals of a university education.

The logical implication of these studies is that assessment may, with careful design and considerable thought, be used to encourage and motivate students to develop appropriate generic skills, in addition to knowledge and understanding (O'Leary & Hargreaves, 1997). The assessment design should include not only what the students need to do, but also how they are expected to do it. Academics may put considerable thought into the topic of an assignment, but then choose an essay format. Since few professions use essays as a means of communicating, why not present the results in a format that will develop generic skills, such as the use of appropriate professional oral and written communication techniques? Why not encourage group interactions in the preparation of the result, and have the students peer review each other's work? Why not set assignments that require critical decision making, selection of valid viewpoints, and the proposal and justification of a hypothesis?

The Assessment for Critical Thinking in Science project was based on the understanding that assessment tasks could be used for far more than simply the testing of knowledge. Students could learn by undertaking such tasks as researching the assignment, performing synthesis and analysis of the gathered data, critically selecting the included material and, last but certainly not least, adhering to the formatting requirements of the final presentation.

2. Critical Thinking in Science

In its simplest dictionary definition (Webster's New World Dictionary), critical thinking is described as "characterized by careful analysis and judgment", and "critical", in its strictest sense, "implies an attempt at objective judgment so as to determine both merits and faults". In an attempt to develop a more stringent and comprehensive definition, Scriven and Paul (date unknown) defined critical thinking as: "The intellectually disciplined process of actively and skilfully conceptualizing, applying, analysing, synthesizing and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning or communication, as a guide to belief and action."

Historically, critical thinking can be traced back as far as Socrates, and has developed through the centuries via the writings and teachings of such renowned scholars as Thomas Aquinas, Francis Bacon, Descartes and Sir Thomas More. Scientists such as Robert Boyle and Sir Isaac Newton developed and used critical processes of thought that challenged the accepted views of the world and demanded a rigorous framework based on carefully gathered evidence and sound reasoning. The contribution of twentieth century educational philosophers such as Dewey, Wittgenstein and Piaget has been to highlight the importance of education in fostering critical thinking abilities, in order to challenge prejudice, over-generalization, misconceptions, self-deception, rigidity and narrowness.

It would be easy to assume that, given the historical and well-recognized importance of critical thinking skills, and the present recognition by universities of their consequence, academics would not only be aware of their tenets, but would also be actively seeking ways to teach such skills. Unfortunately, this does not appear to be the case.
In a large study designed to identify the emphasis placed by academics on critical thinking in instruction, Paul, Elder and Bartell (1997) found that, while an overwhelming majority (89%) claimed critical thinking was a primary learning objective, only a small minority (19%) could clearly explain what critical thinking actually was, and only 9% were clearly teaching for critical thinking in a typical class session. A similarly small cohort was able to provide a clear conception of the critical thinking skills they considered important for students to develop, to enumerate any intellectual criteria or standards they required of students, or to give an intelligible explanation of what those criteria and standards were.

While the survey shows poor results for Education academics, they fared better than Arts and Science faculty in most respects, except for the articulation of the basic skills required by students in order to effectively address issues and concerns in their lives. These skills include clarifying questions, gathering relevant data or information, and formulating or reasoning to logical or valid conclusions, interpretations or solutions. Given the demonstrated difficulties encountered by Science academics in identifying required critical thinking skills, the project described in this study designated a set of critical thinking skills at the outset, which was considered by the project participants to be appropriate for science students at university level. These are shown in Table 1.

3. The SOLO (Structure of the Observed Learning Outcome) Taxonomy (Biggs & Collis, 1982)

Although the project team was in accord regarding the designated critical thinking skills and objectives, many felt that a range of levels of achievement would be encountered. The team was convinced that the critical thinking skills would, and should, progress from one level to another throughout the students' program of study at the university. To this end, the SOLO taxonomy was investigated as a means of determining stages of critical thinking skills development. SOLO defines a set of five stages describing learning outcomes, each stage relating to a level of cognitive development. Beginning at the Pre-structural level, the stages progress through the Uni-structural and Multi-structural levels to Relational and finally Extended Abstract, with each stage incorporating the attributes of the former stage and extending to higher abilities. Each SOLO level incorporates certain crucial characteristics of memory capacity, operations relating task content with cue and response, consistency within a response, and closure in making the response.

The SOLO taxonomy was considered ideal for the purposes of the ACTS project, and was superimposed over the critical thinking attributes to produce the matrix in Table 2.

4. Inquiry (Research) Learning

Inquiry is certainly not a new concept in science teaching, although it appears to have had more impact in school education curricula than in universities. An early advocate of the inquiry method was Dewey (1910), who insisted that: "Only by taking a hand in the making of knowledge, by transferring guess and opinion into belief authorized by inquiry, does one ever get a knowledge of the method of knowing." (p. 125)

The premise was that, in order to better demonstrate the dynamic nature of science, science process, that is, the structure of the discipline itself, should be taught, rather than simply sets of irrefutable facts. In this way, students were engaged in learning how science operates, and were able to discuss alternative solutions, interpret data and closely examine conclusions in a similarly complex and non-linear way to that used by real scientists (Huffman, 2002).

In 1996, the National Research Council (U.S.A.) developed a set of standards for science education (NRC, 1996), which stated, with respect to inquiry: "Inquiry is a multifaceted activity that involves making observations; posing questions; examining books and other sources of information to see what is already known; planning investigations; reviewing what is already known in light of experimental evidence; using tools to gather, analyse, and interpret data; proposing answers, explanations and predictions; and communicating the results. Inquiry requires identification of assumptions, use of critical and logical thinking, and consideration of alternative explanations." (p. 23)

It was this vision of inquiry that inspired the study reported here. The skills, procedures and philosophies achieved by an inquiry method were closely aligned to those designated in the critical thinking categories, and so this method was considered to be an ideal tool to achieve the aims of the project.

Research Methodology

As the unit on which this study was focussed, Environmental Microbiology, was taught in the third (final) year of the undergraduate degree, students were expected to have already learned some critical thinking skills in most categories, but not to have progressed to the higher SOLO levels. The inquiry, or research, method was selected for the Environmental Microbiology strand of the project, as it was considered that this unit was ideally suited to such a method. Rather than using a fully constructivist approach, the students were guided and given some boundaries within which to work, mainly due to practical constraints such as cost and time.

The classes (2001-2003) each consisted of approximately 25 final year students in the Bachelor of Applied Science degree at QUT. These students had already undertaken two units of prerequisite microbiology, and were mostly majoring in Microbiology. Thus, they had a broad, if basic, knowledge of discipline-specific skills. They were introduced to the concept of their inquiry, styled as a mini-research project, at the beginning of semester. The students formed their own research groups, although some adjustment of group membership was occasionally necessary. Each group was assigned a tutor, to whom they had a designated time of access (5 hours in total).
Part of their planning process required them to draw on their tutor time carefully, so that they made the most of the available expert consultancy. They were encouraged to discuss possible topic areas and to agree on a suitable one; this choice was often influenced by one student's access to an interesting site or environmental problem. The choice of topic included choice of medium (air, water or soil), identification of a potential problem, and formulation of a hypothesis. This was to be submitted as a research proposal for assessment, and included a rationale, or justification for the study, based on previous studies or published research.

Following the acceptance of their proposal, possibly after requested modifications, students entered the planning phase. They were required to prepare and QC-test all of their own growth media for isolation of the target micro-organisms, and to organize all of the materials used to collect and process their environmental samples. During this stage, the groups learned not only to plan a project, but also to work together, often their biggest challenge. Each group was required to consult the national standards relevant to their project topic, to ensure that their project design complied with such methods or guidelines.

The next stage involved the gathering and analysis of data. Samples were to be collected and transported to the laboratory appropriately, so that the results were not invalidated. The teams processed their samples, organized incubation of the growth media, and rostered their members to carry on the analysis over the following week. During the course of the project the students were responsible for monitoring the progress of their experiment, including terminating the incubation when appropriate and storing the resulting cultures, and also for liaising with the technical staff to ensure access to supervision, lab space and equipment.

Finally, the results were written up in a scientific paper-style report. This required not only a suitable format, but also an acceptable referencing style and, of course, analysis and discussion of results, linking them to previous research where possible. The outcomes of the project were to be viewed in terms of the hypothesis (generally disproved), and the limitations were to be discussed. Students were required to suggest future research in the topic area, as well as to read a range of journal articles when preparing their report, both to justify their research and to expose them to a variety of publication styles and referencing systems. All of the reports were collated each year into a journal-style booklet, a copy of which was sent to each student and distributed to other staff members.

Assessment and Evaluation

The assessment weighting for the mini-research project was decided by class discussion, and was usually in the order of 30% of the total marks for the unit. This consisted of 5% for the proposal and planning, 20% for the final report, and 5% for the seminar presentation. The practice of allocating marks for the initial proposal indicated clearly to the students the importance of correctly identifying a problem and formulating an appropriate hypothesis, as well as investigating current knowledge and identifying the knowledge gaps. The mark for the final report incorporated marks for critical-thinking criteria, as well as for the excellence of results and laboratory skills.
This was important, in that the experimental results were often unexpected, and students needed to learn that things can always go wrong in real science. The students were encouraged to report their interpretation of their data and explanation of the experimental results, and also to link the explanations to existing scientific knowledge and to communicate and justify the explanations (NRC, 1996). All of these aspects were included in the criteria for the grading. The criteria also included most of those specified by the Centre for Critical Thinking. In particular, the following were considered relevant:

1. Is the question at issue well stated? Is it clear and unbiased? Does the expression of the question do justice to the complexity of the matter at issue?
2. Does the writer cite relevant evidence, experiences and/or information essential to the issue?
3. Does the writer clarify key concepts when necessary?
4. Does the writer show a sensitivity to what he or she is assuming or taking for granted (insofar as those assumptions might reasonably be questioned)?
5. Does the writer develop a definite line of reasoning, explaining well how he or she arrives at his or her conclusions?
6. Is the writer's reasoning well supported?

In addition to the formal assessment performed by the academic staff, students performed a peer assessment of their group's performance and dynamics. This information was used to moderate the group mark, so that underperforming students were given a grade appropriate to their input. The tutor for each group also provided confidential commentary on the dynamics of the group.

Evaluation of the project was performed in several ways. An open-ended survey was distributed to students following completion of their project. The responses indicated that they felt they had achieved SOLO levels 3 or 4 in the latter three critical thinking areas, but perhaps only SOLO level 2 in area 1. Written and oral feedback from students has been overwhelmingly and spontaneously positive, particularly when they received their journals, of which they are justifiably proud. Some comments include:

"The research project is a very valuable experience for future work."
"The project is great for practical hands-on experience and getting thinking about scientific reports."
"Group assignment was very interesting: we really got into it. It made me feel like I was a real scientist."
"Student projects are a valuable insight into the real world of this field."

The few negative comments reflected on the timing and weighting of the project. Students expressed a desire to start earlier, and to have the project worth more than 30%, reflecting the fact that these students recognized the importance and relevance of the project, and felt that it was not given sufficient weight and time to reflect this value. Some negative feedback was also received regarding the behaviour of fellow group members, and the difficulties of working in groups. Tutors were alerted to the need to perform some interventions at times, if group behaviour became belligerent.

Perhaps the strongest factor in evaluation was the outcome of the project assessment. As mentioned previously, the critical thinking criteria listed constituted a significant proportion of the expectations for the reports. The majority of reports demonstrated levels of critical thinking at least to SOLO level 3, and frequently higher.
Discussions that were stimulated during the report preparation phase also displayed a high level of critical thinking skills, as students decided which results were relevant, whether they had proved their hypothesis or not (and if not, why not?), and so on. Tutors' reports commented favourably on the levels of critical thinking skills demonstrated by their groups. Encouragingly, improvements were also noted in the separate theory examination, which consisted of essay-style questions requiring similar levels of critical thinking skills. Grades for these questions have improved since the introduction of the research project, and it has been obvious that at least some of the students have learned to critically evaluate information and synthesise multi-dimensional arguments. This was a desirable, if somewhat unexpected, outcome, as it is generally considered quite difficult to achieve transfer of knowledge and skills from one component of a course or unit to another.

A further positive factor is that, since the introduction of the mini-project in its current format, over half of the class has progressed to post-graduate studies each year. This was not the case in previous years. However, even for students who have no desire to undertake higher studies, the critical thinking skills gained are likely to prove invaluable in their workplaces, particularly the more practical skills achieved by their participation, such as working in groups, planning a scientific investigation, and writing and presenting a report.

Conclusions

Assessment of the project outcome, the research paper, demonstrated that the majority of students achieved critical thinking skills at least at SOLO level 3 (Relational) and some at level 4 (Extended Abstract). While students did not provide alternative hypotheses initially, most were able to critically evaluate their original hypothesis, to suggest alternatives that would have been more appropriate, and to suggest alternative experimental designs that would have better achieved their objectives. The majority of the critical thinking skills proposed by the ACTS team were demonstrated to have been achieved by most of the students at moderate to high SOLO levels, as a result of their participation in the research project learning experience.

Table 1: Critical thinking skills and the related student learning outcomes.

Designing experiments and testing hypotheses
In the context of science and mathematics, the student will be able to:
- understand the need to isolate and control variables
- select appropriate experimental techniques
- use adequate sample sizes and avoid sampling bias
- distinguish observations from inferences
- critically evaluate the validity and reliability of data
- establish relationships among variables
- use inductive and deductive reasoning
- calculate uncertainties
- understand the limitations of extrapolation
- use sound statistical approaches

Analysing arguments
In the context of science and mathematics, the student will be able to:
- distinguish among data, opinions, and interpretations
- structure an argument to support a proposal or interpretation
- distinguish among premises, reasons, and conclusions
- judge the credibility of an information source
- identify relevant components that are missing from an argument
- recognise common fallacies (e.g. circular reasoning, irrelevant reasons)

Solving problems
In the context of science and mathematics, the student will be able to:
- restate the problem and the goal in order to consider different problem-solving approaches, particularly with ill-defined problems
- represent the problem schematically
- develop mathematical models
- design algorithms
- select appropriate problem-solving strategies
- consider useful analogies
- make sound decisions on the basis of critically reflective processes
- appreciate the value of persistence

Thinking creatively
In the context of science and mathematics, the student will be able to:
- demonstrate insight in recognising a problem
- recognise patterns and visualise data
- recognise and critically evaluate a number of solutions to a problem
- select relevant information in relation to a problem and make unusual connections

(ACTS project, 2000, Appendix 2. Some of the objectives are based on Halpern, 1997, p. 284.)

Table 2: Matrix of Critical thinking skills and SOLO levels (prepared by Dr D. Nulty, QUT, based on Biggs & Collis, 1982).
Critical thinking skill Designing experiments & testing hypotheses SOLO level 0 Pre-structural Unable to/ cannot design an experiment. Unable to/ cannot test hypotheses. SOLO level 1 Uni-structural Provides one experimental design. Provides one hypothesis. Designs experiments which are convergent - they seek only one answer. Experimental design assumes that the answer can be found in one step. If x then y. SOLO level 2 Multi-structural Provides more than one experimental design. Provides more than one hypothesis. Designs experiments that are convergent but which seek an answer amongst a number of possibilities. Experimental design still tends to assume the answer can be found in one step - but, a collection of single step experiments are offered to choose from. SOLO level 3 Relational Provides more than one experimental design. Provides more than one hypothesis. Also provides a way to relate these designs and hypotheses together - either through some reasoned logic, a theory or through a method of evaluating and selecting the most appropriate design and hypothesis. Designs experiments which are characterised as nested suites of inter-related experiments with a collective purpose. They are diagnostic in their convergence, accommodating a range of possibilities in an ordered and directed manner. There is no a priori commitment to a particular single objective or outcome. Experimental design is characterised by a multistage approach to a determination of the "facts". Provides more than one interpretation of an argument and/or Provides details of qualifying variables/factors and/or Provides limitations of the interpretations dependent upon context. Compares, contrasts and evaluates different arguments and their interpretations. Relates the different interpretations to an overarching theoretical understanding. Provides several solutions to a problem. Explains the relationships between the possible solutions. 
Explains the circumstances under which each of the offered solutions may be selected in preference to the others. SOLO level 4 Extended abstract Provides more than one experimental design. Provides more than one hypothesis. Provides a basis for designing experiments and developing hypotheses which goes beyond the initial problem - and/or may then go on to design those experiments and develop those hypotheses accordingly. This extended view of the problem provides a more sophisticated way to relate these designs and hypotheses together than is seen in the "Relational" level. Designs experiments which are characterised as multiple nested suites of inter-related experiments each with a collective purpose. They are diagnostic, not necessarily convergent. They accommodate a range of possibilities in an ordered and directed manner. There is no a priori commitment to a particular single objective or outcome but characteristically - there is likely to be a reasoned commitment to a particular set of values or beliefs which justify the particular range of approaches offered. Experimental design characterised by a choice between several multistage approaches to a guided exploration of relevant issues. Provides more than one interpretation of an argument and/or Provides details of qualifying variables/factors and/or Provides limitations of the interpretations dependent upon context. Compares, contrasts and evaluates different arguments and their interpretations. Relates the different interpretations to an overarching theoretical understanding. Able to re-frame the argument or arguments in a way that goes beyond the original argument. May analyse the argument from a philosophical rather than simply factual/rational standpoint. Provides several solutions to a problem. Provides solutions which transcend the original problem. Explains the relationships between the possible solutions. 
Explains the circumstances under which each of the offered solutions may be selected in preference to the others. Justifies the solution which transcended the original problem by reference to issues that go beyond only the science involved. eg. by taking account of socio-political and/or economic values. Demonstrates multi-dimensional thinking. i.e. can conceptualise multidimensional relationships between variables which interact and which may also change over time. Thinking is not constrained by the parameters of the matter under consideration. Information is evaluated critically in relation to relevance and interdependencies and context dependence. Makes connections between several items including items which are not directly related. Thinking may display insight, imagination and originality.

Analysing arguments

Unable to/ cannot analyse an argument

Provides one interpretation without qualification nor any contextual considerations

Provides more than one interpretation of an argument and/or Provides details of qualifying variables/factors and/or Provides limitations of the interpretations dependent upon context. The different interpretations are treated as separate, are not related or integrated.

Solving problems

Unable to/ cannot solve problems.

Provides one solution to a problem. Presents this one solution as the only solution.

Provides several solutions to a problem. Presents each solution as equally viable

Thinking creatively

Unable to/ cannot think creatively

Demonstrates a pattern of thinking that is uni-directional: i.e. focussing on one aspect or one strategy or one solution. Thinking is constrained by the parameters of the matter under consideration. Accepts information as either right or wrong. Makes single connections between items that are directly related.

Demonstrates a pattern of thinking that is two-dimensional, or which represents multiple level 1 outcomes. Thinking is constrained by the parameters of the matter under consideration, but is characterised by consideration of multiple parameters. Tends to accept all information uncritically. Makes connections between several items that are directly related.

Demonstrates a pattern of thinking which is three- or multi-dimensional. Thinking tends to remain constrained by the (multiple) parameters of the matter under consideration. Information is evaluated critically in relation to relevance and inter-dependencies. Makes connections between several items, including some which are not directly related.

References

ACTS project proposal (2000). Transforming teaching and learning in science and mathematics: Changing existing cultures through teaching and assessment approaches that focus on the development of students' critical thinking skills. Proposal for QUT Teaching and Learning Development Large Grant, prepared by A. Grenfell, QUT.

Biggs, J. (1992). A qualitative approach to grading students. HERDSA News, 14(3), 3-6.

Biggs, J. (1995). Assessing for learning: Some dimensions underlying new approaches to educational assessment. The Alberta Journal of Educational Research, 41(1), 1-17.

Biggs, J.B. & Collis, K.F. (1982). Evaluating the quality of learning: The SOLO taxonomy (Structure of the Observed Learning Outcomes). New York: Academic Press.

Centre for Critical Thinking, Sonoma State University, California. A sample assignment format. Available online at http://www.criticalthinking.org/University/univclass/AssignFormat.html

Dewey, J. (1910). Science as subject matter and as method. Science, 121-127.

Elton, L.R.B. & Laurillard, D.M. (1979). Trends in research on student learning. Studies in Higher Education, 4, 87-102.

Entwistle, N. & Entwistle, A. (1991). Contrasting forms of understanding for degree examinations: The student experience and its implications. Higher Education, 22, 205-227.

Halpern, D.F. (1997). Critical thinking across the curriculum: A brief edition of thought and knowledge. New Jersey: Lawrence Erlbaum.

Huffman, D. (2002). Evaluating science inquiry: A mixed-method approach. In J.W. Altschuld & D.D. Kumar (Eds), Evaluation of Science and Technology Education at the Dawn of a New Millennium, p. 243. New York: Kluwer Academic/Plenum Publishers.

National Research Council (NRC) (1996). National Science Education Standards. Washington, DC: National Academy Press.

Nightingale, P., Te Wiata, I., Toohey, S., Hughes, C., Ryan, G. & Magin, D. (1995). A resource for improving the practice of assessment in higher education. Innovations in Education and Training International (IETI), 344-355.

O'Leary, J. & Hargreaves, M. (1997). The role of assessment in learning. In M. Hargreaves (Ed), The Role of Assessment in Learning. ASDU Issues, Queensland University of Technology, Brisbane.

Paul, R., Elder, L. & Bartell, T. (1997). California teacher preparation for instruction in critical thinking: Research findings and policy recommendations. California Commission on Teacher Credentialing, Sacramento, California. Foundation for Critical Thinking.

Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.

Scriven, M. & Paul, R. (Date unknown). Defining critical thinking: A draft statement prepared for the National Council for Excellence in Critical Thinking Instruction. Available online at http://www.criticalthinking.org/University/defining.html

Trigwell, K. & Prosser, M. (1991). Improving the quality of student learning: The influence of learning context and student approaches to learning on learning outcomes. Higher Education, 22, 251-266.
