Legal Education Review
IMPROVING THE QUALITY OF HIGHER EDUCATION: LESSONS
FROM RESEARCH ON STUDENT LEARNING AND EDUCATIONAL
LEADERSHIP1
PAUL RAMSDEN
A few years ago David Minton published a book about teaching skills in adult
education.2
To introduce an idea about the
importance of the teacher’s experience as a key factor in the quality of
learning and teaching,
he described his own experience of eating a delicious
dish of garlic mushrooms in a restaurant in the Beaujolais. Madame served,
and
her husband did the cooking. But Minton made a mistake. He asked Madame for the
recipe. “Monsieur,” came the withering
reply: “It is not what, it is who.”
Minton realised then that he had asked the wrong question.
The difference between one dish of garlic mushrooms and another does not
depend
on the recipe, but on the person who cooks it.
Much the same is true of the
quality of university teaching and university courses. There are no certain
prescriptions for good teaching.
There are no foolproof techniques for
guaranteeing quality. There are only teachers, and educational effectiveness
depends on their
professionalism, their experience, and their commitment. We
must ask the right questions in the search for quality. We must emphasise
the
importance of the “who” in order to achieve quality.
What does
it take to improve the quality of learning and teaching in higher education?
More importantly what will help us, as teachers,
to achieve improvement? In this
article I would like to illustrate how some of the ideas from student learning
research might be
used to improve the quality of university education. There are
three areas in which I want to apply these lessons: helping the novice lecturer
to become
more expert; providing appropriate academic leadership; and using methods of
evaluating teaching and courses which combine
the need to assure quality with
the principal purpose of enhancing it.
THE IMPACT OF STUDENT LEARNING RESEARCH
The main lessons from the last fifteen years of
research into student learning will have an everyday ring to most readers. The
ideas
of a previously little-known group of academics from Britain and Sweden
have become accepted into the discourse of quality in higher
education. Powerful
people and statutory bodies now use phrases from what used to be a comfortably
private area of educational research
as part of their lingua franca.
It
seems now generally accepted that we need to look at students’ learning in
the natural environment in which it takes place.
University students’
experiences of teaching and assessment matter more than particular teaching
methods in determining the
effectiveness of their learning. Perhaps less
embedded in academic culture, though it follows directly, is the idea that
“teaching”
means more than instructing and performing and extends
more broadly to providing a context in which students engage productively
with
subject matter. There is now a widespread view in academic development
circles, derived directly from the student learning
research, that we should
concentrate on learning, on what the learner does and why the learner thinks he
or she is doing it, rather
than what the teacher does.3
And, if teaching is about helping to make learning possible, assessment becomes
defined as being about understanding students and
what they have learnt.
Effective assessment helps students develop the skills of self-assessment.
I
want to go beyond this, however. These examples can be translated into another
set related to improving educational quality. University
teaching takes place in
a context, and understanding the academic’s experience of academic work is
a key to understanding how
to improve it. Improving teaching is about helping to
make teacher learning possible; evaluating teaching is about getting to know
teachers and their teaching. Effective teaching is professional, self-evaluating
teaching — and effective evaluation helps
develop the skills of
self-evaluation.
IMPROVING TEACHING: HELPING NOVICES BECOME EXPERTS
One reason for the impact of the research on student
learning is that it reflects, in a special way, how accomplished teachers go
about their work. The great body of research on teaching expertise makes it
clear that the experts focus primarily on what their
students are doing and
thinking. Expert teachers look at teaching from the point of view of the
learner, not the teacher. There is
a strong association between this way of
teaching and the quality and quantity of student learning.
Novices as well
as experts use models or theories of teaching when they teach. Experts and
novices express different conceptions of
teaching, and different intentions
underlie the strategies they use.4 For novice teachers,
the immediate reality of class management, lecture notes, teaching materials,
and numbers of students looms
large. They want to do what I did when I gave my
first lecture — to fit into the existing environment. How did my
predecessor
teach this class? How can I do the same? They see teaching primarily
as telling or transmitting knowledge, and organising it so that
it can be
efficiently transferred from teacher to learner. Events in the classroom are
interpreted from the teacher’s point
of view alone, and their implications
for students’ learning are rarely perceived. Novices typically believe
that reflection
on the effects of teaching on student learning is “only
theory”: they sharply distinguish educational theory from
“reality”.
The expert differs not only in terms of strategies
and the effectiveness of his or her students’ learning, but also in terms
of conceptions and intentions. Naturally the expert teacher often does the
things that a novice does. But something like class management,
for example,
does not usually occupy the foreground of his or her thinking. The expert thinks
about teaching as interacting with
students and monitoring their learning. This
may involve some presentation of information, but that is only a step on the
way; it
is not what an expert thinks teaching is. He or she intends to make the
educational environment, not simply respond to it, and sets
the ground rules by
making explicit what is expected from students as far as he or she is concerned,
not by reference to other teachers.
The expert is very alert to classroom
events, and fully understands the value of reflection on practice as a way of
adapting and
improving.
Although you can tell teachers about effective
strategies, this is not enough to improve their students’ learning, since
they
will often not use them, or will misuse them, unless they also change their
intentions and their conceptions. Failure to understand
these relationships
remains one of the serious errors of conventional staff development, just as it
remains the fundamental misconception
of conventional study skills courses. The
mistake has been repeated in many texts on teaching methods in higher education.
Too often,
the lecturer’s education in teaching methods has stopped at the
strategies. Sometimes it encourages a split between conception
and strategy by
marginalising theory (“So much for the theory about how rats learn to run
mazes. Now on to the real world of
teaching large classes in a converted
cigarette factory”). It is interesting that this dualist ontology —
quite different
from modern views of how professionals learn and practise
— has been recently formalised in an unfavourable contrast between
“practical strategies” and “theoretical ideas about
teaching” in universities.5 The dualist
conception embodies the novice’s error.6
The
results of ignoring the importance of teachers’ conceptions in staff
development are familiar to every staff developer.
If you understand teaching as
information transmission, and intend so to teach, how will you react to the
suggestion that you should
use buzz groups in lectures? Probably by saying that
you do not have time for student activity; you will not be able to get through
all the content. If you try a student-centred strategy, you will probably not
take it very seriously, and when it fails to work,
you will probably abandon it
rather than try to make it work; you may use it superficially in a way contrary
to its purpose. Teaching
strategies are important, and teachers must learn them;
but they must learn them and change their understanding if the strategies
are to
lead to better student learning.7
These ideas about
how university teachers learn to teach are expressed in the best of the
programmes for new staff and accredited
courses for lecturers in Australia and
the UK.8 SEDA’s scheme in particular represents
exemplary practice in professional teacher education. We are seeing a change
from a
dualist model to a unified one, where ideas about how students learn and
how assessment and teaching affect their learning are integrated
with the
experience of teaching. In these programmes, “classroom strategies”
and “theory” are in constant interdependence, each taking
its meaning from the other. The new pattern is similar to the general movement
towards more
problem-based and experience-driven professional
education.9 It reflects today’s understanding of
how students and teachers learn.
A recent study of Australian new academic
staff programmes provides support for the proposition that courses of this type
lead to
more effective teaching than the traditional
ones.10 It also confirms the conclusions of a long-term
investigation of American courses for new faculty.11
The naive dualism of foundational theory versus teaching practice in university
staff development is no longer tenable.
The best courses involve staff in a
lengthy programme, related to their special needs alone, in which there are many
opportunities
for inter-colleague interaction. Even the most carefully designed
course, however, may have little impact on teaching quality unless
the much more
powerful effect of the academic’s normal environment — the
department, school or faculty — is taken
into account. There is no point
in having great ideas about new ways to help students to learn if the
departmental environment is
hostile to their application. New academics soon
abandon their innovatory strategies if their colleagues give them no
encouragement
to use them. They adapt to the context in which they find
themselves. This is another commonplace of student learning research that
we
must apply to educational development.
THE CONTEXT OF TEACHING: LEADERSHIP THAT ENABLES
Recently I was discussing the problem of how to
recognise and reward good university teaching with the Deputy Vice-Chancellor
(Staffing)
of an Australian university. The talk went through the usual topics:
perceptions that good teaching went unrewarded in comparison
with research; the
issues of how to measure good teaching; the ways of altering promotion systems
to take more account of teaching;
the use of portfolios and the pitfalls of
student ratings. Then he said, “Do you know the single most important
thing that
would lead to better teaching, and a feeling that good teaching is
properly rewarded? Appoint the right Vice Chancellor”.
He is right, of
course. Promotions are a necessary but small part of recognising and rewarding
the effort put into teaching. The problem
is much more fundamental. It is a
problem of environment and leadership. Its solution requires creating the
conditions in which staff
feel empowered to help their students. It involves
helping them feel that their work is valued, and praising and supporting their
efforts to assist their students, not ignoring or criticising them. It implies
the time and the resources and the behaviour that
helps teachers learn. It means
helping them to learn from each other.
There is an analogy between what
student learning research says about the effect of the context of learning on
approaches to learning
and the effects of the academic environment on approaches
to teaching. Just as good teaching can encourage active engagement with
academic
content, so good leadership can encourage staff to give their best to their
students. Good leadership helps create an environment
for teacher learning and
collaborative problem-solving.
Studies of school effectiveness demonstrate
this point so faithfully12 that I am surprised at how
little attention is still paid to academic leadership by educational developers
in higher education. The
nature of the principal’s leadership is the
crucial variable in determining the satisfaction and success of the staff. In a
good school, where the children learn a great deal and the staff enjoy their
work, the principal is typically someone who knows what
he or she wants the
school to achieve and helps teachers to work together towards shared goals. He
or she is primarily interested
in solving educational problems rather than
administrative ones. These principals provide leadership that enables staff to
operate
as a team. They monitor the effects of their management strategies,
striving continuously to improve them. They use consistent delegation
policies.
They model risk-taking in teaching. They emphasise educational values. They
focus on the value of caring about students
as a critical aspect of what the
school does. They actively use knowledge and ideas from outside the school to
improve what goes
on within it.13
To push this
analogy even further, the findings of studies of secondary school
teachers’ perceptions of what a good principal
does reflect those of
studies of students’ perceptions of good
teaching.14 The student learning research tells us
about the importance of intellectual challenge, clear goals, creating an
environment where
students take responsibility for their own learning, encouraging
cooperation between students, concern and respect for students as learners
and
people, understanding what students have learnt and what they still need to
learn, giving excellent feedback on learning, continuously
monitoring the
effects of one’s teaching in order to improve it, seeing teaching as a
conversation or dialogue rather than
a transmission process, and understanding
teaching as a process of enabling learners, rather than a set of recipes.
Each of these factors in good teaching has a counterpart in effective
academic leadership. If teaching is helping to make learning
possible,
educational leadership is helping to make effective teaching possible.
Proficient academic leadership involves building
a shared vision through
establishing clear goals, improving communication, and creating challenge in an
environment of collaborative
decision making and teamwork where each individual
feels a responsibility for achieving excellence in teaching and learning. It
involves
engaging in a conversation or dialogue with teachers. It implies
encouraging staff to become involved in the process of evaluating
and improving
their teaching as a normal part of their work. Very importantly, it also implies
putting ideas into practice through
observable action — nothing is more
disheartening than rhetoric about supporting good teaching that is not backed up
by appropriate
management behaviour that recognises the value of good teaching.
Viewed like this, leadership is indeed a process analogous to good teaching;
and like good teaching, its highest aim is to achieve
redundancy. Of the best
teachers the students say “We learned it all without you”. Of the
best academic leaders the academics
say “We did it all ourselves”.
Investigations of research productivity generally support the argument that
leadership and the academic context are important determinants
of individual
research output. Cooperatively-managed academic units with participative,
goal-directed management lead to higher
productivity.15
When researchers move from more supportive environments to less supportive ones,
their productivity declines; and vice-versa. The
context of research affects the
researcher’s activity and output.
Is this true about university
teaching? How does the context of teaching affect the quality of teaching? Mike
Prosser, Keith Trigwell,
Elaine Martin and I are looking at the associations
between the academic environment and lecturers’ approaches to teaching
in
an Australian Research Council funded project. Trigwell and Prosser have
previously identified different approaches to university
teaching among science
lecturers which are similar to the “knowledge transmission” versus
“facilitating learning”
conceptions of teaching which others have
previously described.16 The different approaches are
empirically connected to the use of different teaching strategies and, moreover,
they appear to elicit
different approaches to learning among students.
Our
hypothesis now is that these approaches to teaching, like students’
approaches to learning, are related to the perceived
academic environment. We
hope to be able to trace a path from departmental management to the quality of
student learning (see Figure
1). Early indications17
are that perceptions of transformational leadership (“The head motivates
you to do more in your teaching than you ever thought
you could”),
participatory management (“The head of this department listens to what you
have to say”) and teacher
involvement (“People discuss their
teaching problems with each other here”) may well form a link between
academic management
and good teaching.
FIGURE 1
Leadership and the Quality of Student Learning

Departmental leadership & management → Perceptions of ‘context of teaching’ → Approaches to teaching → Quality of student learning
ASSESSING EDUCATIONAL QUALITY: TAKING CONTROL OF EVALUATION
According to the student learning research,
assessment gives messages about the kind of learning required. If so, then
evaluation
gives messages about the kind of teaching required. The third area
where student learning research impacts on educational quality
is evaluation.
Any credible scheme for evaluation has to take account of two apparently
conflicting goals: the need to provide publicly-verifiable
information for
purposes of accountability and the need to develop a commitment to everyday
self-evaluation for improvement purposes.
If enhancing the quality of student
learning is the primary goal, it is imperative to prevent the task of collecting
and demonstrating evidence
from overwhelming the process of reflection and change. It is
no use simply ignoring the need for rigorous reporting of good data,
but it is
no use either pretending that perceptions of the assessment process will not
determine its effectiveness. Luckily, if we
get the improvement part right, the
accountability part is generally sure to follow. Good evidence of improvement is
automatic evidence
of accountability.18
If Minton
is right about the importance of the “who” in teaching, then the
methodology must build a sense of ownership
in and responsibility for the
process among teachers. Like a good student assessment regime, it should provide
plenty of feedback
and encourage openness and cooperative activity. It should
minimise anxiety and the sense of being continually inspected. It should
be
valid, beneficent, and fair. It should be the subject of a dialogue between
assessors and assessed. It should not do anything
that discourages people from
trying to criticise their performance candidly and from trying to use the
information they gather about
their performance to enrich what they subsequently
do. It should encourage responsible self-assessment. It should be integral to
teaching and learning, rather than additional to teaching and learning. It must
lead to trustworthy judgements about academic performance.
It is interesting
that the process of quality assessment of Australian universities has, for all
its other imperfections, used an
evaluation model that is remarkably up to date
and congruent with the student learning research findings about the effects of
assessment
on the quality of learning. Instead of using an expensive and clumsy
inspection model, the Australian system has approached the problem
by requiring
reliable self-evaluation linked to institutional objectives, followed by
external audit of the results of this process.
The message that this system is
trying to convey is that outcomes and evidence of improvement matter more than
the existence of quality
management processes in themselves; and that the
responsibility for demonstrating excellence lies with the institution. The
external
assessment is an attempt to verify the university’s claims. The
results of the assessment are linked directly to funding incentives.
There are
immediate parallels with systems of student self-assessment.
Unfortunately
the internal quality management processes of Australian universities have not
always achieved the same level of rigour
and fairness. The analogy is again
appropriate: even the best assessment is sometimes interpreted by the students
in a way different
from the intentions of the teacher. In many cases far too
much emphasis has been placed on quite trivial (but often costly) processes
(such as the existence of compulsory student ratings of lecturers) which may
have damaging side-effects on teacher morale and student
goodwill, and too
little on evidence of improvements in the quality of student learning outcomes.
To use a geological time scale,
millennia seem to have been spent on devising
“objective” quantitative indicators, a day on what the indicators
are supposed
to indicate, an hour or two on whether the indications show that
students are learning well, and seconds on whether their learning
has improved.
My conversations with quality managers in large corporations have convinced
me of what I had never thought I would be convinced of
— that universities
have much to learn from the best industry practice on quality. Their approach to
quality, unlike many of
the universities’, is closely aligned with the
student learning research findings. Excellence in products and services requires
a focus on cooperation (even between competitors in the same market, in
benchmarking best practice, for example), commitment, rigour,
ownership of
processes and vision, adding value, and above all an environment where
improvement is normal and support for improvement
is freely given. In contrast
the universities’ approach to quality in learning and teaching often still
reeks of unskilful
assessment practice, especially a conception that high
standards are the almost automatic consequence of high quality inputs (good
students, good researchers, plenty of money), high pressure to perform, and high
levels of secrecy and competition. No wonder that
ICI Australia, Eastman Kodak
and the rest are sceptical about the pretensions of higher education to claim a
special place at the
table of quality management. I have been involved in two
schemes at Australian universities which have tried to grapple with the
problem
in a more adroit way. Both schemes draw in part on the excellent work of the
Scottish Office Education Department in devising
qualitative performance
indicators for secondary schools,19 as well as on the
lessons from student learning research, and the valuable work that has been done
on student self-assessment in
the past few years. It is absolutely necessary to
provide course teams and departments with models on which they can base their
self-evaluations,
and suggested criteria which might be used. It is equally
important to ensure that teaching staff develop ownership over the process,
and
that they find it useful.
The Griffith University scheme (Figure 2) is based
on the principle that good universities, like good learners and good teachers,
are constantly learning about how they can improve their performance. Quality
improvement and the development of students are primary
purposes. Separation is
made between the process of evaluating individual teaching (column 3) and the
process of evaluating courses
and faculties (columns 1 and
2).20 Examples of excellent performance, phrased
generally, are provided initially to help course teams to evaluate different
aspects of
their work (see Figure 3).21 New examples
related to particular disciplines are constantly being created from actual
practice. Principles that guide the process
are listed in Figure 4.
FIGURE 2
Evaluating and Improving the Quality of
Teaching and Learning at Griffith University
Basic principles: The process of audit and improvement should be
systematic, flexible, empowering, devolved, collaborative, and rigorous.
A
variety of different sources of evidence should be used. The process should
consume the minimum amount of time and resources consistent
with demonstrating
accountability and genuine improvement. The process of rewarding and recognising
individual teachers’ performance
should be kept distinct, as far as
practicable, from the process of reviewing courses.
For each of the
three columns: Publicise definitions of effective teaching and learning;
agree criteria; list appropriate sources of evidence; undertake development
exercises (eg workshops on how to evaluate a course); set up trustworthy
reporting processes.
Audit and improvement of the teaching and learning environment (University/Faculty level):
• Academic staff profile analysis (eg qualifications)
• Staff professional activities (eg teaching grants secured)
• Staff development (eg participation in seminars and courses)
• Faculty management: leadership and planning (eg effectiveness of Dean’s leadership in shaping the learning and teaching environment)
• Evaluation processes (eg surveys of student experiences and their effects)

Audit and improvement of courses (Faculty/School level):
• Quality of learning and teaching (eg quality of teaching process; staff-student relationships and course ethos)
• Quality and relevance of subjects and courses (eg expert review, including external stakeholders; links between subjects)
• Student progress and achievement (eg quality of learning outcomes; responsiveness to particular needs)
• Management for excellence in teaching and learning: leadership and planning (eg effectiveness of Head’s leadership in promoting successful learning and teaching)
• Evaluation processes (eg existence of effective methods for monitoring student progress)

Recognising and rewarding teaching (Individual level):
• Planning and preparation for teaching (eg teaching sessions have clear goals for learning)
• Processes of teaching (eg explanations and questions are clear and at appropriate level)
• Assessment of students and their learning outcomes (eg students obtain high quality
• Evaluating and improving teaching (eg information from assessment used to modify teaching)
• Subject/course coordination and leadership in teaching (eg models good practice and innovation in teaching)
• Scholarship in teaching (eg publications on teaching)
FIGURE 3
A Qualitative Performance Indicator for
an Undergraduate Course
Indicator TL3: Assessment as part of teaching and learning
(Based on material in SOED, 1992)
FIGURE 4
Griffith Institute for Higher Education:
Principles of Quality Management for Teaching and Learning
• Quality improvement is a primary purpose
• Evaluation should seek to empower staff
• Focus on the quality of learning rather than the teaching process
• Quality outcomes matter more than the existence of quality procedures
• The distinctive mission of the University is vital
• Self-evaluation with stakeholder input should precede external audit
• International referencing is expected
• Excessive use of student questionnaires must be avoided
• Course evaluation is separate from subject and teaching evaluation
• Good teaching should be recognised and rewarded by appropriate behaviours rather than symbolic gestures
• Leadership in teaching and learning is crucial
Out of the process of self-evaluation, which will have identified strengths to build upon and weaknesses that need to be addressed, groups of teachers devise development plans to improve the quality of their courses and their students’ learning. These plans then become performance objectives against which they and external auditors can evaluate progress made. This evaluation process is facilitated by the use of quantitative indicators of effectiveness such as the results of the Course Experience Questionnaire,22 results from employer surveys, and data about student completion and progression. These quantitative indicators are useful for confirming strengths and weaknesses, assessing improvement, and assisting inter-university cooperation and sharing of good practice.
The scheme at the Royal Melbourne Institute of Technology (RMIT) makes similar use of descriptions of criteria and examples of different levels of performance, and again emphasises the importance of developing ownership of the evaluation process through dialogue. An example of the results of this work — showing the development of self-assessment criteria — appears in Figure 5. The RMIT procedures involve individual lecturers’ reports to course coordinators and course coordinators’ reports to “Directors of Teaching”.
FIGURE 5
Levels of performance for “Quality
of the Teaching Process” (sub theme “Clarity of
questions/explanations
and linking topics”)
(Two levels of performance written by a group of social science lecturers
to evaluate their own teaching)
Excellent performance:
“Teachers clearly introduce concepts, stress key ones, and make links between them. They use language that most students find comprehensible. Concepts and explanations are demonstrated in examples that are relevant to the experience of most students. Learning is centred around the application of ideas, not the repetition of words. Concepts are introduced in steps, moving from simple to complicated; teachers check at each stage that students understand. Topics are introduced in logical sequence. Classes are well presented and handled. As a result, most students can recognise and use explanations and theories in new cases.”

Performance showing more weaknesses than strengths:

“Teachers present concepts and explanations unsystematically. There is little attempt to link them to the experiences and understandings that students already possess. Concepts, explanations and topics are not connected logically, and teachers do not always ensure that students understand at each stage. Expositions are not satisfactory; there is evidence that materials such as visuals and handouts are not well prepared and not closely linked to presentations. As a result, many students can echo the teachers’ words but not use the concepts in new situations.”
Both these schemes imply the need for leadership development programmes, since so much of the effectiveness of a rigorous self-evaluation process depends on strong support from senior staff. Both also involve the support of designated leadership positions which the respective academic development units have worked to establish in cooperation with senior management. RMIT has its “Directors of Teaching” and Griffith has its Deputy Deans (Teaching and Learning). The importance attached to these posts is reflected in the emoluments they attract and the heavy responsibilities they demand. The function of their incumbents is to educate, enable, introduce new ideas, model best practice, and remove impediments to excellent teaching and learning. They signify a conception that the quality of learning and teaching is an issue that should be tackled by the smallest academic units that can deliver it, a view entirely compatible with the quality movement beyond higher education.
CONTRASTING MODELS OF LEARNING AND TEACHING
The main ideas about improving teaching, educational leadership in universities, and evaluation I have been trying to express may be placed in their wider context by reference to the two different models shown in Figure 6.23
FIGURE 6
Contrasting Models of Teaching,
Educational Leadership, and Evaluation in Higher Education
Model I: “Disseminating knowledge” versus Model II: “Making learning possible”

Epistemological assumptions
Model I: Knowledge exists separately from the people who possess it. Knowledge can be conveyed. Concepts and facts are prerequisites for problem-solving in a field of study. Theory and practice are separate domains.
Model II: Knowledge doesn’t exist apart from people. Knowledge must be reconstructed by learners. Facts and concepts are learned as they are used. Problem-solving, concepts and facts are mutually dependent, in learning as well as in expert practice.

Evaluation and audit
Model I: Measurement focused, externally directed and value-free. Preferred indicators are quantitative, such as pass rates and student ratings.
Model II: Process focused, user directed and permeated by values. Preferred indicators are qualitative, such as student comments and evidence of changes in conceptions.

Educational effectiveness
Model I: Essentially technical: a problem to be solved.
Model II: Essentially problematic: an enduring human dilemma.
We are seeing a shift from the first model to the second, as undergraduate
education becomes more like a mass system and focuses more
on developing lifelong learning competence, including generic employment-related skills,
rather than on preparing a research elite.
This changing social context of
university education is presumably the reason why research on student learning
now seems to be so
relevant. The transition to a more student-centred view of
undergraduate education has been foreshadowed before, of course, notably
at the
time of the Hale Report and the founding of the “new” universities
in the 1960s; but the momentum was never as
great as it is today. It is now,
surely, an unstoppable phenomenon.
Model I is essentially a lecturer- and
discipline-dominated view of undergraduate teaching and learning. Lecturers
teach (or more
likely lecture); students do the learning. Its conception of
learning is foundationalist: first learn the basics before you go and
use your
knowledge. It emphasises the idea that learning is a profoundly individual
phenomenon. Assessment is largely about marking
and classifying and competition.
Teaching is improved through practice alone. Evaluation is about
“objective” numbers.
The second model is focused on learning and
students rather than on teaching. The problem is how to engage people with the
things
they learn. Its implications are consonant with the findings from student
learning research; but more significantly, it reflects
the changed environment
in which universities in the UK and Australia now find themselves. Model II
recognises the importance of
the social context of learning and the need in
undergraduate education to integrate knowledge with its practical use. It
focuses
on assessment as part of learning. It stresses the similarities between
how experts work and how students should learn to be experts.
It embraces views
of academic leadership and evaluation such as those I have tried to describe
above.
Of course we must not interpret Figure 6 in trivial dualist terms. Knowledge is often cumulative. Good teaching generally does involve good presentation. Effective leadership almost invariably requires transactional strategies as well as transformational insights. Quantitative indicators of performance can marry happily with qualitative ones. Grading students is not a bar to giving good feedback and focusing on formative processes. Producing publicly-verifiable data on educational performance should go hand in hand with self-evaluation. It is a matter of emphasis and not of simple dualities; it is a matter of balancing and integrating apparent opposites in an educationally valid way.
Remembering Minton’s conclusions about the right questions and what makes the difference between quality and mediocrity, it is the last row of Figure 6 that matters most. We will continue to return to the same issues as we try to improve our students’ learning, our management of university teaching, and our evaluation of the effectiveness of higher education. In approaching these issues the search for right answers is a snare and a delusion. “Many of the issues facing teachers are not problems to be solved,” says Welker. “They are dilemmas to be repeatedly encountered. Dilemmas don’t require answers; they require enduring human responsibility”.24
1 This article is based on a keynote paper presented at the First International Improving Learning Symposium held at Warwick University in September 1993. An earlier version appears in G Gibbs ed Improving Student Learning: Theory and Practice. Oxford: Oxford Centre for Staff Development.
2 D Minton, Teaching Skills in Further and Adult Education (Basingstoke: Macmillan, 1991).
3 TJ Shuell, Cognitive Conceptions of Learning (1986) 56 Rev of Ed Research 411; JB Biggs, Teaching: Design for Learning in B Ross (ed) Teaching for Effective Learning (Sydney: HERDSA 1990); JB Biggs, From Theory to Practice: a Cognitive Systems Approach (1993) 12 Higher Ed Research and Dev 73.
4 K Trigwell & M Prosser, Approaches Adopted by Teachers of First Year University Science Courses, in A Viskovic ed Towards 2000: Trends in Tertiary Teaching (1993) 14 Research and Development in Higher Education 223; KS Cushing, DS Sabers & DC Berliner, Olympic Gold: Investigations of Expertise in Teaching (1992) Educ Horizons 108.
5 D Bligh, Review of Learning to Teach in Higher Education (1993) 18 Studies in Higher Educ 105.
6 It also makes the nature of the error invisible to those who have the conception, an effect that has been noted in phenomenographic studies of learning.
7 K Trigwell M Prosser & P Taylor, Qualitative Differences in Approaches to Teaching First Year Science Courses (1994) 27 Higher Educ 75.
8 Bligh supra note 5; L Andreson, Many Roads to One Place: Which Place? Which Roads? Paper presented to the Invitational Symposium on the Experience of Quality in Higher Education, Griffith Institute for Higher Education, Griffith University, 3-5 July 1994; Staff and Educational Development Association (SEDA) The Accreditation of Teachers in Higher Education (Birmingham: SEDA, 1994).
9 DA Schön, Educating the Reflective Practitioner (San Francisco: Jossey-Bass, 1987).
10 E Martin & P Ramsden, Evaluation of the Performance of Courses in Teaching Methods for Recently Appointed Staff (Canberra: AGPS, 1994).
11 R Boice, The New Faculty Member: Supporting and Fostering Professional Development (San Francisco: Jossey-Bass, 1992).
12 KS Louis, Beyond Bureaucracy: Rethinking How Schools Change, invited address, International Congress for School Effectiveness and Improvement, Norrkoping, January 1993.
13 GA Donaldson Learning to Lead: The Dynamics of High School Principalship (New York: Greenwood Press, 1991).
14 Louis supra note 12; P Ramsden Learning to Teach in Higher Education (London: Routledge, 1992) ch. 6.
15 CJ Bland & MT Ruffin, Characteristics of a Productive Research Environment: Literature Review (1992) 67 Academic Medicine 385.
16 See for example K Samuelowicz & JD Bain, Conceptions of Teaching Held by Academic Teachers (1992) 24 Higher Educ 93.
17 K Trigwell, P Ramsden, P Martin & M Prosser, Teaching Approaches and Leadership Environment, paper presented at the Annual Conference of HERDSA, Rockhampton, July 1995.
18 I am indebted to Lee Harvey for this way of expressing the association.
19 Scottish Office Education Department (SOED) Using Performance Indicators in Secondary School Self-Evaluation (Edinburgh: SOED, 1992).
20 I am not able to go into the evaluation of individual teaching performance in this article, but the development of criteria and the use of individual portfolios in order to help recognise and reward good teaching is part of the wider scheme.
21 Based on material on SOED, supra note 19.
22 J Ainley & M Long The Course Experience Survey: 1992 Graduates (Canberra: AGPS, 1994).
23 I am most grateful to John Bain for allowing me to adapt his original ideas and table into the present form.
24 R Welker, Reversing the Claim on Professional Status: What Educators can Teach Experts (1992) Educational Horizons 115.