Always Evidence-Based Practice. Or Not?

I have written about research here (Research and rule) and here (Lies, damned lies and statistics), and now I would like to draw attention to an interesting yet less often questioned aspect of educational research: what are the limitations of evidence-based practice?

Phil Wood tweeted several articles and links (by the way, if you do not follow him, you should) and I was intrigued by two in particular: “Why ‘What Works’ Won’t Work” by Gert Biesta (Educational Theory, Vol. 57, 2007) and “Using a Living Theory Methodology” by Jack Whitehead (Educational Journal of Living Theories, 2008).

Biesta takes a critical look at the evidence-based practice rhetoric by analyzing the epistemology behind it, its role in shaping education policies and the relationships it creates within a democratic system.

“(…) evidence-based education seems to favor a technocratic model in which it is assumed that the only relevant research questions are questions about the effectiveness of educational means and techniques, forgetting, among other things, that what counts as ‘effective’ crucially depends on judgments about what is educationally desirable.”

Thus “effectiveness” becomes the sole criterion for adopting a specific teaching method or strategy. Although apparently neutral (who would contest the need to use the most effective strategies?), evidence-based practice carries an implicit assumption that teaching is, in fact, a simple cause-and-effect chain of teaching-learning experiences, and that the teacher’s work is simply to apply a particular intervention.

“Evidence-based practice relies upon a causal model of professional action. It is based on the idea that professionals do something — they administer a treatment, they intervene in a particular situation — in order to bring about certain effects.”

Another aspect is that evidence-based practice relies upon a separation between the means and the ends of professional action:

“Evidence-based practice assumes that the ends of professional action are given, and that the only relevant (professional and research) questions to be asked are about the most effective and efficient ways of achieving those ends.”

The problems are:

1. Teaching does not follow a causal model. Education is a highly complex phenomenon that takes place in an “open and recursive system”.

“If teaching is to have any effect on learning, it is because of the fact that students interpret and try to make sense of what they are being taught. It is only through processes of (mutual) interpretation that education is possible.” In other words, even if we are comfortable with the concept of “interventions” (I definitely am not), these should not be seen as causes but rather as opportunities for “students to respond and, through their response, to learn something from them.” Education is a mediated act.

2. The effectiveness of interventions should be questioned in relation to their moral and emotional impact.

“There is always the question as to whether particular educational activities, strategies (or “interventions”) are desirable in themselves; we always need to ask what are the educational effects of our actions.”

A very simple example is given: corporal punishment. Even if we had conclusive empirical evidence that it works in managing disruptive behavior, should we use it? David Carr argues that “the practice should nevertheless be avoided because it teaches children that it is appropriate or permissible in the last resort to enforce one’s will or get one’s own way by the exercise of violence.” Biesta concludes that this example alone illustrates that

“(…) in education means and ends are not linked in a technological or external way but that they are related internally or constitutively. The means we use in education are not neutral with respect to the ends we wish to achieve. It is not the case that in education we can simply use any means as long as they are ‘effective.’ This is why education is at heart a moral practice more than a technological enterprise.”

Another aspect Biesta analyzes further is the complete faith placed in research. Whether the approach is more or less extreme (as in: research provides a cookbook of methods and strategies to be applied in the classroom), there is a “unanimous expectation that research can tell us what works and it can provide sound evidence”. Of course, I do not view research in this light: I see it as a starting point for reflection on professional practice, and I always look for different sources so that I can engage with it critically.

Drawing on Dewey’s transactional theory of knowing, Biesta concludes:

“Research can only tell us what worked but cannot tell us what works.” 

Interestingly, this reminds me of Hattie’s own words: “Visible Learning is a literature review, therefore it says what HAS happened not what COULD happen.”

Now that Hattie’s work is in fashion, educators are very unlikely to question his research methodology. We are not researchers, and we simply have neither the time nor the skill to dig deeper into such a complex meta-analysis. Academics, however, do, and Paul Thomas has written about this criticism and provided references at the end of his post.

“I bring to the fore a concern about the use of statistics in educational research. First I give a brief review of some common problems associated with careless use of statistics. I then demonstrate that even in a highly profiled and much referenced book, Hattie’s Visible Learning, part of the statistical methods seems to be misunderstood and ends up giving wrong results.” (Arne Kåre Topphol, Associate Professor at Volda University College)

“Qualitative studies are not considered, and methodological problems and debates are neglected.” (Ewald Terhart, Institute of Educational Science, Münster University, Germany)

David Didau has also addressed the limitations of effect sizes and referenced Dylan Wiliam (and some of these “simple artifacts”), as Tom Sherrington did in relation to homework. Oliver Caviglioli also questions the precision of research:

“For the most part, research findings in education take the form of generalizations. And yet their effect sizes are presented with great accuracy. Hattie’s list of top strategies, for example, is specified down to two decimal places. This level of precision is at odds with the lack of identification of the methods themselves.” (Evidence-Based Teaching: the Ifs and Buts)
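For readers unfamiliar with the term, the “effect sizes” in question are standardized mean differences (Cohen’s d). The sketch below uses invented numbers purely to illustrate how such a figure is produced; it is not data from any study Hattie or Caviglioli discusses.

```latex
% A minimal, made-up illustration of Cohen's d, the effect-size measure that
% meta-analyses like Hattie's aggregate. All numbers here are invented.
\[
  d \;=\; \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}
    \;=\; \frac{74.3 - 69.5}{3.4} \;\approx\; 1.41
\]
% With class-sized samples, the second decimal place of d shifts from one
% sample to the next, which is the spurious precision Caviglioli is questioning.
```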

Even the most (currently) praised research does, therefore, have flaws. Should we abandon it? No, but let us be more critical of it.

That leads me to the last point of Biesta’s paper: democracy and research. In its technical role, research provides strategies and methods for achieving a given end. But there is another role that research should play: a cultural one.

“Research can also play a valuable role in helping educational practitioners to acquire a different understanding of their practice, in helping them to see and imagine their practice differently.”

But evidence-based practice works only top-down, handing down meta-models to be implemented in the classroom regardless of the context in which teachers and students work:

“(…) it simply overlooks the cultural option. It focuses on the production of means for given ends and reduces research questions to the pragmatics of technical efficiency and effectiveness.”

This is where Whitehead’s “living theory” comes into play, and he quotes Allender:

“The belief that educational research trumps practice, historically and still, is one of the major obstacles. The results of scholarly inquiry have managed to become the top of a top-down world. The not-so-subtle message is that there is a better known way to teach and teachers ought to change their practices accordingly. Progress depends on giving up the hegemony of scholarly inquiry. Knowledge has many sources, and they are best honored when they are used as part of a lively dialectic.” (Allender, 2008)

To conclude this post:

Read about research. Think for yourself. Use your classroom experience. Nothing beats this combination.

4 Responses to “Always Evidence-Based Practice. Or Not?”

  1. A great post that articulates and sums up so much of what I’ve been thinking recently. I completed my Master of Education a few years ago, wanting to learn more about educational research, but the biggest thing I learnt was to take it all with a grain of salt.

    Your last line sums it up perfectly for me – it’s not about relying on the research, nor is it about doing ‘what we’ve always done’. It’s about bringing the knowledge of both together and being reflective about it. If there were a perfect, foolproof, result-guaranteed way to teach, we’d all be doing it; in the meantime (i.e., forever!), we’ll have to continue using both new knowledge and old experience to do the best we can.

  2. I would add one thing to your conclusion… have evidence of your impact. One of the great flaws that Hattie points out is that literally everything works, and comparing our impact to an effect of 0 is the worst thing we, as educators, can do.

    Thanks for an insightful read.

  3. I used Living Theory to frame my doctorate – a truly liberating experience for this classroom practitioner.
