Phonics is not a fix-all drug that will get all children reading

How can there be such high-profile disagreement about an issue as extensively researched and important as the teaching of reading to young children? In July, a group of teachers and phonics consultants wrote to the Times Educational Supplement, defending the Year One phonics check – a test given to all five-year-olds to examine their ability to decode unfamiliar words. This was in response to an earlier letter from teachers, academics and representatives of teaching unions who had called for its abolition.

The reason for this disagreement lies not so much in the difficulty or inaccessibility of the research but in some widespread assumptions about the kind of evidence that should inform teaching.

The Department for Education currently promotes a model of “rigorous” educational research that draws on the use of evidence to inform practice in other sectors, most notably medicine. But rather than helping to select the best possible educational methods, this search for evidence forces educational activity to follow the model of the medical “intervention”.

There are many critics of the Department for Education’s guidance on teaching phonics. None of them denies that some of the advice has a place in the early teaching of reading.

Teaching the regular “phonic” correspondence between letters or groups of letters and particular sounds, as well as the process of blending these sounds from left to right to form whole word units, has been acknowledged as part of good educational practice.

But viewed as one of a range of approaches to learning to read, phonics cannot be pinpointed as a discrete “intervention”, and therefore as the “best” reading intervention from a range of options.

Teachers go off script

An intervention has distinct properties which can be reproduced across contexts. It can be given to one group and withheld from another – the core principle of the “control” in the randomised controlled trial. It has a beginning and an end so that its effects can be measured. The obvious example is a course of a drug, which has a particular quantity, regularity, and chemical composition.

But the problem with transferring evidence-based practice to the educational context is that teachers do not teach through interventions. The interactions between teachers and pupils cannot be broken up into the kinds of discrete activities tested through a randomised controlled trial.

The only way an educational activity could be given to one group and withheld from another, have a beginning and an end so that its effects could be measured, and then be effectively reproduced, is if the activity could be restricted to a script or reduced to a resource (such as a book or a film). The teacher would have to stick closely to the script in order for the intervention’s effects to be measured against those pupils who didn’t get taught that way.

But any teacher who has tried to follow a lesson plan knows that classroom interaction cannot be captured in scripted activity. A teacher’s duty to continually monitor the progress of students as they learn means they will constantly be making decisions in the moment about how to re-phrase questions, encourage particular individuals in their learning, or make use of additional examples. They need to go off script.

Too many eggs in one basket

The government’s guidance on phonics is a case in point. It emphasises the “first and fast” principle – that in the earliest stages, phonics is to be taught exclusively as the way children read. The introduction of other reading strategies, such as inferring the word from narrative context, or using other clues such as pictures, is deemed counter-productive to the aim of developing phonic knowledge.

Schools are encouraged to select from a range of available commercial programmes, each of which adheres to core phonic principles set out by the Department for Education. The guidance implies these programmes will have most value if, like a course of antibiotics, they are seen through to completion without detrimental interaction with other programmes.

Building on this, the Year One phonics check is designed – with its incorporation of nonsense words and words out of meaningful context – to explicitly rule out the possibility that students are employing other strategies.

The result of this, as the first open letter claimed, is that the phonics check tests the application of the intervention rather than its intended result: literacy.

It is easy to see how interventions like these are attractive at a policy level – particularly for those who see widespread problems with poor literacy as an epidemic that governments should be able to cure. But the question remains whether evidence has supported the identification of the best method to teach reading, or whether the desire for an evidence-based solution has forced that solution to take on the character of an intervention.

I believe that teachers are rarely concerned with employing an intervention, far less the “best” one. They are more often concerned with judging how to go on with a particular student, or what to do with a particular student at a particular time.

This is not to say that a teacher’s practice and the learning of his or her students are not enriched through a career-long interaction with the educational research community, as found by a recent enquiry.

The Department for Education has a responsibility to ensure education research is directed to areas of pressing concern and that this research is made available to teachers. But the result of identifying and endorsing particular interventions through policy, in the manner of the phonics check, is the homogenisation of teachers, students and their classroom situations.

This will come at the expense of teachers’ freedom to use their practical and professional wisdom to make informed decisions about the best ways to respond to the needs of individual students.

David Aldridge

Principal Lecturer in Philosophy of Education at Oxford Brookes University

This is a post I originally wrote for The Conversation. They allowed me to reproduce it and you can find the original here.


More on Phonics

@oldandrew replied to my earlier blogs on phonics here: http://teachingbattleground.wordpress.com/2014/07/08/revisiting-the-debate-over-the-davis-phonics-pamphlet-part-3/

I posted a response on the site, which I have reproduced here.

Thanks Andrew. If you would kindly print these comments in your thread, I’m happy that this debate has probably run its course. I take each point in turn.

1. If you could claim with reference to evidence that there is some distinct method – you call it ‘SSP’ – that ‘consistently gets the same result’, then you would indeed trump the argument. But the point is that you are not warranted in claiming that it is SSP that ‘consistently gets the same result’, at least not if the intended result is the ability to read. The evidence could not possibly support such a claim given the complexity of the classroom contexts you are prescribing for.

You also waver between implying that SSP has bounded, distinct and exclusive properties (exactly along the analogy of the chemical composition of a drug) and that it is a blurrier group of mixed practices (as you claim in response 2). Your own position is inconsistent here. The more you interweave your insistence on phonics teaching with all sorts of other undeniably valuable teaching activities, the less likely it becomes that any of the ‘evidence’ you point to will support this nuanced collection of activities as a distinctive ‘method’. But that would be a welcome direction: it actually brings our two stances on what teachers should be doing in the classroom much closer. They should be making situated judgements, using what they know from research and other sources, about the best way to go forward with particular children in a particular context. But this problematises the phonics check, of course (see my original argument and my response to 2).

2. I restricted my comments largely to the phonics check, which by design (as the open letter originally argued) tests the method of synthetic phonics exclusively. The check is also methodologically broken, but I think the letter makes that argument just fine.

3. Here I think quoting out of context puts you in danger of misrepresenting my case. The point is that ‘learning styles’ is an easy target: no-one I know will seriously defend that fad. My case for the complex situated judgement of teachers rests rather on the infinite contingency of the classroom situation, taking in all sorts of factors, including the complex prior experience and awareness of each individual student. You don’t knock that down by knocking down learning styles. I believe you are aware of the nature of a ‘straw man’ argument.

4. We are, at least, agreed on this point. “The phonics check will put teachers under pressure to teach phonics effectively”. This will indeed deter teachers from acting in ways that do not promote (systematic synthetic) phonics knowledge. I think it is important to add: even if those ways of acting might themselves promote literacy.

5. I have argued that teachers engage with educational research; they can and should also conduct their own. I have also argued that they should do so with a sensitivity to the way that academic researchers normally communicate their findings: as a piece of a much bigger and varied endeavour that might shed some light on a particular issue of classroom practice, rather than as warrant for the wholesale imposition of some particular technique.