
Dr Kerry Hempenstall, Senior Industry Fellow, School of Education, RMIT University, Melbourne, Australia.

All my blogs can be viewed on-line or downloaded as a Word file or PDF at https://www.dropbox.com/sh/olxpifutwcgvg8j/AABU8YNr4ZxiXPXzvHrrirR8a?dl=0


 

It’s hardly a revelation to argue that the adoption of evidence-based practice (EBP) in some other professions is far advanced in comparison to its use in education. That’s not to say that professions such as medicine and psychology escaped, in the early stages of EBP’s acceptance, the kind of resistance now displayed by some teacher organizations. However, because these principles have been espoused in medicine and psychology since the early nineties, a new generation of practitioners has been exposed to EBP as the normal standard for practice: their training has emphasized the centrality of evidence in competent practice.

In education, unfortunately, there are few signs of this sequence occurring. Most teachers-in-training are not exposed to either the principles of EBP (unless in a dismissive aside) or to the practices that have been shown to be beneficial to student learning, such as the principles of instructional design and effective teaching, explicit phonological instruction, and student management approaches that might be loosely grouped under the behavioural or cognitive-behavioural banner.

In my view, until educational practice includes EBP as a major determinant of practice, then it will continue to be viewed as an immature profession. It is likely that the low status of teachers in many western countries will continue to be the norm unless and until significant change occurs.

The contribution below is an update on a paper I wrote on the topic of EBP for the Australian Journal of Learning Difficulties:

 Hempenstall, K. (2006). What does evidence-based practice in education mean? Australian Journal of Learning Disabilities, 11(2), 83-92.


What does evidence-based practice in education mean? 

Teaching has suffered both as a profession in search of community respect and as a force for improving a nation’s social capital, because of its failure to adopt the results of empirical research as the major determinant of its practice. There are a number of reasons why this has occurred, among them a science-aversive culture endemic among education policymakers and teacher education faculties. There are signs that major shifts are occurring. There have been strong moves in Great Britain and the USA towards evidence-based practice in education in recent years. Indeed, the movement is likely to be further advanced by the recent edict from the US government’s Office of Management and Budget (Zient, 2012) that requests the entire Executive Branch to use every available means “to promote the use of rigorous evidence in decision-making, program administration, and planning”. Evidence-based practice has influenced many professions in recent years. A simple Google search produces over 73,000,000 hits. Among them, in varying degrees of implementation, are professions as diverse as agriculture, speech pathology, occupational therapy, transport, library and information practice, management, nursing, pharmacy, dentistry, and health care.

Several problems do require attention. The generally low quality of much past educational research has made the process of evaluating the evidence difficult, particularly for those teachers who lack the training to discriminate sound from unsound research designs. Teacher training itself has not empowered teachers with the capacity and motivation to explore how evidence could enhance their effectiveness. Until teachers become more skilled at doing so, the hope has been that bodies such as the What Works Clearinghouse could perform the sifting process, simplifying judgements about which practices have been demonstrated to be effective. However, the strong criteria usually employed in this process have unearthed very few adequately-designed studies from which to make these judgements. Other issues, too, have made this resource less helpful as a readily accessible and trustworthy site.

Teachers are coming under increasing media fire lately: Too many students are failing. Current teachers are not sufficiently well trained. Our brightest young people are not entering the teaching profession. What does that imply about those who are teachers? Are current teachers inadequate to the task entrusted to them? A nation’s future is dependent upon the next generation of students. So, how should we respond as a nation?

Education has a history of regularly adopting new ideas, but it has done so without the wide-scale assessment and scientific research that is necessary to distinguish effective from ineffective reforms. “More typically, someone comes across an idea she or he likes and urges its adoption… often the changes proposed are both single and simple – more testing of students, loosening certification requirements for teachers, or a particular school improvement model” (Levin, p.740).

“Most management decisions are not based on the best available evidence. Instead, practitioners often prefer to make decisions rooted solely in their personal experience. However, personal judgment alone is not a very reliable source of evidence because it is highly susceptible to systematic errors – cognitive and information-processing limits make us prone to biases that have negative effects on the quality of the decisions we make.” (Barends, Rousseau, & Briner, 2014, p.8)

This absence of a scientific perspective has precluded systematic improvement in the education system, and it has impeded growth in the teaching profession for a long time (Carnine, 1995a; Hempenstall, 1996; Marshall, 1993; Stone, 1996). Years ago in Australia, Maggs and White (1982) wrote despairingly "Few professionals are more steeped in mythology and less open to empirical findings than are teachers" (p. 131).

 

Since that time, a consensus has developed among empirical researchers about a number of effectiveness issues in education, and a great deal of attention (Gersten, Chard, & Baker, 2000) is being directed at means by which these research findings can reach fruition in improved outcomes for students in classrooms. Carnine (2000) noted that education appears to be impervious to research on effective practices, and he was one of the first to explore differences between education and other professions, such as medicine, that are strongly wedded to research as the major practice informant.

“Evidence-based practice involves conscientious, explicit, and judicious use of the best available evidence in making decisions (Sackett 2000). Individuals, both laypeople and professionals, typically use some form of evidence in making decisions—if only their past experience. EBP raises the issue of what that evidence is and, in particular, how strong it might be (Barends, et al 2014; Sackett 2000). Evidence-based practitioners seek to improve the quality of the evidence used and condition their decisions and practices on the confidence that the evidence warrants. Importantly, effective EBP practice requires a commitment to continuous practice improvement and lifelong learning (Straus et al 2005).” (Rousseau & Gunia, 2015, p. 5)

“Evidence based practice seeks to improve the way decisions are made. It is an approach to decision making and day to day work practice that helps educators – be it teachers, heads of department or senior leaders to critically evaluate the extent to which they can trust the evidence they have at hand. It also helps educators to identify, find and evaluate additional evidence relevant to their decisions.” (Jones, 2016, p.6)

History

Evidence-based medicine became well known during the 1990s. It enables practitioners to gain access to knowledge of the effectiveness and risks of different interventions, using reliable estimates of benefit and harm as a guide to practice. There is now strong support within the medical profession for this direction, because it offers a constantly improving system that provides better health outcomes for their patients. Thus, increased attention is being paid to research findings by medical practitioners in their dealings with patients and their medical conditions. Practitioners have organisations, such as Medline (http://medline.cos.com) and the Cochrane Collaboration (www.cochrane.org), that perform the role of examining research - employing specific criteria for what constitutes methodologically acceptable studies. They then interpret the findings and provide a summary of the current status of various treatments for various medical conditions. Thus, practitioners have the option of accepting pre-digested interpretations of the research or of performing their own examinations. This latter option presumes that they have the time and expertise to discern high quality from lesser research. Their training becomes a determinant of whether this is likely to occur.

Despite these clear policy changes in medicine, acceptance among practitioners remains less than universal. The first wide-scale audit of Australian healthcare found that 43% of patients do not receive treatments based upon the best available evidence (Runciman et al., 2012). Director of the Whitlam Orthopaedic Research Centre, Professor Ian Harris, noted that fewer than half of the operations performed in Australia have been properly evaluated. "We often know something doesn't work, but out there are thousands and thousands of doctors who have been taught certain procedures and that's all they do. … changing of clinician beliefs and behaviour, even in the face of credible evidence, remains highly challenging (p.5)" (Medew, 2012).

Funding for research is also a critical element in evidence-based practice. Whilst health and education consume similar amounts of the federal budget, research funding for health is much greater than that for education. The US Department of Education spends about $80 million annually on educational research; whereas, the Department of Health and Human Services provides about $33 billion for health research (The Haan Foundation, 2012). This means that relatively few educational studies are of the randomised controlled trial type most favoured in the sciences, as such trials can be fearsomely expensive.

In an initiative similar to that taken in medicine during the 1990s, the American Psychological Association (Chambless & Ollendick, 2001) introduced the term empirically supported treatments as a means of highlighting differential psychotherapy effectiveness. Prior to that time, many psychologists saw themselves as developing a craft in which competence arises through a combination of personal qualities, intuition, and experience. The result was extreme variability of effectiveness among practitioners.

Their idea was to devise a means of rating therapies for various psychological problems, and for practitioners to use these ratings as a guide to practice. The criteria for a treatment to be considered well-established included efficacy through two controlled clinical outcomes studies or a large series of controlled single case design studies, the availability of treatment manuals to ensure treatment fidelity, and the provision of clearly specified client characteristics. A second level involved criteria for probably-efficacious treatments. These criteria required fewer studies, and/or a lesser standard of rigor. The third category comprised experimental treatments, those so far without sufficient evidence to achieve a higher status.

The American Psychological Association’s approach to empirically supported treatments could provide a model adaptable to the needs of education. There are great potential advantages to the education system when perennial questions are clearly answered. What reading approach is most likely to evoke strong reading growth? Should "social promotion" be used or should retention in one's grade be the norm when a year is failed? Would smaller class sizes make a difference? Should summer school programs be provided to struggling students? Should kindergarten be full day? What are the most effective means of providing remediation to children who are falling behind? Even in psychology and medicine, however, it should be noted that 15 years later there remain pockets of voluble opposition to the evidence-based practice initiatives.

The first significant indication of a similar movement in education occurred with the Reading Excellence Act (The 1999 Omnibus Appropriations Bill, 1998) that was introduced as a response to the unsatisfactory state of reading attainment in the USA. It acknowledged that part of the cause was the prevailing method of reading instruction, and that literacy policies had been insensitive to developments in the understanding of the reading process. The Act, and its successors, attempted to bridge the gulf between research and classroom practice by mandating that only programs in reading that had been shown to be effective according to strict research criteria would receive federal funding. This reversed a trend in which the criterion for adoption of a model was that it met preconceived notions of “rightness” rather than that it was demonstrably effective for students. Federal funding was to be provided only for the implementation of programs with demonstrated effectiveness - evidenced by reliable, replicable research.

Reliable replicable research was defined as objective, valid, scientific studies that: (a) include rigorously defined samples of subjects that are sufficiently large and representative to support the general conclusions drawn; (b) rely on measurements that meet established standards of reliability and validity; (c) test competing theories, where multiple theories exist; (d) are subjected to peer review before their results are published; and (e) discover effective strategies for improving reading skills (The 1999 Omnibus Appropriations Bill, 1998).

A term sometimes used as a synonym for evidence-based is research-based. It is important that the definition of research-based be analysed, as in some contexts it represents a weaker standard. The definition of evidence-based includes the criterion that a program has been tested in the appropriate population and has been found to be effective. Sometimes research-based programs have not met this criterion, but have simply been constructed, based on components that have been shown to be effective in other programs.

However, the components are only the ingredients for success in evidence-based programs, and copying some or all components might not lead to success. Having all the right culinary ingredients doesn’t guarantee a perfect soufflé. There are other issues, such as what proportion of each ingredient is optimal, when should they be added, how much stirring, heating, cooling are necessary? Errors in any of these requirements lead to sub-optimal outcomes.

So it is with literacy programs. “Yet there is a big difference between a program based on such elements and a program that has itself been compared with matched or randomly assigned control groups” (Slavin, 2003). That a program includes some or all of the elements doesn’t necessarily mean that it will be effective. Engelmann (2003) points to the logical error of inferring a whole based upon the presence of some or all of its elements. Consider the logical error involved in this argument: If a dog is a Dalmatian, it has spots. Therefore, if a dog has spots, it is a Dalmatian (Engelmann, 2003). In this analogy, the Dalmatian represents programs known to be effective with students. It is possible to analyse these programs, determine their characteristics, and then assume incorrectly that the mere presence of those characteristics is sufficient to ensure effectiveness. Engelmann is thus critical of merely “research-based” programs, that is, programs constructed only to ensure each respected component is somewhere represented. He points out that this does not guarantee effectiveness.

So for a true measure, we must look also for empirical studies to show that a particular combination of theoretically important elements is indeed effective in practice.

There is an interesting expansion of this point in a piece entitled Scientifically Based Research vs. Evidence Based Practices and Instruction at http://fliphtml5.com/mkjt/abpr/basic

[Infographic from that piece: Scientifically Based Research vs. Evidence Based Practices and Instruction]

In England, similar concerns about approaches lacking in evidence produced the National Literacy Strategy (Department for Education and Employment, 1998) that mandated teaching approaches based upon research findings. For example:

“There must be systematic, regular, and frequent teaching of phonological awareness, phonics and spelling” (National Literacy Strategy, 1998, p.11).

In practice, this edict suffered from strong resistance from within the education industry (e.g., teacher education, publishers, whole language protagonists, teacher professional associations) and did not achieve its objectives. It was also compromised by its support for the three-cueing system, known in England as the Searchlight model. Following the influential Rose Report (2006), a new, even more directive approach, known as the Primary National Strategy (2006), was instituted across the nation.

In Australia, the National Inquiry into the Teaching of Literacy (2005) reached similar conclusions about the proper role of educational research. The Australian Government’s Review of Funding for Schooling Panel (2011) bemoaned the lack of an evidence basis for educational programs and the absence of evaluation of the programs’ effects on learning (Nous Group, 2011).

Slavin (2002) argued that the decision to require evidence prior to program adoption would reduce the pendulum swings that had characterized education thus far, and could produce revolutionary consequences in reducing the wide range of educational achievement differences across our community wrought by teacher and program variability.

The National Research Council's Center for Education (Towne, 2002) suggested that educators should attend to research that (a) poses significant questions that can be investigated empirically; (b) links research to theory; (c) uses methods that permit direct investigation of the question; (d) provides a coherent chain of rigorous reasoning; (e) replicates and generalizes; and (f) ensures transparency and scholarly debate. The Council’s message was clearly to improve the quality of educational research, and reaffirm the link between scientific research and educational practice. Ultimately, the outcomes of sound research should inform educational policy decisions, just as a similar set of principles had been espoused for the medical profession. The fields that have displayed unprecedented development over the last century, such as medicine, technology, transportation, and agriculture have been those embracing research as the prime determinant of practice (Shavelson & Towne, 2002).

So, evidence-based practices are: “ … practices that are supported by multiple, high-quality studies that utilize research designs from which causality can be inferred and that demonstrate meaningful effects on student outcomes” (Cook & Cook, 2011, p. 73).

Similarly, in Australia in 2005, the National Inquiry into the Teaching of Literacy asserted that “teaching, learning, curriculum and assessment need to be more firmly linked to findings from evidence-based research indicating effective practices, including those that are demonstrably effective for the particular learning needs of individual children” (p.9). It recommended a national program to produce evidence-based guides for effective teaching practice, the first of which was to be on reading.

“Recommendation 5: The committee recommends that the Minister take up with Universities Australia the need to encourage a more rigorous and evidence-based approach to the preparation of trainee teachers in regard to literacy and mathematics method” (p.64).

In all, the Report used the term evidence-based 48 times. Unfortunately, in Australia, this potentially game-changing report has never been adopted by any government.

So, the implication is that education and research are not adequately linked in Australia. Why has education been so slow to attend to research as a source of practice knowledge? Carnine (1991) argued that the leadership has been the first line of resistance. He described educational policy-makers as lacking a scientific framework, and thereby inclined to accept proposals based on good intentions and unsupported opinions. Professor Cuttance, director of Melbourne University's Centre for Applied Educational Research, was equally blunt: “Policy makers generally take little notice of most of the research that is produced, and teachers take even less notice of it” (Cuttance, 2005, p.5).

A recent study highlighted other potential hurdles within organisations. Callen et al. (2017) identified three barriers that policymakers must overcome in order to use the evidence that development researchers produce. First, their ability to interpret evidence was found to be lacking: neither the policymakers nor their department staff were adept at analysing or interpreting data. Second, though they reported a belief in the potential value of consulting research, their organisational culture demanded decisions too quickly to enable careful analysis, and little value was placed on research at senior levels, where many were resistant to any change. Finally, decisions about the value of employing research findings tended to depend on whether those findings were consistent with the policy-makers' prior beliefs.

Carnine (1995b) pointed to teachers’ lack of training in seeking out and evaluating research for themselves. The teacher training institutions have not developed a research culture, and tend to view teaching as an art form - in which experience, personality, intuition, and creativity are the sole determinants of practice. For example, he estimated that fewer than one in two hundred teachers are experienced users of the ERIC educational database.

“The findings indicated that ...evidence-based interventions including explicit instruction, cognitive strategy instruction, content enhancements, and independent practice opportunities were reported infrequently. … Finally, universities, school districts, and educational service centers are encouraged to provide sustained professional development in strategies that contribute to independent learning and RTI to reduce the research to practice gap in special education.” (Ciullo et al., 2016, p. 44-45)

“Key Findings regarding teacher educators’ views on education

— They are far more likely to believe that the proper role of teacher is to be a "facilitator of learning" (84 percent) not a "conveyor of knowledge" (11 percent).

— Asked to choose between two competing philosophies of the role of teacher educator, 68 percent believe preparing students "to be change agents who will reshape education by bringing new ideas and approach to the public schools" is most important; just 26 percent advocate preparing students "to work effectively within the realities of today's public schools."

 — Only 24 percent believe it is absolutely essential to produce "teachers who understand how to work with the state's standards, tests, and accountability systems."

— Just 39 percent found it absolutely essential "to create teachers who are trained to address the challenges of high-needs students in urban districts." Just 37 percent say it is absolutely essential to focus on developing "teachers who maintain discipline and order in the classroom."

— The vast majority of education professors (83 percent) believe it is absolutely essential for public school teachers to teach 21st century skills, but just 36 percent say the same about teaching math facts, and 44 percent about teaching phonics in the younger grades. (Farkas & Duffett, 2011, p. 8-9)

Taking a different perspective, Meyer (1991, cited in Gable & Warren, 1993) blamed the research community for being too remote from classrooms. She argued that teachers will not become interested in research until its credibility is improved. Research is often difficult to understand, and the careful scientific language and cautious claims may not have the same impact as the wondrous claims of ideologues and faddists unconstrained by scientific ethics.

Fister and Kemp (1993) considered several obstacles to research-driven teaching, important among them being the absence of an accountability link between decision-makers and student achievement. Such a link was unlikely until recently, when regular mandated state or national test program results became associated with funding. They also apportioned some responsibility to the research community for failing to appreciate the necessity of adequately connecting research with teachers’ concerns. The specific criticisms included a failure to take responsibility for communicating findings clearly, and with the end-users in mind. Researchers have often validated practices over too brief a time-frame, and in too limited a range of settings to excite general program adoption across settings. Unless the organizational ramifications (such as staffing and personnel costs) are considered adequately, the viability of even the very best intervention cannot be guaranteed. The methods of introduction and staff training in innovative practices can also have a marked bearing on their adoption and continuation.

Woodward (1993) pointed out that there is often a culture gulf between researchers and teachers. Researchers may view teachers as unnecessarily conservative and resistant to change; whereas, teachers may consider researchers as unrealistic in their expectations and lacking in understanding of the school system and culture. Teachers may also respond defensively to calls for change because of the implied criticism of their past practices, and the perceived devaluation of the professionalism of teachers. Leach (1987) argued that collaboration between change-agents and teachers is a necessary element in the acceptance of changes to practice. In his view, teachers need to be invited to make a contribution that extends beyond solely the implementation of the ideas of others. There are some signs that such a culture may be in the early stages of development. Viadero (2002) reported on a number of initiatives in which teachers have become reflective of their own work, employing both quantitative and qualitative tools. She also noted that the American Educational Research Association has a subdivision devoted to the practice.

Some have argued that science has little to offer education, and that teacher initiative, creativity, and intuition provide the best means of meeting the needs of students. For example, Weaver considered that scientific research offers little of value to education (Weaver et al., 1997): “It seems futile to try to demonstrate superiority of one teaching method over another by empirical research” (Weaver, 1988, p.220). These writers often emphasise the uniqueness of every child as an argument against instructional designs that presume there is sufficient commonality among children to enable effective group instruction employing the same materials and techniques. Others have argued that teaching itself is ineffectual when compared with the impact of socioeconomic status and social disadvantage (Coleman et al., 1966; Jencks et al., 1972). Smith (1992) argued that only the relationship between a teacher and a child was important in evoking learning. Further, he downplayed instruction in favour of a naturalist perspective: “Learning is continuous, spontaneous, and effortless, requiring no particular attention, conscious motivation, or specific reinforcement” (p.432). Still others view research as reductionist, and unable to encompass the wholistic nature of the learning process (Cimbricz, 2002; Poplin, 1988).

What sorts of consequences have arisen in other fields from failure to incorporate the results of scientific enquiry?

Galileo observed moons around Jupiter in 1610. Francesco Sizi’s armchair refutation of such planets was: There are seven windows in the head, two nostrils, two ears, two eyes and a mouth. So in the heavens there are seven - two favourable stars, two unpropitious, two luminaries, and Mercury alone undecided and indifferent. From which and many other similar phenomena of nature such as the seven metals, etc we gather that the number of planets is necessarily seven...We divide the week into seven days, and have named them from the seven planets. Now if we increase the number of planets, this whole system falls to the ground...Moreover, the satellites are invisible to the naked eye and therefore can have no influence on the earth and therefore would be useless and therefore do not exist (Holton & Roller, 1958, as cited in Stanovich, 1996, p.9).

Galileo taught us the value of controlled observation, whilst Sizi unintentionally highlighted the limitations of armchair theorising.

The failure to incorporate empirical findings into practice can have far-reaching consequences. Even medicine has had only a relatively brief history of attending to the findings of research. Early in the 20th century, medical practice was at a similar stage to that of education currently. For example, it was well known that bacteria played a critical role in infection, and 50 years earlier Lister had shown the imperative of antiseptic procedures in surgery. Yet, in this early period of the century, surgeons during operations were still wiping instruments on whatever unsterilised cloth was handy, with dire outcomes for their patients.

More recently, advice from paediatrician Doctor Benjamin Spock to have infants sleep face down in their cots caused approximately sixty thousand deaths from Sudden Infant Death Syndrome in the USA, Great Britain and Australia between 1974 and 1991, according to researchers from the Institute of Child Health in London (Dobson & Elliott, 2005). His advice was not based upon any empirical evidence, but rather armchair analysis. The book, Baby and Child Care (Spock, 1946), was extraordinarily influential, selling more than 50 million copies. Yet, while the book continued to espouse this practice, reviews of risk factors for SIDS by 1970 had noted the risks for infants in the practice of having them sleep face down. In the 1990s, when public campaigns altered this practice, the incidence of SIDS deaths halved within one year. In recent times, more and more traditional medical practices are being subjected to empirical test as the profession continues to build its credibility.

Are there examples in education in which practices based solely upon belief, unfettered by research support, have been shown to be incorrect - but have led to unhelpful and counter-productive teaching?

  • Learning to read is as natural as learning to speak (National Council of Teachers of English, 1999).
  • Children do not learn to read in order to be able to read a book, they learn to read by reading books (NZ Ministry of Education, as cited in Mooney, 1988).
  • Parents reading to children is sufficient to evoke reading (Fox, 2005).
  • Good readers skim over words rather than attending to detail (Goodman, 1985).
  • Fluent readers identify words as ideograms (Smith, 1973).
  • Skilled reading involves prediction from context (Emmitt, 1996).
  • English is too irregular for phonics to be helpful (Smith, 1999).
  • Accuracy is not necessary for effective reading (Goodman, 1974).
  • Good spelling derives simply from the act of writing (Goodman, 1989).
  • Attending to students’ learning styles improves educational outcomes (Carbo & Hodges, 1988; DEECD, 2012b; Dunn & Dunn, 1987).

These assertions have influenced educational practice for more than 30 years, yet each has been shown by research to be either incorrect or unsupported (Hempenstall, 1999). The consequence has been an unnecessary burden upon struggling students to manage the task of learning to read. Not only have they been denied helpful strategies, but they have been encouraged to employ moribund strategies. Consider this poor advice from a newsletter to parents at a local school:

If your child has difficulty with a word: Ask your child to look for clues in the pictures. Ask your child to read on or reread the passage and try to fit in a word that makes sense. Ask your child to look at the first letter to help guess what the word might be.

This approach reflects adherence to the discredited whole language model, a model that has never had research support, but was based upon a philosophically-based belief system. Its lack of effectiveness as an instructional system was made clear in Hattie's work: in his now famous analysis of 800 meta-analyses, he concluded that the effect size for whole language was a negligible 0.06. He notes that unless an effect size of at least 0.4 can be attained, an intervention is not worthwhile. This is a damning finding, and consistent with earlier analyses, such as that by Lloyd (2006).

When unsupported belief guides practice, we risk inconsistency at the individual teacher level and disaster at the education system level.

There are at least three groups with whom researchers need to be able to communicate if their innovations are to be adopted. At the classroom level, teachers are the focal point of such innovations and their competent and enthusiastic participation is required if success is to be achieved. At the school administration level, principals are being given increasing discretion as to how funds are to be disbursed; therefore, time spent in discussing educational priorities, and cost-effective means of achieving them may be time well-spent, bearing in mind Gersten and Guskey's (1985) comment on the importance of strong instructional leadership. At the broader system level, decision makers presumably require different information, and assurances about the viability of change of practice.

Perhaps because of frustration at the problems experienced in ensuring effective practices are employed across the nation, we are seeing a top-down approach, in which evidence-based educational practices are either mandated, as in Great Britain (the Primary National Strategy, 2006), or made a pre-requisite for funding, as in the 2001 No Child Left Behind Act (U.S. Department of Education, 2002). Whether this approach will be successful in changing teachers’ practice remains to be seen. In any case, there remains a desperate need to address teachers’ and parents’ concerns regarding classroom practice in a cooperative and constructive manner.

In Australia, pressure for change is building, and the view of teaching as a purely artisan activity is being challenged. Reports such as that by the National Inquiry into the Teaching of Literacy (2005) have urged education to adopt the demeanour and practice of an evidence-based profession. State and national testing has led to greater transparency of student progress, and, thereby, to increased public awareness. Government budgetary vigilance is greater than in the past, and measurable outcomes are the expectation from a profession that has not previously appeared enthused by formal testing. A further possible spur occurred when a Melbourne parent successfully sued a private school for a breach of the Trade Practices Act (Rood & Leung, 2006). She argued that it had failed to deliver on its promise to address her son's reading problems. Reacting to these various pressures, in 2005 the National Institute for Quality Teaching and School Leadership began a process for establishing national accreditation of pre-service teacher education. The Australian Council for Educational Research is currently evaluating policies and practices in pre-service teacher education programs in Australia. The intention is to raise and monitor the quality of teacher education programs around the nation.

There is another stumbling block to the adoption of evidence-based practice. Is the standard of educational research generally high enough to enable sufficient confidence in its findings? Broadly speaking, some areas (such as reading) invite confidence; whereas, the quality of research in other areas cannot dispel uncertainty. Partly, this is due to a preponderance of short-term, inadequately designed studies. When Slavin (2004) examined the American Educational Research Journal over the period 2000-2003, only 3 out of 112 articles reported experimental/control comparisons in randomized studies with reasonably extended treatments. The National Reading Panel (2000) selected research from the approximately 100,000 reading research studies that had been published since 1966, and another 15,000 published before that time. The Panel selected only experimental and quasi-experimental studies, and among those considered only studies meeting rigorous scientific standards in reaching its conclusions:

  • Phonemic awareness: of 1,962 studies, 52 met the research methodology criteria.
  • Phonics: of 1,373 studies, 38 met the criteria.
  • Guided oral reading: of 364 studies, 16 met the criteria.
  • Vocabulary instruction: of 20,000 studies, 50 met the criteria.
  • Comprehension: of 453 studies, 205 met the criteria.

So, there is certainly a need for educational research to become more rigorous in future.
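To make those selection figures concrete, here is a minimal Python sketch that computes the proportion of located studies surviving the Panel's methodological screen. The counts are exactly those quoted above; only the variable names are mine:

```python
# Acceptance rates implied by the National Reading Panel (2000) figures:
# (studies meeting the methodological criteria, studies located).
nrp_screen = {
    "Phonemic awareness": (52, 1962),
    "Phonics": (38, 1373),
    "Guided oral reading": (16, 364),
    "Vocabulary instruction": (50, 20000),
    "Comprehension": (205, 453),
}

for area, (met, located) in nrp_screen.items():
    print(f"{area}: {met}/{located} = {100 * met / located:.1f}%")
```

In four of the five areas, fewer than five percent of the published studies were sound enough to inform the Panel's conclusions.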

Further, some argue that the quality of educational research remains below par because relatively few educational studies are of the randomised controlled trial type most favoured in the sciences, due to their high cost and the difficulty of creating random assignment in educational settings. However, there is increasing acceptance of the value of quasi-experimental designs in producing legitimate findings of causality:

“This article discussed four of the strongest quasi-experimental designs for causal inference when randomized experiments are not feasible. … the estimates of quasi-experimental designs—which exploit naturally occurring selection processes and real-world implementations of the treatment—are frequently better generalizable than the results from a controlled laboratory experiment.” (Kim & Steiner, 2016, p.404)

In the areas in which confidence is justified, how might we weigh the outcomes of empirical research? Stanovich and Stanovich (2003) proposed that competing claims to knowledge should be evaluated according to three criteria. First, findings should be published in refereed journals. Second, the findings should have been replicated by independent researchers with no particular stake in the outcome. Third, there should be a consensus within the appropriate research community about the reliability and validity of the various findings – the converging evidence criterion. Although the use of these criteria does not produce infallibility, it does offer better consumer protection against spurious claims to knowledge. Without research as a guide, education systems are prey to all manner of gurus, publishing house promotions, and ideologically-driven zealots. Gersten (2001) laments that teachers are "deluged with misinformation" (p. 45).

Unfortunately, education courses have not provided teachers with sufficient understanding of research design to enable the critical examination of research. In fact, several whole language luminaries (prominent influences in education faculties over the past 20 years) argued that research was unhelpful in determining practice (Hempenstall, 1999). Teachers-in-training need to be provided with a solid understanding of research design to adapt to the changing policy emphasis (National Inquiry into the Teaching of Literacy, 2005). For example, in medicine, psychology, and numerous other disciplines, randomized controlled trials are considered the gold standard for evaluating an intervention’s effectiveness. Training courses in these professions include a strong emphasis on empirical research design. There is much to learn about interpreting other forms of research too (U.S. Department of Education, 2003). In education, however, there is evidence that the level of quantitative research preparation has diminished in teacher education programs over the past twenty years (Lomax, 2004).

"Learning the basics of how research works is important, not because every teacher should be a researcher, but because it allows teachers to be critical consumers of the new research findings that will come out during the many decades of their career. It also means that some of the barriers to research, that arise from myths and misunderstandings, can be overcome. In an ideal world, teachers would be taught this in basic teacher training, and it would be reinforced in Continuing Professional Development, alongside summaries of research.” (Goldacre, 2013)

“Unfortunately, quality training in research-based principles, tactics, and components is not commonplace. For instance, in a survey conducted by the National Council on Teacher Quality of 72 teacher education programs (Walsh, Glaser, & Wilcox, 2006), only 15% of them taught all five components of successful reading instruction (National Reading Panel [NRP], 2000) and almost half of the programs taught none of them. Similar results were found for preparation programs in special education (Reschly, Holdheide, Smartt, & Oliver, 2007)” (Slocum, Spencer, & Detrich, 2012, p.174)

Further, without an appreciation of the evidence basis behind some instructional activities, it is possible that valuable interventions will be dismissed.

“Baker, Gersten, Dimino, and Griffiths (2004) found that knowledge of a practice’s underlying principles distinguished between teachers who were high sustainers and teachers characterized as moderate sustainers. The results of these studies suggest that when teachers lack an understanding of research-based principles that allow effective adaptation, interventions may be prematurely discarded and practitioners may conclude that research has little relevance to their practice (Gersten, Vaughn, Deshler, & Schiller, 1997)” (Slocum, Spencer, & Detrich, 2012, p.172)

The adoption of evidence-based practice may also reduce the wasted efforts to which education has been prone. There have always been programs lacking in evidence; however, recent years have seen the rise of programs claiming falsely to be based upon neuroscience. Unfortunately, too many schools and school systems have fallen for the dramatic claims behind pseudo-science based programs, such as Brain Gym:

"Debunking the neuro-myths that surround teaching is an important endeavour as unchecked they can pervade classrooms throughout the country, damaging educational achievement. A decade ago, the neuro-myth of Brain Gym was prevalent in England’s schools. In schools afflicted by Brain Gym, pupils were instructed to activate their brains by rubbing so-called ‘brain buttons’, located in different areas of the body. By having pupils rub their clavicle, various regions of the brain would light up - so went the theory. In the oddest cases, pupils were instructed to slowly sip water in the hope that water would be absorbed into the brain via the roof of the mouth, thus hydrating the brain! However biologically illiterate this practice may seem to us now, it demonstrates the importance of having a knowledgeable and research-informed profession inoculated from falling victim to this nonsense. We live in an era of unrivalled technical and scientific enlightenment. But in England, in the 21st century, we have seen teachers taking into account the imagined learning styles of their pupils - such as visual, auditory and kinaesthetic - which is both a waste of effort and can have a negative effect on pupils, according to the Education Endowment Foundation." (Gibb, 2017)

“More than 80% of the teachers surveyed agreed with statements such as the following: “Individuals learn better when they receive information in their preferred learning style (for example, auditory, visual, kinesthetic)”; “Differences in hemispheric dominance (left brain, right brain) can help explain individual differences among learners”; and “Short bouts of coordination exercises can improve integration of left and right hemispheric brain function” (Weigmann, 2013, p.136).

“The shift toward an evidence-based special education was partly a response to the intrusion of fad, pseudoscientific, and unproven interventions that have plagued the field for decades (Kozloff, 2005). … Special education professionals are charged with using evidence-based practices, but various unproven, disproven, and pseudoscientific interventions continue to proliferate. Unproven and ineffective interventions emerge and are adopted for various reasons. Ineffective interventions are inevitably harmful and require professionals to adopt a conservative approach that both minimizes potential for harm and maximizes potential for educational benefit. This is fundamental to the evidence-based movement, but special education professionals may not recognize and avoid ineffective interventions. … Despite advances in the evidence-based special education movement, unproven, disproven, and pseudoscientific interventions have continued to proliferate throughout the field. A stroll through the exposition at many special education conferences reveals vast numbers of booths promoting questionable and downright ridiculous “solutions” to a host of teaching and learning challenges.” (Travers, 2017, p.195)

This is not to say that neuroscience has nothing to offer education - only that we should proceed with caution:

“According to these critics, neuroscience and education are currently too far removed from each other in goals and methodologies to allow meaningful and productive collaboration and that attempts to bring them together are often misguided (Bruer, 1997; Cubelli, 2009; Willingham, 2009). Even those who have been enthusiastic to bring neuroscientific insights to educational practice have acknowledged the gulf that exists between the two disciplines and advocated a role for separate intermediary communicators and interpreters (Goswami, 2006) or ‘neuroeducators’ and ‘education engineers’ (Fischer, 2009) to bridge the gap and thus facilitate cross-disciplinary understanding. Another attempt to ‘bridge the gap’ between neuroscience and education is exemplified by the establishment of cross-disciplinary centres, such as the Australian Science of Learning Centre (2013), which brings together neuroscientists, cognitive psychologists, educationalists and teachers to work on common research projects. Rather than have intermediaries or ‘education engineers’ performing a post hoc translation from neuroscience research to education, the aim of the Science of Learning Centre is to foster a more radical ‘bottom-up’ integration of disciplines.” (Morris & Sap, 2016, p.150-151)

A practical issue, even with educational practices of demonstrable value, is ensuring that they are implemented faithfully. As indicated earlier, the evaluations that led to interventions earning the label evidence-based were predicated on implementers presenting the interventions precisely as written. One risks a program's effectiveness if alterations are made.

"Fidelity of implementation, or implementing a standardized, research-based intervention as designed and intended, is critical to ensure that students who have been identified as responding inadequately are true inadequate responders. Without high fidelity, it is unclear what the effects on the students will be (Hill, King, Lemons, & Partanen, 2012).” (Austin, Vaughn, & McClelland, 2017, p.2)

“Increasing the likelihood of teachers implementing research-based strategies in authentic school settings is a major goal of education leaders. Likewise, decreasing the variability of instruction practices and increasing fidelity of implementation to models of instruction and intervention is particularly difficult (Gersten, Chard, & Baker, 2000; Gresham, MacMillan, Beebe-Frankenberger, & Bocian, 2000).” (Fien et al., 2015)

“Approaches, practices and interventions delivered in real-world school and classroom settings often look different from what was originally intended. Principals and teachers may decide to adapt elements of a program, and barriers in the school system may prevent an approach from being fully realised. What this shows is the importance of the quality of the implementation in affecting learning gains, rather than the program itself. Implementation strategies such as training and ongoing teacher support are important to consider in efforts to encourage positive student outcomes.” (Vaughan & Albers, 2017)

Are there any immediate shortcuts to discerning the gold from the dross? If so, where can one find the information about any areas of consensus? Those governments that have moved toward a pivotal role for research in education policy have usually formed panels of prestigious researchers to peruse the evidence in particular areas, and report their findings widely (e.g., National Reading Panel, 2000). They assemble all the methodologically acceptable research, and synthesise the results, using statistical processes such as meta-analysis, to enable judgements about effectiveness to be made. It involves clumping together the results from many studies to produce a large data set that reduces the statistical uncertainty that inevitably accompanies single studies.
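As an illustration of that pooling step, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis, the simplest form of the statistical process referred to above. The effect sizes and standard errors are hypothetical, not drawn from any panel's review:

```python
# A minimal fixed-effect (inverse-variance) meta-analysis.
# Each tuple is (effect size d, standard error); the values are hypothetical.
studies = [
    (0.55, 0.20),
    (0.72, 0.15),
    (0.40, 0.25),
]

# Weight each study by the inverse of its variance: larger, more precise
# studies contribute more to the pooled estimate.
weights = [1 / se ** 2 for _, se in studies]
pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect size: {pooled_d:.2f} (SE = {pooled_se:.2f})")
```

The pooled standard error comes out smaller than that of any single study, which is precisely the reduction in statistical uncertainty that meta-analysis offers.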

So, the recommendations for practice produced by these bodies are valuable resources in answering the question: what works? These groups include the National Reading Panel, the American Institutes for Research, the National Institute for Child Health and Human Development, the What Works Clearinghouse, and the Coalition for Evidence-Based Policy. A fuller list with web addresses can be found in the appendix. As an example, Lloyd (2006) summarises a number of such meta-analyses for some approaches. In this method, an effect size of 0.2 is considered small, 0.5 a medium effect, and 0.8 a large effect (Cohen, 1988):

  • Early intervention programs: 74 studies, 215 effect sizes, overall effect size (ES) = 0.6.
  • Direct Instruction (DI): 25 studies, 100+ effect sizes, overall ES = 0.82.
  • Behavioural treatment of classroom problems of students with behaviour disorder: 10 studies, 26 effect sizes, overall ES = 0.93.
  • Whole language: 180 studies, 637 effect sizes, overall ES = 0.09.
  • Perceptual/motor training: 180 studies, 117 effect sizes, overall ES = 0.08.
  • Learning styles: 39 studies, 205 effect sizes, overall ES = 0.14.
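For readers unfamiliar with the metric, the effect size underpinning these benchmarks (Cohen's d) is the difference between treatment and control group means, expressed in pooled standard deviation units:

```latex
d = \frac{\bar{x}_{T} - \bar{x}_{C}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_{T}-1)\,s_{T}^{2} + (n_{C}-1)\,s_{C}^{2}}{n_{T}+n_{C}-2}}
```

So Lloyd's figure of 0.82 for Direct Instruction means that the average DI student outscored the average control student by about four-fifths of a standard deviation.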

These sources can provide great assistance, but they can also confuse, as they do not all agree on which studies should be included in their meta-analyses. For example, Hattie’s analysis of Direct Instruction studies revealed strong effects for regular students (d=0.99) and for special education and lower ability students (d=0.86); effects higher for reading (d=0.89) than for mathematics (d=0.50); similar for lower-level word attack (d=0.64) and higher-level comprehension (d=0.54); and similar for elementary and high school students. In contrast, the Coalition for Evidence-Based Policy does not include Direct Instruction among its list of evidence-based approaches because of its perception of a lack of long-term effect studies. The What Works Clearinghouse rejects most of the Direct Instruction studies as not meeting its criteria for methodological soundness, and ignores those older than 20 years or so. There has also been criticism (Briggs, 2008; Slavin, 2008) of some of the WWC decisions, in particular, inconsistency in applying standards for what constitutes acceptable research. Thus, the large scale reviews have their own issues to deal with before they can be unquestioningly accepted. It may also be quite some time before gold-standard research reaches the critical mass needed to make decisions about practice easier. It is also arguable whether education can ever have randomised controlled trials as standard.

Of course, it is not only the large scale, methodologically sophisticated studies that are worthwhile. A single study involving a small number of schools or classes may not be conclusive in itself, but many such studies, preferably done by many researchers in a variety of locations, can add some confidence that a program's effects are valid (Slavin, 2003). If one obtains similar positive benefits from an intervention across different settings and personnel, there is added reason to prioritise the intervention for a large gold-standard study.

Taking an overview, there are a number of options available to create educational reform. One involves the use of mandate, as with education policy in England. Another option involves inveigling schools with extra money, as in the USA beginning with the No Child Left Behind Act (U.S. Department of Education, 2002). Still another is to inculcate skills and attitudes during teacher training. Whilst these are not mutually exclusive options, the third appears to be a likely component of any reform movement in Australia, given the establishment and objectives of the National Institute for Quality Teaching and School Leadership (2005).

All this is not to suggest that the answers to educational problems have all been determined. There is much work to do in educational research, even in areas such as literacy where so much has been achieved. Other curriculum areas, such as maths and science, have been investigated far less frequently and thoroughly. Even in literacy, there is much yet to be realised in how to scale up research findings that are often developed in individual and small group settings, and in relatively short-term studies, with few offering long-term follow-up to check for effects fading over time.

“These articles suggest a number of areas where we need to strengthen our training and scholarship as we approach the next decade of research in education science. First, we need to reconsider the interventions we study—ensuring that they are focusing on “trifecta skills” with the largest potential for “cascade” effects and aimed at populations where such interventions have the opportunity to produce population-level impact. Second, we need better evaluation designs—designs that allow us to examine treatment heterogeneity, cross-time environmental influences, and the long-term cost benefit of shorter term programs. And, finally, we need our training and scholarship to draw from a wide range of fields outside of education science—including developmental science, ecological theory, prevention science, public health, and medicine. This is an ambitious agenda, but one that is critical if we are to succeed at changing the education (and life) trajectories of children. Having built a foundation over the last decade in rigorously assessing educational interventions, education science is now well-positioned to take on this next challenge.” (Morris & Reardon, 2017, p.6)

 

A prediction for the future, perhaps 15 years hence? Instructional approaches will need to produce evidence of measurable gains before being allowed within the school curriculum system. Education faculties will have changed dramatically as a new generation takes control. Education courses will include units devoted to evidence-based practice, perhaps through an increased liaison with educational and cognitive psychology, speech pathology, instructional designers, neuroscience, and special education. Young teachers will routinely seek out and collect data regarding their instructional activities. They will become scientist-practitioners in their classrooms. Problems in learning will be anticipated and detected early, student progress will be regularly monitored, and addressed systematically. Overall rates of student failure will fall. Optimistic? Of course!

More so than any generation before them, the child born today should benefit from rapid advances in the understanding of human development, and of how that development may be optimised. There has been an explosion of scientific knowledge about the individual in genetics and the neurosciences, but also about the role of environmental influences such as socio-economic status, early child rearing practices, effective teaching, and nutrition. However, to this point, there is little evidence that these knowledge sources form a major influence on policy and practice in education. There is a serious disconnect between the accretion of knowledge and its acceptance and systematic implementation for the benefit of this growing generation. Acceptance of a pivotal role for empiricism is actively discouraged by advisors to policymakers, whose ideological position decries any influence of science. There are unprecedented demands on young people to cope with an increasingly complex world. It is one in which the sheer volume of information, and the sophisticated persuasion techniques, to which they will be subjected may overwhelm the capacities that currently fad-dominated educational systems can provide for young people. A recognition of the proper role of science in informing policy is a major challenge for us in aiding the new generation. This perspective does not involve a diminution of the role of the teacher, but rather the integration of professional wisdom with the best available empirical evidence in making decisions about how to deliver instruction (Whitehurst, 2002).

"Evidence-based policies have great potential to transform the practice of education, as well as research in education. Evidence based policies could finally set education on the path toward the kind of progressive improvement that most successful parts of our economy and society embarked upon a century ago. With a robust research and development enterprise and government policies demanding solid evidence of effectiveness behind programs and practices in our schools, we could see genuine, generational progress instead of the usual pendulum swings of opinion and fashion. This is an exciting time for educational research and reform. We have an unprecedented opportunity to make research matter and to then establish once and for all the importance of consistent and liberal support for high-quality research. Whatever their methodological or political orientations, educational researchers should support the movement toward evidence-based policies and then set to work generating the evidence that will be needed to create the schools our children deserve." (Slavin, 2002, p.20)

References

Austin, C.R., Vaughn, S., & McClelland, A.M. (2017). Intensive reading interventions for inadequate responders in Grades K–3: A synthesis. Learning Disability Quarterly, 1–20.

Barends, E.G., Rousseau, D.M., & Briner, R.B. (2014). Evidence-based management: The basic principles. Amsterdam: Center for Evidence-Based Management. Retrieved from https://www.cebma.org/wp-content/uploads/Evidence-Based-Practice-The-Basic-Principles.pdf

Briggs, D.C. (2008). Synthesizing causal inferences. Educational Researcher, 37(1), 15-22.

Callen, M., Khan, A., Khwaja, A.I., Liaqat, A., & Myers, E. (2017). These 3 barriers make it hard for policymakers to use the evidence that development researchers produce. The Washington Post, August 13. Retrieved from https://www.washingtonpost.com/news/monkey-cage/wp/2017/08/13/these-3-barriers-make-it-hard-for-policymakers-to-use-the-evidence-that-development-researchers-produce/?utm_term=.5d1ceceb43b6

Carbo, M., & Hodges, H. (1988, Summer). Learning styles strategies can help students at risk. Teaching Exceptional Children, 48-51.

Carnine, D. (1991). Curricular interventions for teaching higher order thinking to all students: Introduction to the special series. Journal of Learning Disabilities, 24, 261-269.

Carnine, D. (1995a). Trustworthiness, useability, and accessibility of educational research. Journal of Behavioral Education, 5, 251-258.

Carnine, D. (1995b). The professional context for collaboration and collaborative research. Remedial and Special Education, 16(6), 368-371.

Carnine, D. (2000). Why education experts resist effective practices (and what it would take to make education more like medicine). Washington, DC: Fordham Foundation. Retrieved from http://www.edexcellence.net/library/carnine.html

Chambless, D. L. & Ollendick, T. H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.

Cimbricz, S. (2002, January 9). State-mandated testing and teachers' beliefs and practice. Education Policy Analysis Archives, 10(2). Retrieved from http://epaa.asu.edu/epaa/v10n2.html

Ciullo, S., Lembke, E.S., Carlisle, A., Newman Thomas, C., Goodwin, M., & Judd, L. (2016). Implementation of evidence-based literacy practices in middle school Response to Intervention: An observation study. Learning Disability Quarterly, 39(1), 44–57.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Coleman, J., Campbell, E., Hobson, C., McPartland, J., Mood, A., Weinfeld, F. D., & York, R. (1966). Equality of educational opportunity. Washington D.C.: Department of Health, Education and Welfare.

Cook, B. G., & Cook, S. C. (2011). Unraveling evidence-based practices in special education. The Journal of Special Education, 47(2), 71-82.

Cuttance, P. (2005). Education research 'irrelevant.' The Age, July 5, p.5.

DEECD (2012). Student Support Group Guidelines 2012. Retrieved from http://www.eduweb.vic.gov.au/edulibrary/public/stuman/wellbeing/SSG_Guidelines_2012.pdf

Department for Education and Employment. (1998). The National Literacy Strategy: Framework for Teaching. London: Crown.

Dobson, R., & Elliott, J. (2005). Dr Spock’s advice blamed for cot deaths. London: University College. Retrieved from http://www.ucl.ac.uk/news-archive/in-the-news/may-2005/latest/newsitem.shtml?itnmay0504

Dunn, R., & Dunn, K. (1987). Understanding learning styles and the need for individual diagnosis and prescription. Columbia, CT: The Learner's Dimension.

Farkas, S., & Duffett, A. (2011). Cracks in the ivory tower? The views of education professors circa 2010. Thomas B. Fordham Institute. Retrieved from http://edex.s3-us-west-2.amazonaws.com/publication/pdfs/Cracks%20In%20The%20Ivory%20Tower%20-%20Sept%202010_8.pdf

Fien, H., Smith, J. L. M., Smolkowski, K., Baker, S. K., Nelson, N. J., & Chaparro, E. A. (2015). An examination of the efficacy of a multitiered intervention on early reading outcomes for first grade students at risk for reading difficulties. Journal of Learning Disabilities, 48(6), 602–621.

Fister, S., & Kemp, K. (1993). Translating research: Classroom application of validated instructional strategies. In R.C. Greaves & P. J. McLaughlin (Eds.), Recent advances in special education and rehabilitation. Boston, MA: Andover Medical.

Fox, M. (2005, August 16). Phonics has a phoney role in the literacy wars. Sydney Morning Herald, p.6.

Gable, R. A. & Warren. S. F. (1993). The enduring value of instructional research. In Robert Gable & Steven Warren (Eds.), Advances in mental retardation and developmental disabilities: Strategies for teaching students with mild to severe mental retardation. Philadelphia: Jessica Kingsley.

Gersten, R., & Guskey, T. (1985, Fall). Transforming teacher reluctance into a commitment to innovation. Direct Instruction News, 11-12.

Gersten, R. (2001). Sorting out the roles of research in the improvement of practice. Learning Disabilities: Research & Practice, 16(1), 45-50.

Gersten, R., Chard, D., & Baker, S. (2000). Factors enhancing sustained use of research-based instructional practices. Journal of Learning Disabilities, 33, 445-457.

Gibb, N. (2017). The importance of an evidence-informed profession. Department of Education. Retrieved from https://www.gov.uk/government/speeches/nick-gibb-the-importance-of-an-evidence-informed-profession

Goldacre, B. (2013). Building evidence into education. Retrieved from http://www.badscience.net/2013/03/heres-my-paper-on-evidence-and-teaching-for-the-education-minister/

Haan Foundation. (2012). The federal education research project. Retrieved from http://www.haan4kids.org/fed_proj/

Hattie, J. A.C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London and New York: Routledge.

Hempenstall, K. (1996). The gulf between educational research and policy: The example of Direct Instruction and Whole Language. Behaviour Change, 13, 33-46.

Hempenstall, K. (1999). The gulf between educational research and policy: The example of Direct Instruction and whole language. Effective School Practices, 18(1), 15-29.

Jencks, C. S., Smith, M., Acland, H., Bane, M. J., Cohen, D., Gintis, H., Heyns, B., & Michelson, S. (1972). Inequality: A reassessment of the effect of family and schooling in America. New York: Basic Books.

Jones, G. (2016). Evidence based practice: A handbook for teachers and school leaders. Retrieved from https://drive.google.com/file/d/0B3LUp9PxnSZlZUVUSDJnUUE4M00/view?usp=sharing

Kim, Y., & Steiner, P. (2016). Quasi-experimental designs for causal inference. Educational Psychologist, 51(3-4), 395-405.

Leach, D. J. (1987). Increasing the use and maintenance of behaviour-based practices in schools: An example of a general problem for applied psychologists? Australian Psychologist, 22, 323-332.

Levin, B. (2010). Governments and education reform: Some lessons from the last 50 years. Journal of Education Policy, 25(6), 739-747.

Lloyd, J.L. (2006). Teach effectively. Retrieved from http://teacheffectively.com/index.php?s=meta+analysis

Lomax, R.G. (2004). Whither the future of quantitative literacy research? Reading Research Quarterly, 39(1), 107-112.

Maggs, A., & White, R. (1982). The educational psychologist: Facing a new era. Psychology in the Schools, 19, 129-134.

Marshall, J. (1993). Why Johnny can't teach. Reason, 25(7), 102-106.

Medew, J. (2012). Push for tougher line on surgery. The Age, October 4, p.5.

Mooney, M. (1988). Developing life-long readers. Wellington, New Zealand: Learning Media.

Morris, J., & Sah, P. (2016). Neuroscience and education: Mind the gap. Australian Journal of Education, 60(2), 146–156.

Morris, P.A., & Reardon, S.F. (2017). Moving education science forward by leaps and bounds: The need for interdisciplinary approaches to improving children's educational trajectories. Journal of Research on Educational Effectiveness, 10(1), 1-6.

National Inquiry into the Teaching of Literacy. (2005). Teaching Reading: National Inquiry into the Teaching of Literacy. Canberra: Department of Education, Science, and Training. Retrieved from www.dest.gov.au/nitl/report.htm

National Institute for Quality Teaching and School Leadership. (2005, August 25). National accreditation of pre-service teacher education. Retrieved from http://www.teachingaustralia.edu.au/home/What we are saying/media_release_pre_service_teacher_ed_accreditation.pdf

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington, DC: U.S. Department of Health and Human Services.

Nous Group Consortium. (2011). Schooling challenges and opportunities: A report for the Review of Funding for Schooling Panel. Nous Group Consortium, August 29. Retrieved from http://www.deewr.gov.au/Schooling/ReviewofFunding/Pages/PaperCommissionedResearch.aspx

Poplin, M. (1988). The reductionist fallacy in learning disabilities: Replicating the past by reducing the present. Journal of Learning Disabilities, 21, 389-400.

Primary National Strategy (2006). Primary framework for literacy and mathematics. UK: Department for Education and Skills. Retrieved from http://www.standards.dfes.gov.uk/primaryframeworks/

Rood, D., & Leung, C. C. (2006, August 16). Litigation warning as private school settles complaint over child's literacy. The Age, p.6.

Rose, J. (2006). Independent review of the teaching of early reading. Bristol: Department for Education and Skills. Retrieved from www.standards.dfes.gov.uk/rosereview/report.pdf

Rousseau, D.M., & Gunia, B.C. (2015). Evidence-based practice: The psychology of EBP implementation. Annual Review of Psychology, 67, 667-692. Retrieved from https://www.cebma.org/wp-content/uploads/Rousseau-Gunia-ARP-2015.pdf

Runciman, W.B., Hunt, T.D., Hannaford, N.A., Hibbert, P.D., Westbrook, J.I., Coiera, E.W., Day, R.O., Hindmarsh, D.M., McGlynn, E.A., & Braithwaite J. (2012). CareTrack: Assessing the appropriateness of health care delivery in Australia. Medical Journal of Australia, 197(2), 100-105.

Senate Employment, Workplace Relations and Education Committee. (2007). Quality of school education. Commonwealth of Australia. Retrieved from http://www.aph.gov.au/SEnate/committee/eet_ctte/completed_inquiries/2004-07/academic_standards/index.htm

Shavelson, R. J., & Towne, L. (Eds.). (2002). Scientific research in education. Washington, DC: National Academy Press.

Slavin, R. E. (2002). Evidence-based education policies: Transforming educational practice and research. Educational Researcher, 31(7), 15-21.

Slavin, R. E. (2003). A reader's guide to scientifically based research. Educational Leadership, 60(5), 12-16. Retrieved from http://www.ascd.org/publications/ed_lead/200302/slavin.html

Slavin, R. E. (2004). Education research can and must address "What Works" questions. Educational Researcher, 33(1), 27-28.

Slavin, R. E. (2008). Evidence-based reform in education: Which evidence counts? Educational Researcher, 37(1), 47-50. Retrieved from http://search.proquest.com/docview/216905441?accountid=13552

Slocum, T. A., Spencer, T. D., & Detrich, R. (2012). Best available evidence: Three complementary approaches. Education and Treatment of Children, 35(2), 153–181.

Smith, F. (1973). Psychology and reading. New York: Holt, Rinehart & Winston.

Smith, F. (1992). Learning to read: The never-ending debate. Phi Delta Kappan, 74, 432-441.

Spock B. (1946). The commonsense book of baby and child care. New York: Pocket Books.

Stanovich, K. (1996). How to think straight about psychology (4th ed.). New York: Harper Collins.

Stanovich, P. J., & Stanovich, K. E. (2003). How teachers can use scientifically based research to make curricular & instructional decisions. Jessup, MD: The National Institute for Literacy. Retrieved from http://www.nifl.gov/partnershipforreading/publications/html/stanovich/

Stone, J. E. (1996, April 23). Developmentalism: An obscure but pervasive restriction on educational improvement. Education Policy Analysis Archives. Retrieved from http://seamonkey.ed.asu.edu/epaa.

The 1999 Omnibus Appropriations Bill (1998). The Reading Excellence Act (pp.956-1007). Retrieved from http://www.house.gov/eeo

Towne, L. (2002, February 6). The principles of scientifically based research. Speech presented at the U.S. Department of Education, Washington, DC. Retrieved from www.ed.gov/nclb/research/

Travers, J.C. (2017). Evaluating claims to avoid pseudoscientific and unproven practices in special education. Intervention in School and Clinic, 52(4), 195–203.

U.S. Department of Education (2002, Jan). No Child Left Behind Act, 2001. Retrieved from http://www.ed.gov/offices/OESE/esea/

U.S. Department of Education (2003). Identifying and implementing educational practices supported by rigorous evidence: A user friendly guide. Washington, D.C.: Institute of Education Sciences, U.S. Department of Education. Retrieved from http://www.ed.gov/print/rschstat/research/pubs/rigorousevid/guide/html

Vaughan, T., & Albers, B. (2017). Research to practice – implementation in education. Teacher, 20 June 2017, ACER. Retrieved from https://www.teachermagazine.com.au/articles/research-to-practice-implementation-in-education

Viadero, D. (2002). Research: Holding up a mirror. Editorial Projects in Education, 21(40), 32-35.

Weaver, C. (1988). Reading process and practice. Portsmouth, NH: Heinemann.

Weaver, C., Patterson, L., Ellis, L., Zinke, S., Eastman, P., & Moustafa, M. (1997). Big Brother and reading instruction. Retrieved from http://www.m4pe.org/elsewhere.htm

Weigmann, K. (2013). Educating the brain. EMBO reports, 14(2), 136-139. European Molecular Biology Organization.

Whitehurst, G. J. (2002). Statement of Grover J. Whitehurst, Assistant Secretary for Educational Research and Improvement, before the Senate Committee on Health, Education, Labor and Pensions. Washington, D.C.: U.S. Department of Education. Retrieved from http://www.ed.gov/offices/IES/speeches/

Woodward, J. (1993). The technology of technology-based instruction: Comments on the research, development, and dissemination approach to innovation. Education & Treatment of Children, 16, 345-360.

Zient, J.D. (2012). Use of evidence and evaluation in the 2014 Budget: Memorandum to the heads of executive departments and agencies. Executive Office of the President, Office of Management and Budget. Retrieved from http://www.whitehouse.gov/sites/default/files/omb/memoranda/2012/m-12-14.pdf


Appendix

A great place to start is The National Early Childhood Technical Assistance Center (NECTAC). NECTAC has compiled a list of selected resources on defining, understanding, and implementing evidence-based practice. Links are provided for those materials that are freely available full-text online. http://www.nectac.org/topics/evbased/evbased.asp

ERIC Digests (http://www.ericdigests.org/): short reports (1,000-1,500 words) on topics of prime current interest in education. A large variety of topics are covered, including teaching, learning, charter schools, special education, higher education, home schooling, and many more.

The Promising Practices Network (http://www.promisingpractices.net/) web site highlights programs and practices that credible research indicates are effective in improving outcomes for children, youth, and families.

Visible Learning: Hattie, J.A.C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London and New York: Routledge.

Blueprints for Violence Prevention (http://www.colorado.edu/cspv/blueprints/index.html) is a national violence prevention initiative to identify programs that are effective in reducing adolescent violent crime, aggression, delinquency, and substance abuse.

The International Campbell Collaboration (http://www.campbellcollaboration.org/Fralibrary.html) offers a registry of systematic reviews of evidence on the effects of interventions in the social, behavioral, and educational arenas.

Social Programs That Work (http://www.excelgov.org/displayContent.asp?Keyword=prppcSocial) offers a series of papers developed by the Coalition for Evidence-Based Policy on social programs that are backed by rigorous evidence of effectiveness.

Coalition for Evidence-Based Policy. (2003). Identifying and implementing educational practices supported by rigorous evidence: A user friendly guide. Washington, DC: U.S. Department of Education. Retrieved May 13, 2004, from http://toptierevidence.org/wordpress/

Comprehensive School Reform Program Office. (2002). Scientifically based research and the Comprehensive School Reform (CSR) Program. Washington, DC: U.S. Department of Education. Retrieved May 13, 2004, from http://www.ed.gov/programs/compreform/guidance/appendc.pdf

The Florida Center for Reading Research aims to disseminate information about research-based practices related to literacy instruction and assessment for children in pre-school through to Year 12 (www.fcrr.org/).

The U.S.-based American Institutes for Research (AIR) released a 2005 guide that uses strict scientific criteria to evaluate the quality and effectiveness of 22 primary school teaching models. AIR researchers conducted extensive reviews of about 800 studies. See http://www.air.org/news/documents/Release200511csr.htm

Major reviews of the primary research can provide additional surety of program value. In a U.S. Department of Education meta-analysis, Comprehensive School Reform and Student Achievement (Borman, Hewes, Overman, & Brown, 2002; see below), Direct Instruction was assigned the highest classification, Strongest Evidence of Effectiveness, as ascertained by the quality of the evidence, the quantity of the evidence, and statistically significant and positive results. The report concluded that its effects are relatively robust, that the model can be expected to improve students' test scores, and that it certainly deserves continued dissemination and federal support.

Borman, G.D., Hewes, G.M., Overman, L.T., & Brown, S. (2002). Comprehensive school reform and student achievement: A meta-analysis. Report No. 59. Washington, DC: Center for Research on the Education of Students Placed At Risk (CRESPAR), U.S. Department of Education. Retrieved 12/2/03 from http://www.csos.jhu.edu./crespar/techReports/report59.pdf

The Council for Exceptional Children provides informed judgements regarding professional practices in the field. See what its Alert series says about Phonological Awareness, Social Skills Instruction, Class-wide Peer Tutoring, Reading Recovery, Mnemonic Instruction, Co-Teaching, Formative Evaluation, High-Stakes Assessment, Direct Instruction, and Cooperative Learning. Found at http://dldcec.org/ld_resources/alerts/.

The Oregon Reading First Center reviewed and rated nine comprehensive reading programs. To be considered comprehensive, a program had to (a) include materials for all grades from Prep to Year 3; and (b) comprehensively address the five essential components of reading. The programs were Reading Mastery Plus 2002, Houghton Mifflin The Nation's Choice 2003, Open Court 2002, Harcourt School Publishers Trophies 2003, Macmillan/McGraw-Hill Reading 2003, Scott Foresman Reading 2004, Success For All Foundation Success for All, Wright Group Literacy 2002, and Rigby Literacy 2000. See: Curriculum Review Panel. (2004). Review of Comprehensive Programs. Oregon Reading First Center. Retrieved 16/1/2005 from http://reading.uoregon.edu/curricula/core_report_amended_3-04.pdf

The What Works Clearinghouse (http://www.w-w-c.org/) was established by the U.S. Department of Education's Institute of Education Sciences to provide educators, policymakers, and the public with a central, independent, and trusted source of scientific evidence about what works in education.