Project Rationale

A literature-based argument for the need for a project like TLT

ACT Assessment scores for 2004 "reveal that an alarming number of graduating high school seniors continue to be unprepared for college science and math courses" (ACT, 2004). The need for reform of high school science and math instruction is clear, and ideas for reform are plentiful --- some more credible than others. Formative assessment is arguably the pedagogic practice that most dramatically improves student learning, especially for traditionally underachieving groups, so great attention should be paid to high school STEM reforms based upon it.

"Formative assessment" (FA) means using assessment (measurement) of student knowledge and learning in order to provide guiding feedback for students and teachers: assessment for learning, rather than assessment of learning. For assessment to be formative, the feedback must be used to adapt teaching and learning (Black & Wiliam, 1998a; Bransford et al., 1999; Sadler, 1989). According to the research literature, "innovations which include strengthening the practice of formative assessment produce significant, and often substantial, learning gains" across ages, school subjects, and countries --- gains "larger than most of those found for educational interventions" (Black & Wiliam, 1998b). Many studies show that FA is particularly beneficial for students who are traditionally "low achievers," with potential to help narrow the achievement gap between students from different socioeconomic strata (Black & Wiliam, 1998b; Stiggins, 2002). This last observation is particularly relevant in light of the No Child Left Behind act. FA can elicit richer classroom discourse and help students become more engaged and motivated (Gallagher, 2000). It can help students become aware of the limits of their understanding and the actions they can take to progress (Ramaprasad, 1983; Sadler, 1989). It can also catalyze significant teacher learning, as feedback gives teachers new understanding of student learning (Black et al., 2002) and the effectiveness of their own practices (Bransford et al., 1999).

Practicing FA is difficult. It requires new modes of pedagogy, deep changes in classroom practice, adoption of new roles by teacher and students, and a renegotiation of the implied contract between teacher and students (Black et al., 2002; Black & Wiliam, 1998b). Overall, FA is not well understood by teachers, even those attempting to implement it. Common pitfalls that lead to "weak practice" of FA include encouraging superficial, rote, detail-oriented learning; failing to engage in critical review or discussion of the assessment questions used; confusing summative with formative uses, including overemphasis on grading; and emphasizing competition among students rather than personal improvement (Black & Wiliam, 1998a).

The literature provides some insight into what makes for effective FA (Black et al., 2002; Black & Wiliam, 1998a). The assignments or questions used should encourage deeper and more reflective learning by requiring inferential and deductive reasoning, and they must be designed to reveal the thinking students use to reach their answers, not just the answers' correctness. In addition to good instruments, teachers need the ability to interpret results and decide how to respond. Students learn more when the amount and kind of feedback they receive are tuned to their particular and immediate needs. FA is perhaps most effective as "real-time" FA, in which feedback on learning is gathered almost continuously and instruction is adjusted accordingly (Stiggins, 2002).

Although the literature on the effects of FA and the ways it has been implemented is plentiful, it offers only scattered anecdotal accounts of teachers' learning of FA-based pedagogy. There is evidence that practicing FA effectively requires subject-specific pedagogical content knowledge as well as generic FA skills that apply across subjects (Black et al., 2002).

Practicing real-time FA with the number of students found in typical classrooms can be difficult or impossible because of the communication and information-processing load it places on the teacher. Classroom response systems (CRSs) help to overcome this problem by providing a supplemental, technology-enhanced channel of communication between teacher and students. A CRS consists of a set of input devices for students, such as laptop computers, PDAs, programmable calculators, or remote-control-style "clicker" keypads; a computer for the instructor; some kind of network or mechanism connecting these together; and software to control the system. It enables a teacher to present a question, problem, or task to the class; have students enter their responses into the input devices; collect and aggregate the answers; and display the aggregated answers as a histogram or other representation. Using such a tool, the teacher can efficiently pose questions to the class, motivate all students to answer, assimilate the range of answers, and spark a class-wide discussion. CRSs are a key enabling technology for real-time FA (Beatty, 2004; Dufresne et al., 1996; Roschelle et al., 2004).
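
To make the data flow concrete, the following minimal sketch shows, in Python, the aggregation step at the heart of any CRS: collecting one response per student and rendering the distribution as a histogram. All names here are hypothetical illustrations for this proposal, not the API of any particular commercial system.

    from collections import Counter

    def aggregate_responses(responses):
        """Keep each student's latest answer, then tally the distribution."""
        latest = {}  # student_id -> answer; later submissions overwrite earlier ones
        for student_id, answer in responses:
            latest[student_id] = answer
        return Counter(latest.values())

    def render_histogram(tally, width=30):
        """Render the answer distribution as a simple text histogram."""
        total = sum(tally.values()) or 1
        return "\n".join(
            f"{answer}: {'#' * round(width * count / total)}  {count} ({100 * count / total:.0f}%)"
            for answer, count in sorted(tally.items())
        )

    # Responses arriving as (student_id, answer) pairs, e.g. from clicker keypads.
    stream = [("s01", "A"), ("s02", "C"), ("s03", "B"), ("s02", "B"), ("s04", "C")]
    print(render_histogram(aggregate_responses(stream)))

A production CRS adds device networking, roster management, and a graphical display, but its formative core is exactly this aggregate-and-display step.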

Using a CRS does not automatically result in FA. The UMass Physics Education Research Group, of which Leonard, Gerace, and Beatty are members, has developed a pattern of implementing real-time FA with a CRS that we call technology-enhanced formative assessment (TEFA). In this pattern, a typical class is structured around an iterative "question cycle": the teacher presents a question to the students; gives them a few minutes to discuss it in small groups and reach consensus; collects answers through the CRS; displays the histogram to the class; moderates a class-wide, student-centered discussion of arguments for and against the various possible answers; and then selects some kind of "follow-up" micro-lecture, question cycle, or other activity in response to the needs revealed by the students' answers and discussion (Beatty, 2004; Dufresne et al., 1996; Wenk et al., 1997). We have found this approach to be both pedagogically effective and popular with students. Others have developed similar strategies for CRS use and have also reported learning gains (Roschelle et al., 2004), although from our perspective most underemphasize the teacher's real-time formative use of the feedback.
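
As an illustration only (the question cycle is classroom practice, not software), the cycle's structure can be summarized in a short Python sketch. Every name below is a hypothetical stand-in for a phase of the class, and the follow-up decision rule is invented for the example.

    from collections import Counter

    def run_question_cycle(question, collect_answers):
        """One iteration of the TEFA question cycle described above.

        `collect_answers` stands in for the CRS; the other phases are
        classroom activity, represented here only as printed prompts.
        """
        print(f"1. Present the question: {question}")
        print("2. Small groups discuss and reach consensus.")
        answers = collect_answers()  # CRS gathers one answer per student or group
        print(f"3. Display the histogram: {dict(Counter(answers.values()))}")
        print("4. Moderate class-wide discussion of the reasoning behind each answer.")
        # 5. Choose a follow-up based on what the answers reveal (toy decision rule).
        if len(set(answers.values())) > 1:
            return "another question cycle or a targeted micro-lecture"
        return "move on to the next activity"

    # Example run with canned CRS input.
    follow_up = run_question_cycle(
        "Which free-body diagram describes a block at rest on an incline?",
        lambda: {"s01": "A", "s02": "C", "s03": "A"},
    )
    print(f"Teacher's follow-up: {follow_up}")

The point of the sketch is the control flow: what the teacher does next is a function of the aggregated answers, which is precisely what makes the cycle formative.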

While the evidence is strong that CRSs can enable powerful practice of real-time FA and benefit learning, little research has been conducted on the details of how CRS use affects learning, and even less on how teachers develop mastery of it and how teacher educators can assist them. Most extant wisdom is based on the personal and anecdotal experiences of individuals who have found their own way to mastery or who have assisted others through informal mentoring or isolated workshops (Dufresne et al., 2000; Mazur, 1997; Roschelle et al., 2004).

The only published research we are aware of on teacher learning of TEFA comes from Assessing-to-Learn (A2L), a previous NSF-funded project of ours (ESI-9730438) on the use of TEFA in high school physics instruction (Feldman & Capobianco, 2003). A2L began with the intention of creating prototype CRS questions for high school physics teachers to use with TEFA. However, we discovered that learning to use TEFA in its fullness is a remarkably difficult and time-consuming process for most teachers. In addition to mastering a complex set of skills, it requires internalizing perspectives and habits of mind that change a teacher's very "way of being a teacher" (Blum, 1999; Davis et al., 2003). Even teachers who identified with the stated goals of FA often practiced it in ways inconsistent with those goals, resulting in the "weak practice" discussed above. Although teachers and students appeared to enjoy the FA activities and claimed they were valuable learning experiences, most teachers did not use the feedback they received to inform real-time teaching decisions; the activities were therefore not truly "formative," at least for the teachers.

As a result, we broadened the scope of the A2L project to study the factors that influence teachers' adoption of FA practices and the kinds of support teachers need to fully implement TEFA. Our findings inform Section 3 of this proposal. In essence, we have a relatively detailed model of the capabilities and perspectives that contribute to TEFA mastery, but a much cruder understanding of how teachers come to acquire them and reach mastery. To develop effective methods of teaching TEFA mastery to pre-service and in-service STEM teachers, and to know when and how to evaluate impacts on student learning, a more substantial research base on teacher learning of TEFA is required. No such research exists apart from the Assessing-to-Learn study. This project aims to redress that deficit, thereby strengthening the practice of STEM instruction by improving teacher education and professional development for TEFA, and preparing the ground for a large study of the scalability and effects of TEFA-based pedagogy.

Because TEFA is a well-defined and focused practice that nevertheless requires a broad range of skills and consonant perspectives to master, we believe it makes a good point of entry for research into the encompassing topics of formative assessment and of teacher learning in general. Thus, a research project on teacher learning of TEFA would contribute to the study of effective formative assessment and of effective teacher education and development.

By contributing significantly to the research base on teacher learning of formative assessment, and by essentially founding a research base on teacher learning of TEFA, the project will enable better teacher education and professional development. By laying the foundations for a large IERI-style scaling study of TEFA, it will make possible a concerted, effectual initiative to promulgate the practice of TEFA. The goal of such a study would be to produce evidence meeting the standards of the What Works Clearinghouse. Since TEFA enables and enhances formative assessment in STEM classrooms, and since formative assessment significantly and often dramatically improves student learning --- especially among students who are often "left behind" by traditional STEM instruction --- this would be a major advance for STEM education and a step towards closing the achievement gap between socioeconomic groups.

According to the TPC Program Solicitation section on Research Projects, "Of special interest is research around pilot-scale models of STEM teacher professional continuum strategies that show promise for scalability" --- a description that fits our project well. Classroom response systems are now affordable commercial products appearing in quantity in K-12 schools and universities; one manufacturer claims to have "over 600,000 response pads now being used in all 50 states in thousands of k-12 schools as well as over 450 universities and 10 foreign countries." The technology and the interest needed for explosive scaling clearly exist, even without the support of research-based pedagogy and teacher professional development.

References

  • ACT. (2004). Average National ACT Score Rises for First Time Since 1997, But Many Students Still Not Ready for College Science and Math Courses (national data release). Iowa City, IA: ACT.
  • Beatty, I. (2004). Transforming student learning with classroom communication systems (Research Bulletin No. ERB0403). Boulder, CO: EDUCAUSE Center for Applied Research.
  • Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2002). Working inside the black box: Assessment for learning in the classroom. London, UK: King's College London Department of Education and Professional Studies.
  • Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5(1), 7-71.
  • Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139-148.
  • Blum, L. (1999). Ethnicity, identity, and community. In M. Katz, N. Noddings & K. Strike (Eds.), Justice and Caring: The Search for Common Ground in Education (pp. 127-145). New York, NY: Teachers College Press.
  • Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (1999). How People Learn: Brain, Mind, Experience, and School. Washington, D.C.: National Academy Press.
  • Davis, K., Feldman, A., Irwin, C., Pedevillano, E., Capobianco, B., & Weiss, T. (2003). Wearing the letter jacket: Legitimate participation in a collaborative science, mathematics, engineering, and technology education reform project. School Science and Mathematics, 103(3), 121-134.
  • Dufresne, R. J., Gerace, W. J., Leonard, W. J., Mestre, J. P., & Wenk, L. (1996). Classtalk: A classroom communication system for active learning. Journal of Computing in Higher Education, 7(2), 3-47.
  • Dufresne, R. J., Gerace, W. J., Mestre, J. P., & Leonard, W. J. (2000). ASK-IT/A2L: Assessing student knowledge with instructional technology (Technical Report No. UMPERG-2000-09). Amherst, MA: University of Massachusetts Physics Education Research Group.
  • Feldman, A., & Capobianco, B. (2003, April). Real-time formative assessment: A study of teachers' use of an electronic response system to facilitate serious discussion about physics concepts. Paper presented at the Annual Meeting of the American Educational Research Association, Chicago, IL.
  • Gallagher, J. J. (2000). Teaching for understanding and application of science knowledge. School Science and Mathematics, 100(6), 310-318.
  • Mazur, E. (1997). Peer Instruction: A User's Manual. Upper Saddle River, NJ: Prentice Hall.
  • Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28(1), 4-13.
  • Roschelle, J., Abrahamson, L. A., & Penuel, W. R. (2004, April 16). Integrating classroom network technology and learning theory to improve classroom science learning: A literature synthesis. Paper presented at the Annual Meeting of the American Educational Research Association, San Diego, CA.
  • Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18(2), 119-144.
  • Stiggins, R. J. (2002). Assessment crisis: The absence of assessment FOR learning. Phi Delta Kappan, 83(10), 758-765.
  • Wenk, L., Dufresne, R. J., Gerace, W. J., Leonard, W. J., & Mestre, J. P. (1997). Technology-assisted active learning in large lectures. In A. P. McNeal & C. D'Avanzo (Eds.), Student-Active Science: Models of Innovation in College Science Teaching (pp. 431-451). Orlando, FL: Saunders College Publishing.