After earning an MA in linguistics and Romance languages in Germany, I began my career as a researcher and educator when I joined the research team of the Third International Mathematics and Science Video Study (TIMSS 1994) at UCLA in Los Angeles, California. My work on this study and, eventually, its larger follow-up (TIMSS 1999), the first observational studies to compare teaching practices across countries using large, nationally representative video samples of authentic classroom instruction, sparked my interest in mathematics teaching and learning, interdisciplinary approaches, and research methodology, and led me to earn a PhD in quantitative research methods from UCLA. Those experiences have shaped how I approach my own research in many ways.
It was also during that time that I first became interested in teacher knowledge, specifically usable teacher knowledge, and its relationship to teaching and student learning. The videotaped teachers in the different countries seemed not only to teach mathematics in different ways but also to differ in how they understood and interpreted classroom events as they unfolded, which led them to make different instructional decisions. When we asked collaborators (teachers, teacher educators, and researchers) from the participating countries to comment on the teaching they had observed in the videotaped lessons from their own and other countries, their comments were surprisingly consistent within each country, revealing different cultural orientations toward mathematics teaching and learning (cultural scripts), but they also seemed to reveal differences in knowledge. This observation led me to ask whether similar systematic differences might be observable in teachers' analyses of teaching events within a given country, reflecting differences in usable teacher knowledge and expertise.
During my graduate studies at UCLA, and eventually for my dissertation, I developed a prototype measure consisting of several short, mathematically and pedagogically interesting video clips and asked teachers with different levels of experience and expertise to comment on the observed teaching events. I developed rubrics to score the written responses, yielding measures of usable teacher knowledge, and began exploring how these scores related to other measures and criteria. Upon graduation, I became a research scientist at the LessonLab Research Institute in Santa Monica, California, where I continued and expanded this work, with promising results, supported by $1.5M in funding from the Institute of Education Sciences (IES). We now have more than 120 video clips with psychometric information that can be used as research measures, either as existing scales or to custom-build assessment scales around specific topics or mathematical ideas in the elementary and middle school curriculum (www.teknoclips.org).
During my time at LessonLab, I also became interested in teacher value-added scores as another measure of teacher performance and teacher quality, and in their potential use in teacher accountability and formal teacher evaluation systems. Through a $1.5M grant from the NSF, I was given the opportunity to study their statistical properties in depth, along with their relationship to instructional quality and to curriculum-based measures of student learning. Since joining the University of Arizona, I have continued my work on usable teacher knowledge by exploring text classification approaches to automatically score teachers' responses to the video clips, with additional funding from IES, and my work on teacher value-added scores by developing and applying observational rubrics of instructional quality to a large set of videotaped classroom lessons. Most recently, with additional funding from NSF, we began exploring automated scoring of our observational rubrics using verbatim lesson transcripts.
I am also the Project Lead of the Teachers as Learners, Teachers as Thinkers project in the Department of Educational Psychology.