Assessment for Physics Faculty


The Data Explorer and Assessment Resources for Faculty (DEAR-Faculty) project has three major activities:

  • studying the assessment needs and practices of physics faculty through interviews and surveys
  • promoting the use of research-based assessment techniques among physics faculty through PhysPort and the Data Explorer
  • conducting research on teaching methods and student understanding of physics topics using research-based assessment instruments

This project started in January 2012 and was funded through the NSF WIDER:EAGER and NSF WIDER programs. This work developed into PhysPort's research effort.

Project Description

Physics departments are under increasing pressure to assess the student learning outcomes of their classes and programs in order to reduce Drop/Fail/Withdraw rates, maintain program size, and receive or renew programmatic accreditation (e.g. with ABET) or departmental accreditation (e.g. with regional higher education associations). The field of physics education research (PER) has made significant progress in developing research-based assessment instruments, techniques for formative assessment, and alternative assessments for complex learning goals. However, there is a wide gap between the language and goals for assessment used by physics faculty members and department chairs and those used by physics education researchers. This gap results in a disconnect: researchers do not answer the questions about assessment that matter most to faculty, and faculty do not use assessments that are informed by research.

The goal of this project is to build a bridge between these two groups by providing tools (online assessment resources and synthesis research) that arm faculty to do better assessment, and professional development (a workshop and online support for physics department chairs) that teaches chairs, as agents of change, how to use those tools. This work will have three major impacts:

    1. Department chairs will learn to assess learning in their departments in a way that is consistent with their goals and language and connected to results in PER, thus meeting their need for assessment tools and transforming the way assessment is done in physics departments throughout the country.
    2. Physics education researchers will increase their understanding of the assessment needs related to program review, resulting in improved tools to meet these needs and potential new areas of research.
    3. Assessment is a gateway drug that will lead to increased adoption of evidence-based teaching. By arming chairs with good assessment practices tied to their needs and goals, this project will give them the tools to engage their departments in a systematic process of examining and improving teaching and student learning. We will facilitate the connection between assessment and evidence-based teaching by linking online assessment resources to existing resources for PER-based teaching methods on the PER User's Guide (http://perusersguide.org).

Good assessment drives good teaching

There is substantial evidence that implementing meaningful department-wide assessment practices can have a strong impact on increasing the use of research-based teaching methods. For example, in a study of the results of ten years of assessment of introductory physics courses using research-based assessment instruments at the University of Colorado (CU), Pollock and Finkelstein report, “Collecting and analyzing these data is good not only for individual course assessments, but also for studying and supporting systematic transformation. We can use such data to move beyond assessments of a single instructor and a single course to observe the factors that support the widespread adoption and effective implementation of educational practices. For instance, at CU, the data serve as a mechanism for change. Collecting and reporting these data has become a part of departmental practice. Faculty are privately informed of their performance each semester, and given anonymized versions of these plots to contextualize their performance. While far from perfect, it helps us move beyond the standard end-of-term student evaluation as the sole metric of quality. We are beginning to couple teaching with learning.” (emphasis added)

There is further evidence that systematic assessment is not only helpful, but also necessary, for meaningful transformation of teaching practice. In a literature review of 191 articles on promoting instructional change in undergraduate STEM education, Henderson et al. report, “Successful strategies focused on disseminating curriculum and pedagogy typically involve more than one of the following components: coordinated and focused efforts lasting over an extended period of time, use of performance evaluation and feedback, and deliberate focus on changing faculty conceptions.”

Further, STEM department chairs are motivated by increasing calls to implement effective assessment programs to fulfill accreditation requirements: “Acknowledging the growing consensus that student learning outcomes are the ultimate test of the quality of academic programs, accreditors have also refocused their criteria, reducing the emphasis on quantitative measures of inputs and resources and requiring judgments of educational effectiveness from measurable outcomes.” Research has found “evidence of a connection between changes in accreditation and the subsequent improvement of programs, curricula, teaching, and learning in undergraduate programs.”

Project Staff

PIs: Sam McKagan, Eleanor Sayre

Co-Is: Bob Hilborn, Adrian Madsen, Bill Hsu, Eugene Vasserman

User Interface Design Consultant: Sandy Martinuk

Last update: February 2019