U.Va.'s Feldon Proposes New Way to Measure Ph.D.s' Preparedness in Science, Technology, Engineering and Math Fields

July 16, 2010 — What does it mean to be an expert in science, technology, engineering and math – the so-called "STEM" fields, which change so quickly that before you've had a chance to boot up the latest electronic gadget, a newer version already exists?

The work of David Feldon, an assistant professor at the University of Virginia's Curry School of Education, is focused on answering this question.

The landscape of scientific inquiry and technological capability changes so rapidly that leaders in STEM research will need to create new approaches to solving complex problems such as climate change, natural resource management, medical care, urban infrastructure and security.

To ensure that doctoral programs in the STEM fields are producing the best possible scientists, Feldon and his colleagues are working to create an evidence-based system for measuring those programs' effectiveness.

He explains his work in "Performance-Based Data in the Study of STEM Ph.D. Education," an article published in the July 16 issue of the journal Science.

Feldon acknowledges a need for more students to study STEM subjects at the graduate level. However, it is not enough to merely widen the pipeline of students, he said; the education they receive at the Ph.D. level must be the best possible.

"Success cannot be assured simply through the graduation of a sufficient number of doctoral students studying STEM," Feldon said. "These individuals must also possess robust competence and the ability to innovate. Without a deeper understanding of how to foster these traits to the highest levels in more people, we risk having a workforce that has received the best training available, but not the best training possible.

"Currently, instructional and programmatic doctoral training decisions are based on culture and tradition," he said. Most STEM programs are measured by students' and faculty members' self-reporting about the effectiveness of a program.

"In the status quo, assessment of student competence is primarily conducted through course grades, publication rates, degree completion and letters of recommendation from faculty," Feldon said.

"Although courses are a typical facet of doctoral study, they typically only occupy students during the first year or two. Their more advanced skills develop during extended work in labs, independent studies and internships, which typically lack anything more explicit than a credit/no credit grade and a possible reflection in a letter of recommendation. These letters of recommendation are subject to all of the pitfalls and biases that are discussed in the article," he said.

"Further, a study from my research group, which is currently under review at the Journal of Higher Education, documents faculty advisers' inability to predict their students' performance at a level significantly better than chance. In many cases, their predictions of student performance are significantly worse than if they had guessed randomly."

In addition, "proxy measures, such as publication rates and dissertations, are imprecise measures of individuals' skills, because they do not inherently represent the work of the student as an individual. Co-authors, mentors and colleagues often contribute substantially to the development of ideas, the execution of analyses and the coherence of written arguments, which render these measures unable to reflect narrowly on the independent competencies of the student," Feldon said.

"Even degree completion fails to reflect individual competence," he said. "Studies of attrition in doctoral programs clearly reflect a broad range of factors that contribute to a decision to leave an academic program, and most of those are unrelated to the challenge of the work or issues of competence.

 "For those who do attain a Ph.D., the award of degree merely reflects the aggregate completion of program requirements, which typically consist of grades in required courses and a successfully defended dissertation. As such, it combines the imprecision inherent in each of those measures."

Feldon and his research team are now working with a rubric, originally developed by co-author Briana Timmerman for undergraduate biology students at the University of South Carolina, that assesses research skills through students' written research proposals. Feldon's team has engaged an interdisciplinary group of faculty to adapt its assessment criteria so they apply across the sciences, technology fields, engineering and mathematics.

Preliminary results indicate that many of the skills developed by beginning doctoral students are remarkably similar across disciplines. Future research will determine the extent to which rubrics must be differentiated by discipline.

"Making use of an evidence-based system will allow us to optimize doctoral training practice to improve the skill base of newly minted Ph.D.s and to improve the efficiency of STEM training in terms of time and other resources," Feldon said.

According to Feldon, nearly 50 percent of students in graduate and undergraduate STEM programs drop out. 

"If we could determine more effective ways to train students and motivate them to persist, we could readily double the size of the STEM workforce," he said.

Doubling the workforce would be good news, as many of the challenges being faced today will have solutions rooted in science, technology, engineering and mathematics, he said.

Feldon's research is being conducted with a three-year, $700,000 grant from the National Science Foundation's Division of Research on Learning.
