American Institutes for Research

29 Dec 2007
Report

Lessons Learned from U.S. International Science Performance

Steve Leinwand and Elizabeth Pollock

Introduction

All three major international assessments in science were given during 2003, an alignment that occurs only once every 12 years and offers a rare opportunity to compare performance across different grade and age levels. On the grade 4 Trends in International Mathematics and Science Study (TIMSS-4), U.S. students ranked in the top quarter of all participating countries (6th of 25). On the grade 8 assessment (TIMSS-8), the U.S. ranking improved slightly, reaching the top fifth of all participants (9th of 45). On the Program for International Student Assessment (PISA), which is administered to 15-year-olds, the U.S. rank against all participating countries fell precipitously, to below the international average (22nd of 40). On the face of it, these results suggest that U.S. science rankings exhibit a sharp falloff beginning at secondary school and that U.S. efforts to strengthen international science performance should focus on improving secondary school science.

However, the rankings of U.S. international science performance against all the countries participating in each of the three international assessments have two important limitations. First, U.S. performance is compared against that of countries representing a mix of industrialized and developing economies. A better gauge of U.S. students’ success in science would compare their performance with that of only the industrialized countries that are the United States’ economic competitors. Second, the mix of industrialized nations changes across the three international assessments. Thus, the U.S. science rankings may reflect the changing mix of countries participating in each assessment rather than real changes in U.S. relative performance.

The National Science Foundation discussion of the TIMSS 2003 science results ignored the comparison group issue when it focused on U.S. 2003 science score improvement compared with the scores of all countries that participated in the prior 1995 TIMSS assessments: U.S. “fourth grade students remained fifth among 15 countries that participated in both 1995 and 2003” and “Eighth-grade students…raised their overall standing among the 21 countries that participated in TIMSS in both 1995 and 2003.” However, these U.S. comparisons are based on two different country sets and include different mixes of industrialized and nonindustrialized countries.

The current study corrects for these weaknesses in measuring U.S. science performance by recomputing U.S. rankings against only the common set of 11 other industrialized countries participating in all three 2003 science exams. This approach of employing a common comparison group of industrialized countries was previously used to assess U.S. rankings on the international mathematics assessments, and it produced significantly different U.S. results compared with the full rankings against all countries participating in each assessment.
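The recomputation described above amounts to restricting each assessment's league table to a fixed comparison group and re-ranking within it. A minimal sketch of that idea follows; the country names and scale scores are invented for demonstration only and do not come from the report.

```python
# Illustrative sketch: re-ranking the U.S. against a fixed comparison group
# rather than against all participants. All names and scores are hypothetical.

def rank_of(country, scores):
    """Return the 1-based rank of `country` when scores are sorted descending."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return ordered.index(country) + 1

# Hypothetical average scale scores for one assessment.
all_participants = {
    "Country A": 552, "Country B": 548, "United States": 536,
    "Country C": 530, "Country D": 489, "Country E": 465,
}

# Suppose only A and C belong to the common industrialized comparison set
# (Country B does not), so the subset is A, C, and the United States.
common_group = {"Country A", "Country C", "United States"}
subset = {c: s for c, s in all_participants.items() if c in common_group}

print(rank_of("United States", all_participants))  # → 3 (3rd of 6 overall)
print(rank_of("United States", subset))            # → 2 (2nd of 3 in the group)
```

The same U.S. score yields a different rank and a different denominator once the comparison group is held fixed, which is the study's central adjustment.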

Along with examining U.S. international science performance, this study examines several country background variables that research suggests may be important in explaining students’ science outcomes on the international assessments. One such variable is students’ mathematics performance. A recent study of U.S. college students found that students’ secondary mathematics course taking was associated with science performance in college. The current study provides an independent basis on which to examine the relationship between mathematics and science performance by comparing a country’s international mathematics performance with its international science performance at the elementary and secondary levels.
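A cross-country comparison of this kind reduces to correlating countries' mean mathematics scores with their mean science scores on the same assessment. The sketch below computes a Pearson correlation over invented country means; all values are hypothetical and are not taken from the report.

```python
# Illustrative sketch: correlating countries' mean mathematics and science
# scores on one assessment. All values below are hypothetical.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical country means, paired by country: (mathematics, science)
math_scores = [565, 540, 528, 512, 498, 471]
science_scores = [558, 545, 520, 515, 490, 480]

print(round(pearson(math_scores, science_scores), 3))
```

A coefficient near 1 would indicate that countries strong in mathematics tend also to be strong in science, which is the pattern the study probes at both the elementary and secondary levels.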

Along with a country’s mathematics performance, three other sets of background variables are examined: science curriculum exposure, science preparation of teachers of science, and student characteristics. Because PISA rotates the emphasis of its assessments among reading, mathematics, and science, the 2003 PISA assessment collected only a limited set of science background variables. Hence, this report relies primarily on the TIMSS background data.

A cautionary note is in order when interpreting the correlation between a country’s science performance and its background characteristics. Large-scale international assessments cover widely different national systems and offer a natural laboratory in which to identify characteristics of science education systems associated with performance differences. However, the correlations are only suggestive of an association; they are not, by themselves, evidence of causation. The bivariate correlations do not control for the many unobserved country differences that could influence them. Further studies of the effects of these characteristics within particular countries are required to demonstrate their applicability.

Full report: Lessons Learned from U.S. International Science Performance (PDF)

Related Projects

Project

Trends in International Mathematics and Science Study (TIMSS)

The Trends in International Mathematics and Science Study (TIMSS) is an international comparative study of the mathematics and science achievement of fourth- and eighth-graders in the United States and students in the equivalent of fourth and eighth grade in other participating countries.

Project

Program for International Student Assessment (PISA)

The Program for International Student Assessment (PISA) is a system of international assessments that focuses on 15-year-olds' capabilities in reading literacy, mathematics literacy and science literacy.

Related Work

13 Dec 2007
News Release

New Analysis Shifts View of U.S. Students’ Science Performance Compared with Other Countries, Offers Clues to Differences

Shattering the myth that U.S. students score substantially above other countries in science in 4th and 8th grades, but then fall precipitously to below average in the 10th grade, a new study by the American Institutes for Research (AIR) shows there is actually a steady decline, not a sudden drop, in performance as students progress through school. Comparing the U.S. to 12 similar industrialized countries, the report also offers clues to what may lead to the differences and declines.

Further Reading

  • Reassessing U.S. International Mathematics Performance
  • New Analysis Shifts View of U.S. Students’ Science Performance Compared with Other Countries, Offers Clues to Differences
  • New Study Finds U.S. Math Students Consistently Behind Their Peers Around the World
  • Expressing International Educational Achievement in Terms of U.S. Performance Standards
  • New AIR Study Compares the Quality of U.S. Math Instruction with Singapore, a Recognized World Leader

Contact

Steve Leinwand

Principal Researcher

