Bibliography prepared for the Campbell Collaboration Briefing Conference on Place Randomized Trials in Education, Crime, Social Welfare, and Health.
This project was supported under contract number ED-01-C0-0039 (0002), as administered by the Institute of Education Sciences, U.S. Department of Education.
This report reviews the empirical evidence concerning the relation between teamwork and patient safety.
The following discussion compares the purpose, strategy, and effectiveness of two distinct categories of MTT: those that are primarily simulator-based and those that are primarily classroom-based. Data collected from MTT course observations, participant questionnaires, and instructor interviews are reported. Finally, we summarize the state of the science and propose a series of research-based propositions for improving the future of MTT.
Over the years, the FAA has partnered with industry to develop programs for reporting, classifying, and analyzing safety-related data, but none has been able to integrate data from multiple sources. We are developing a generalized Human Factors taxonomy for classifying de-identified ASAP incident reports, AQP performance ratings, and FOQA output. Eventually, this taxonomy will be embedded in a series of searchable computer databases that speak a common language, enabling trends to be identified across data sources.
Because poor LOS construct validity can have real-world effects on pilot training and performance, assessing and improving the construct validity of Line Operational Simulations is more than just an academic or scientific issue. It is also a practical and political issue in that it involves multiple stakeholders who may have competing concerns. These include safety, justice/fairness, technical feasibility, and cost-effectiveness (Austin, Klimoski, & Hunt, 1996). Therefore, we recommend that all potential stakeholder groups be involved in identifying and improving the construct validity of Line Operational Simulations. These groups may include pilot unions, training staff, flight standards staff, and officials from the regional FAA offices. Moreover, all groups must be prepared to compromise some of their own goals/needs to achieve a balanced solution. In the end, only by working together can industry address the issue of LOS construct validity, and by extension, the quality of pilot crew training and evaluation.
The purpose of this study was to assess the relative effectiveness of different approaches to checking pilot performance at the end of training: the maneuver validation (MV) and the Line Operational Evaluation (LOE). Because the LOE provides greater contextual cues and integrates CRM skills with technical skills, it should simulate typical line operations more accurately than a traditional maneuver validation. Therefore, we hypothesized that pilots would rate the LOE as more useful than the MV. The results presented below are part of a much larger survey of airline pilots’ experiences in and reactions to their professional training (Baker et al., 2002).
The purpose of this study was to assess the relative effectiveness of four different approaches to debriefing team performance: team debrief with videotape, team debrief without videotape, instructor debrief with videotape, and instructor debrief without videotape. We hypothesized that the four approaches would not be equally effective. However, the lack of consensus in the literature made it impossible to hypothesize whether team-led or instructor-led debriefs would be more effective. Based on our personal experience, we hypothesized that debriefings that incorporated videotape would be perceived as more effective than those that did not.
AIR, in partnership with Policy Analysis for California Education (PACE) and EdSource, conducted the legislatively mandated independent evaluation of the Immediate Intervention/Underperforming Schools Program (II/USP) and the High Achieving/Improving Schools Program (HA/ISP) of the Public Schools Accountability Act (PSAA) of 1999.
This paper describes the framework underlying our evaluation design for a large-scale, multifaceted school reform initiative and identifies some of the methodological challenges inherent in work of this kind.