Archives: Articles


(5) Risky Predictions and Damn Strange Coincidences: An Initial Consideration of Meehl’s Index of Corroboration

Kristine Y. Hogarty
University of South Florida

Abstract: The explication and empirical testing of theories are critical components of research in any field. Despite the long history of science, the extent to which theories are supported or contradicted by the results of empirical research remains ill defined. Meehl (1997) has proposed an index of corroboration (C) that may provide a standardized means of expressing the extent to which empirical research supports or contradicts a theory. The index is the product of a theory’s precision of prediction and the extent to which observed data are close to those predictions. Large values of C are expected from strong theories making tight, accurate predictions. Small values should result from (a) weak theories making weak predictions (regardless of their accuracy), or (b) strong theories that are not accurate.

Simulation methods were employed to evaluate the sampling behavior of C. Factors in the research design included the precision of prediction, degree of congruence between known population parameters and the theoretical prediction, sample size, psychometric reliability, and the influence of a confounding variable. The results suggest that precision of prediction is far more influential on the value of C than is the accuracy of prediction. As anticipated, less reliable measures yielded smaller values of C. An uncontrolled extraneous variable resulted in biased C values, but the direction of bias could not be anticipated. Surprisingly, sample size evidenced negligible influence on the average value of C, although sampling error was reduced with larger samples.
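
The abstract describes C as the product of precision and accuracy of prediction. Below is a minimal sketch of one way such an index can be computed, assuming Meehl’s Spielraum formulation: S is the range of conceivable outcomes, the theory predicts an interval of width I, and the observed value deviates from the prediction by D. The function and parameter names are illustrative, not Meehl’s exact notation.

```python
def corroboration_index(spielraum, tolerance, deviation):
    """Sketch of a Meehl-style corroboration index C.

    Assumes C is the product of an 'intolerance' term (how tight the
    theory's predicted interval is relative to the Spielraum) and a
    'closeness' term (how near the observed value falls to the
    prediction). Names and scaling are illustrative assumptions.
    """
    intolerance = 1.0 - tolerance / spielraum   # precision of prediction
    closeness = 1.0 - deviation / spielraum     # accuracy of prediction
    return closeness * intolerance

# Example: a tight prediction (interval 5 wide on a 100-point Spielraum)
# that misses by only 2 yields strong corroboration:
print(corroboration_index(spielraum=100, tolerance=5, deviation=2))  # ~0.93
```

On this reading, a tight prediction that is nearly met drives C toward 1, while either a wide tolerance interval or a large deviation drives C toward 0, consistent with the behavior the abstract reports.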

Citation: Hogarty, K. Y. (2000). Risky predictions and damn strange coincidences: An initial consideration of Meehl’s Index of Corroboration. Florida Journal of Educational Research, 40(2), 76-99.

Download: Hogarty.401.pdf

(4) Elaboration of HLM Growth Modeling Results

Richard L. Tate
Florida State University

Abstract: Standard reporting of the modeling of individual growth or change curves with hierarchical linear models (HLM) typically includes a focus on certain important results (e.g., the variance of the status of the outcome) at a single time in the growth curve, a time that is determined by the specification of the origin of the time scale. It is argued here that such reporting should be extended to show the variation of these important results over the time span of the study. The required procedure, involving only some simple matrix algebra and a technical graphics program, is illustrated with data for the nonlinear growth of reading ability for young children.
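
The “simple matrix algebra” the abstract refers to plausibly reduces to evaluating a quadratic form: if T is the estimated covariance matrix of the random growth coefficients and x(t) is the growth design vector at time t, the variance of true status at time t is x(t)′ T x(t). A minimal numpy sketch under that standard HLM growth-model reading, with purely hypothetical values (not Tate’s estimates):

```python
import numpy as np

# Hypothetical estimates from a quadratic growth model: T is the covariance
# matrix of the random growth coefficients (intercept, linear, quadratic)
# and sigma2 is the level-1 error variance. Values are illustrative only.
T = np.array([[4.0,  0.5,  0.10],
              [0.5,  1.0,  0.05],
              [0.10, 0.05, 0.02]])
sigma2 = 2.0

def status_variance(t, T):
    """Variance of true status at time t: x' T x with x = (1, t, t**2).

    Shifting the origin of the time scale amounts to evaluating this at a
    different t, so re-evaluating over a grid of times elaborates the
    single 'at the origin' result into a curve spanning the study.
    """
    x = np.array([1.0, t, t ** 2])
    return x @ T @ x

for t in [0.0, 1.0, 2.0, 3.0]:
    print(f"t = {t:.1f}: Var(true status) = {status_variance(t, T):6.2f}, "
          f"Var(observed) = {status_variance(t, T) + sigma2:6.2f}")
```

Plotting these values against t (with a graphics program, as the abstract suggests) displays how a result usually reported at a single time point varies over the full span of the study.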

Citation: Tate, R. L. (2000). Elaboration of HLM growth modeling results. Florida Journal of Educational Research, 40(2), 53-75.

Download: Tate.401.pdf

(3) Item Exposure Control in Computer-Adaptive Testing: The Use of Freezing to Augment Stratification

Cynthia Parshall
University of South Florida

J. Christine Harmes
University of South Florida

Jeffrey D. Kromrey
University of South Florida

Abstract: Computerized adaptive tests are efficient because of their optimal item selection procedures that target maximally informative items at each estimated ability level. However, operational administration of these optimal CATs results in the administration of a relatively small subset of items with excessive frequency, while another portion of the item pool is almost unused. This situation both wastes a portion of the available items and poses a security risk for testing programs that are available on more than a few scheduled test dates throughout the year. A number of exposure control methods have been developed to reduce this effect. In this study, we investigate the effectiveness of item “freezing” as a means of augmenting the Stratified-a method for exposure control. A second variation of the Stratified-a method investigated here concerns the use of differing numbers of strata. Using Monte Carlo procedures, we examine these methods under varying conditions of freezing and number of strata. Results are reported in terms of pool usage and test precision, both unconditionally and conditionally on ability.
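
The abstract does not detail the freezing rule. The following is a minimal sketch of one plausible implementation, in which an item is temporarily skipped (“frozen”) once its observed exposure rate exceeds a ceiling, layered on top of the usual Stratified-a rule of matching item difficulty to the current ability estimate. The function names, the 0.25 exposure ceiling, and the fallback behavior are assumptions, not the authors’ specification.

```python
def select_item(stratum_items, b_params, theta_hat, exposure_counts,
                n_examinees, max_rate=0.25):
    """Sketch of one selection step in a Stratified-a CAT with freezing.

    `stratum_items` holds the ids of items in the currently active
    a-stratum; `b_params` maps item id -> difficulty. Items whose
    observed exposure rate exceeds `max_rate` are temporarily frozen
    (skipped). All names and thresholds are illustrative assumptions.
    """
    thawed = [i for i in stratum_items
              if exposure_counts.get(i, 0) / max(n_examinees, 1) <= max_rate]
    if not thawed:  # every item frozen: fall back to the full stratum
        thawed = list(stratum_items)
    # Within an a-stratum, pick the item whose difficulty is closest
    # to the current ability estimate.
    chosen = min(thawed, key=lambda i: abs(b_params[i] - theta_hat))
    exposure_counts[chosen] = exposure_counts.get(chosen, 0) + 1
    return chosen

# Example: a stratum of five items, one already over-exposed (item 3 at
# 30/100 > 0.25 is frozen, so the next-best match, item 2, is selected).
b = {1: -1.2, 2: -0.4, 3: 0.1, 4: 0.6, 5: 1.3}
counts = {3: 30}
print(select_item([1, 2, 3, 4, 5], b, theta_hat=0.0,
                  exposure_counts=counts, n_examinees=100))  # -> 2
```

In a Monte Carlo study of the kind described, a step like this would run inside the simulated administration loop, with pool usage and precision tallied across simulated examinees.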

Citation: Parshall, C., Harmes, J. C., & Kromrey, J. D. (2000). Item exposure control in computer-adaptive testing: The use of freezing to augment stratification. Florida Journal of Educational Research, 40(1), 28-52.

Download: Parshall.401.pdf