
Research Topics:
Test Development

Overview

Valid and reliable assessments result from the application of systematic development efforts by both content experts and psychometricians. This is a multistep process that includes carefully defining the construct to be measured, developing a test blueprint aligned with the purpose of the assessment, thoughtful application of the art and science of item writing, and empirical data collection to support item and test validity. Research in test development is concerned with improved methods to enhance the validity and reliability of assessments.

Featured Article

Significant conceptual and empirical work over the last 10 years highlights the importance of understanding and specifying the knowledge, skills, and other abilities that achievement test items elicit from examinees. This paper discusses the need for a coherent and comprehensive understanding of how examinees interact with items, and presents several coding frameworks for identifying item response demands.

Aligning Achievement Level Descriptors to Mapped Item Demands to Enhance Valid Interpretations of Scale Scores and Inform Item Development

Achievement level descriptors (ALDs) delineate the knowledge, skills, and abilities found in the standards that a student should possess at each level of achievement. This study examines the relationships among various cognitive and contextual coding frameworks and item difficulty when items are reviewed in test administration order, with the ultimate goal of informing improved procedures for developing ALDs.

IRT Estimated Reliability for Tests Containing Mixed Item Formats

Various reliability coefficients of internal consistency have been proposed that make different assumptions about the degree of test-part parallelism, with implications for mixed-format tests. This study compares IRT model-derived reliability coefficients with observed values for mixed-format tests.
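As one concrete point of reference for what an internal-consistency coefficient looks like, the sketch below computes coefficient alpha for a small simulated mixed-format test. This is an illustration only, not the method of the paper: the data, item counts, and scoring (four dichotomous multiple-choice items plus two constructed-response items scored 0-3) are all hypothetical.

```python
# Illustrative sketch (hypothetical data, not from the study above):
# coefficient alpha, a classical internal-consistency estimate,
# computed for a simulated mixed-format test.
import numpy as np

def coefficient_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=200)                                   # simulated examinees
mc = (ability[:, None] + rng.normal(size=(200, 4)) > 0).astype(float)   # 0/1 MC items
cr = np.clip(np.round(1.5 + ability[:, None] + rng.normal(size=(200, 2))), 0, 3)  # 0-3 CR items
scores = np.hstack([mc, cr])
print(round(coefficient_alpha(scores), 3))
```

Because every simulated item loads on the same ability, alpha comes out clearly positive here; a model-derived IRT reliability for the same data would generally differ, which is the kind of comparison the study examines.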

A Guide for Effective Assessment

This guide serves as a starting point and a continuing reference for school board members, educators, and policy leaders seeking guidance on effective assessment.

Accuracy of Test Scores: Why IRT Models Matter

This paper describes different Item Response Theory (IRT) models for both multiple-choice items and constructed-response items.
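To make the distinction concrete, the sketch below shows two commonly used IRT models of the kind the paper surveys: a three-parameter logistic (3PL) model for a multiple-choice item and Samejima's graded response model (GRM) for a polytomous constructed-response item. All parameter values are hypothetical and chosen only for illustration.

```python
# Hedged illustration (parameter values are hypothetical, not from the paper):
# two common IRT models — 3PL for multiple-choice, graded response for
# constructed-response items.
import math

def p_3pl(theta: float, a: float, b: float, c: float) -> float:
    """3PL: probability of a correct multiple-choice response.
    a = discrimination, b = difficulty, c = pseudo-guessing lower asymptote."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def p_grm(theta: float, a: float, thresholds: list[float]) -> list[float]:
    """Graded response model: probability of each ordered score category
    for a constructed-response item with ordered category thresholds."""
    # Cumulative probability of scoring at or above each successive threshold.
    cum = [1.0] + [1 / (1 + math.exp(-a * (theta - t))) for t in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# At theta == b, the 3PL probability is c + (1 - c)/2.
print(round(p_3pl(0.0, a=1.2, b=0.0, c=0.2), 3))   # -> 0.6
# GRM category probabilities are non-negative and sum to 1.
print([round(p, 3) for p in p_grm(0.5, a=1.0, thresholds=[-1.0, 0.0, 1.0])])
```

The guessing parameter `c` is what distinguishes the multiple-choice model: even a very low-ability examinee has a nonzero chance of answering correctly, which a constructed-response model need not allow for.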

