How are custom measures developed?
My colleagues who are measurement specialists lament that the most common way custom measures get developed is for someone on the project team to “just start writing items.” If this approach actually worked well for making measures, I wouldn’t be so hard on it, but the truth is that such efforts mostly end in failure. It’s a bit like walking into the kitchen without a recipe and deciding to “just put some things in a pot”: in both cases, what we need is a plan, and spontaneity is not terribly valuable. Luckily, there are several viable plans we can turn to for making custom measures, such as Evidence-Centered Design, the Berkeley Evaluation and Assessment Research (BEAR) System, the sequential mixed methods designs of Creswell and Plano Clark, or Schensul and LeCompte’s generous multivolume Ethnographer’s Toolkit. I believe we can do a fair job of combining the best of these frameworks into a four-part workflow, and it is this workflow I recommend for evaluation consultants. Here are the four phases I suggest:
- Construct selection: choosing what we want to measure and articulating good reasons for our choice
- Construct definition: exploring the construct’s conceptual meaning and breadth, and articulating a normative rationale for measuring it
- Instrumentation: authoring the instrument according to design principles that reflect our commitments to participants, such as accessibility
- Validation: evaluating the quality of the instrument itself so that it can be improved
Experienced evaluators will recognize each of the above steps as a critical part of developing custom measures, even if they have never listed them out like this. Typically, construct selection happens implicitly as we build logic models, choosing to focus on the key indicators in the outputs and outcomes columns. Construct definition is something we do, whether we acknowledge it or not, in our discussions about what we are measuring; in my view, it is best handled early and explicitly. If we have put serious work into the first two phases, a smooth instrumentation process is our reward. Validation requires us to collect pilot data using our measure, often in the form of cognitive interviews and field tests of the instrument. Validation usually kicks off another round of the process, going all the way back to asking whether we are indeed measuring the right construct (sometimes we aren’t). This might all sound exhausting, but I can’t think of a single one of these four steps that it would be a good idea to skip.