In the previous blog post, I explained why evaluators encourage using a validated survey measure in place of a tailor-made questionnaire. I unpacked what validated measures are and what they can do for an evaluation. This week, I will share how an evaluator selects a validated measure and how programs should implement it.
How does an evaluator select a validated measure?
To select a validated measure, the evaluator will first want to get clear on what needs to be measured by the program. In the technical language of psychometrics, we need to define the construct. For example, is the program interested in changing attitudes or changing behavior? These two different constructs will probably require different measures – there is a world of difference between “I don’t think smoking is good for my health” and “I have stopped smoking.”
Once we have a working definition of the construct, the evaluator can begin to review the available measures. Often, there are a dozen or more published measures that I consider when choosing one for an evaluation. There will be complex tradeoffs to consider, including the length of the tool and the quality of evidence available for its validity. Often, I am looking for whether the tools are validated for specific populations. For example, one of my favorite tools has undergone separate validation for people receiving inpatient mental health treatment, French speakers, and Latin American migrants to the US who speak any one of a dozen varieties of Spanish. By setting criteria in advance, I can rank tools by their appropriateness for the given task and present options to the evaluation team.
How should programs implement a validated measure?
If you are implementing a validated measure as part of an evaluation, there are a few pointers to keep in mind.
- Follow the administration instructions carefully. For example, the instructions given by the evaluator may provide specific language to answer questions posed by participants, such as “Please answer with the first thing that comes to your mind.” Following these standard instructions is important to make sure that the scores mean the same thing for all participants.
- Report any irregularities in implementation to the evaluator. This gives the evaluator a chance to check whether the irregularities posed a problem or not. For example, if multiple sites administered a validated measure, but one of the sites didn’t follow instructions, the evaluator can use statistical methods to check whether this impacted the results.
- Do not change the text of a validated measure, for example by changing the wording of an item. Changing the text of the items means that the questionnaire is no longer the same one that was validated. Even seemingly small changes can change the way that participants interpret the item, and thus change what the scores mean.
- Use official translations of the measure, if available. Just like the original version, official translations go through the same kind of qualitative and quantitative testing to ensure that they meet the statistical standards of the original measure. Translating the measure “on the fly” should be avoided. The evaluator should ask you in advance which languages are needed for the questionnaires – validated translations are often available.
- Ask a psychometrician to score the measure for you. Some measures require the application of formulas or special statistical software to obtain a score. Some translations into other languages even have different scoring formulas. A psychometrician will be able to apply the right scoring procedure and interpret the scores.
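To make the scoring point concrete, here is a minimal sketch of why scoring is not always a simple sum. The five-item measure, its 1–5 response scale, and the positions of the reverse-worded items below are all invented for illustration; a real measure's scoring manual specifies its own rules.

```python
SCALE_MAX = 5           # hypothetical items answered on a 1-5 scale
REVERSE_CODED = {1, 3}  # hypothetical reverse-worded items (0-indexed)

def score_response(items):
    """Sum five item scores, flipping the reverse-coded items first."""
    if len(items) != 5:
        raise ValueError("expected 5 item responses")
    total = 0
    for i, value in enumerate(items):
        if not 1 <= value <= SCALE_MAX:
            raise ValueError(f"item {i} out of range: {value}")
        if i in REVERSE_CODED:
            # Reverse-code so a 1 becomes a 5, a 2 becomes a 4, etc.
            value = SCALE_MAX + 1 - value
        total += value
    return total

print(score_response([4, 2, 5, 1, 3]))  # → 21 (the 2 and 1 are flipped)
```

A naive sum of the same responses would give 15, not 21 – one small example of how scoring by hand, without the official procedure, can silently produce the wrong number.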
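The earlier point about reporting irregularities can also be illustrated. Here is one simple way an evaluator might screen multi-site data for a site whose scores look unusual: compare each site's mean to the pooled mean of the other sites, in standard-deviation units. The site names, scores, and threshold are invented for illustration, and a real evaluator would choose a method suited to the design.

```python
from statistics import mean, stdev

# Hypothetical scores from three sites; Site C didn't follow instructions.
scores_by_site = {
    "Site A": [18, 21, 20, 19, 22, 20],
    "Site B": [19, 20, 21, 18, 20, 22],
    "Site C": [12, 14, 11, 13, 12, 15],
}

def flag_outlier_sites(data, threshold=1.5):
    """Flag sites whose mean differs from the mean of the other sites
    by more than `threshold` standard deviations of the full sample."""
    all_scores = [s for scores in data.values() for s in scores]
    sd = stdev(all_scores)
    flagged = []
    for site, scores in data.items():
        others = [s for name, v in data.items() if name != site for s in v]
        if abs(mean(scores) - mean(others)) / sd > threshold:
            flagged.append(site)
    return flagged

print(flag_outlier_sites(scores_by_site))  # → ['Site C']
```

A flag like this is a prompt for follow-up, not a verdict: the evaluator still has to judge whether the difference reflects the implementation problem or a genuine difference between sites' populations.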
With these tips in hand, you should have a better idea of what to pay attention to when using a validated measure. In a future post, I’ll take it one step further and explain why using a validated measure isn’t always the way to go.