Newswise — Researchers at the UC Davis MIND Institute have updated and validated a series of tests delivered on an iPad to accurately assess cognitive processing in people with intellectual disability. The validation opens new opportunities for more rigorous and sensitive studies in this population, which has historically been difficult to evaluate.
The widely used NIH Toolbox was designed for the general population and had not been systematically applied to people with intellectual disability. Intellectual disability is characterized by significant limitations in both cognitive functioning and adaptive behavior, such as everyday social and practical skills. The most common genetic causes of intellectual disability are Down syndrome and fragile X syndrome.
The article “Validation of the NIH Toolbox Cognitive Battery in Intellectual Disability,” published February 24 in Neurology®, the medical journal of the American Academy of Neurology, determined that the tests accurately measure cognitive skills in individuals with a mental age of 5 or above. Additional modifications are needed before the battery can be shown to measure skills equally well in people with lower levels of functioning.
“Our study assessed how the battery is performing in people with intellectual disability. We made some adaptations to the assessment so that it works well in this population,” said Rebecca Shields, the first author on the study and a UC Davis graduate student in human development working in the laboratory of David Hessl. “This is a big first step showing how it works in these individuals. Applying it consistently across this unique population means other researchers and clinicians can use it too.”
Manual developed to aid clinicians in using the test
To guide clinicians and researchers in using the Toolbox with this population, the group also developed and published a manual as a supplement to the NIH Toolbox Administrator’s Manual. The manual documents the researchers’ guidelines specific to assessing individuals with intellectual disabilities, allowing other researchers to administer the test in a standardized way. This project was led by Forrest McKenzie, a member of the Hessl laboratory, and the manual is available in the online article as well as on the NIH Toolbox website.
“People with intellectual disabilities can be very difficult to assess. Many of the existing measures we use to evaluate them have a lot of limitations,” said Hessl, senior author on the study and a professor in the UC Davis Department of Psychiatry and Behavioral Sciences. “Also, different investigators choose a wide variety of different tests for research, making it very hard to compare results in the field. We really hope that the NIH Toolbox cognitive tests can be used more uniformly, as a common metric.”
The lack of standardized tests also has had an impact on clinical trials of potential new treatments, he said.
“When we are trying to determine if people with disabilities are really improving, if their cognitive processing is getting faster or if they are responding to treatment, we face challenges because of measurement limitations,” Hessl said. “This Toolbox really tackles a lot of these limitations. It is well standardized and objective. And the test is given on an iPad, so the way each person responds to the questions should be more consistent and reliable.”
Test measures cognitive skills and executive function in just 30 minutes
The test, which typically takes about 30 minutes, measures a variety of skills, including memory, vocabulary, single-word reading and processing speed. It also measures executive function, such as the ability to shift from one thought to another or to pay attention and inhibit impulses. In the cognitive flexibility test, the individual is asked to match items by shape. But the rules of the game then switch, and they are asked to match the items by color.
The test also measures receptive vocabulary, or how words are understood. For example, the test taker hears a word, sees four pictures and then selects the picture that matches the word. The battery also measures memory by presenting a picture story in sequence, then asking the test taker to put the story back together in the same order.
A list-sorting task on the test requires the individual to remember a group of items they have seen on the screen and repeat them back in a certain order. A processing-speed task evaluates how quickly the individual can compare different patterns that appear on the screen.
Researchers found that the battery of tests was feasible for a very high percentage of individuals with a mental age of 5 or higher: individuals in the study did not refuse to participate, were able to respond to the tests as designed and understood what the tests required. The battery also proved reliable, with scores that remained consistent when individuals were re-tested. Hessl said these properties are especially important in determining the value and utility of the battery, such as how useful it may be in detecting changes related to treatment.
Shields said the team is now studying how well the test battery picks up cognitive changes over development by bringing the same study participants back two years later.
Funding for the study came from the NICHD (R01HD076189), the Health and Human Services Administration of Developmental Disabilities (90DD0596), the MIND Institute Intellectual and Developmental Disabilities Research Center (U54 HD079125) and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1 TR000002.
Other authors on the study include: Andrea Drayton and Stephanie Sansone of UC Davis; Aaron Kaat and Richard Gershon of Northwestern University; Jeanine Coleman and Karen Riley of the University of Denver; Claire Michalak and Elizabeth Berry-Kravis of Rush University Medical Center; and Keith Widaman of the University of California, Riverside.