Newswise — The approach, which was first implemented at U-M in 2016, could serve as a model for improving imaging assessments at other cancer centers where clinical trials are conducted, the researchers say.

Across the country, new and experimental therapies are available to patients with cancer through clinical trials. The effectiveness of a new drug or treatment approach is often evaluated based on measurements of changes seen in patients’ tumors using a variety of imaging techniques — such as MRI, CT and PET scans.

The problem, however, is that individual variability and unintentional bias on the part of the physicians interpreting the scans can skew the results. This has the potential to make new therapies seem either better or worse than they actually are. It can also affect patient care, leading to patients being taken off trials, or kept on them, when they shouldn't be.

To limit such distortions, the Rogel Cancer Center created a tumor response assessment core — or TRAC for short.

An assessment of TRAC, published in JNCCN, the journal of the National Comprehensive Cancer Network, showed that it did indeed lead to more consistent measurements.

Only a handful of cancer centers across the country have developed similar systems, with dedicated imaging cores and web-based platforms, the study notes.

“At any given moment, we have hundreds of people enrolled in clinical trials at our cancer center,” says study senior author Vaibhav Sahai, MBBS, a medical oncologist at Michigan Medicine who led the development of the new approach. “Before TRAC, the majority of the imaging analyses were done by medical oncologists, and this is very common across the country.”

There are two main downsides to this approach, Sahai notes.

The first drawback is that medical oncologists — that is, doctors who specialize in diagnosing and treating cancer patients — usually don’t receive the same degree of specialized training in quantitative imaging analysis as their colleagues in radiology and nuclear medicine. And it’s difficult, on top of a busy caseload of patients, to be deeply versed in the many evaluation methods employed across different trials.

“Different trials use different measurements of response, depending on the cancer type and the drug type,” Sahai says as he opens up the custom web portal developed at U-M for TRAC. “We have an investigator-initiated trial open with a drug that may cause tumor swelling as a result of damage to the cancer. This treatment response might be interpreted as progression, and one could end up thinking the drug has no value unless you check for tumor density or functional activity.

“Tumor swelling or ‘pseudo-progression’ is also possible in patients receiving immunotherapy medications, and correct use and application of response assessment criteria is crucial for accurate assessment of our patients receiving these novel drugs on clinical trials,” he adds.
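The article does not name the specific criteria Sahai is describing, but RECIST 1.1, the most widely used response standard for solid tumors, illustrates the kind of threshold rules involved. The following is a minimal sketch in Python, assuming only target-lesion diameter sums; the full criteria also cover non-target lesions, new lesions, lymph-node rules and confirmation scans.

```python
def classify_response(baseline_sum_mm: float, nadir_sum_mm: float,
                      current_sum_mm: float) -> str:
    """Classify target-lesion response from sums of longest diameters (mm),
    using RECIST 1.1-style thresholds. Illustrative sketch only."""
    if current_sum_mm == 0:
        return "CR"  # complete response: all target lesions resolved
    # Progressive disease: >=20% increase over the smallest sum on study
    # (the nadir), with at least a 5 mm absolute increase.
    if current_sum_mm - nadir_sum_mm >= max(0.2 * nadir_sum_mm, 5):
        return "PD"
    # Partial response: >=30% decrease from the baseline sum.
    if baseline_sum_mm - current_sum_mm >= 0.3 * baseline_sum_mm:
        return "PR"
    return "SD"  # stable disease: neither PR nor PD

# A lesion sum shrinking from 50 mm to 30 mm is a 40% decrease -> "PR".
print(classify_response(baseline_sum_mm=50, nadir_sum_mm=50, current_sum_mm=30))
```

Size-only rules like these are exactly where pseudo-progression causes trouble: a swelling tumor can cross the progression threshold even as it is dying, which is why density- or activity-based criteria, such as the Choi criteria used for gastrointestinal stromal tumors, exist alongside diameter-based ones.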

The second drawback is that medical oncologists’ familiarity with their patients may lead them, unintentionally, to be biased in their evaluations.

“We care about our patients. We want them to do well. We want to keep them on trials. We want to believe our care is helping them,” Sahai says. “So, it can be sometimes hard to do an unbiased assessment — which is what the clinical trial and the patient both deserve.”

To improve clinical trial response assessments, the Rogel Cancer Center launched TRAC in 2016 and created a new image analyst role, staffed with highly trained specialists. The center also created a new workflow that includes an initial review by the image analyst, followed by review by radiologists who specialize in that particular type of cancer. The process also includes a method for involving outside input to help resolve disagreements or ambiguities.
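To make that division of labor concrete, here is a schematic sketch of the tiered workflow as the article describes it; the reader objects and their assess() method are hypothetical, and the actual TRAC platform is a web portal, not this code.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    reader: str    # who made the call
    response: str  # e.g. "CR", "PR", "SD", "PD"

def trac_review(scan, analyst, radiologist, adjudicator) -> Assessment:
    """Tiered review: an image analyst measures first, a radiologist who
    specializes in that cancer type verifies, and an independent reader
    resolves any disagreement. Reader objects here are hypothetical."""
    first = analyst.assess(scan)       # quantitative measurements
    second = radiologist.assess(scan)  # subspecialty radiologist review
    if first.response == second.response:
        return second                  # concordant calls need no escalation
    return adjudicator.assess(scan)    # discordant calls get outside input
```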

At the time of the paper’s publication, TRAC had been used in more than 175 clinical trials across many types of cancer, assisting with assessments of more than 1,500 scans.

In the recent study analyzing TRAC’s effectiveness, researchers used records from 49 lung cancer patients treated at U-M between 2005 and 2015, before the new system was in place. The patients’ imaging scans were sent through the TRAC process, where they were reviewed by an image analyst and two board-certified radiologists; another radiologist also performed a separate, independent review. These results were then compared against the medical oncologists’ original assessments.

“We found substantial agreement between the TRAC analysis and the radiologists’ evaluations,” notes Sahai, an associate professor of internal medicine. “We found only moderate agreement between the assessments by medical oncologists and TRAC. These differences have the potential to affect patient treatment and trial outcomes.”
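“Moderate” and “substantial” are conventional labels for ranges of the kappa statistic used to quantify inter-rater agreement (roughly 0.41 to 0.60 and 0.61 to 0.80 on the Landis-Koch scale). As a sketch of how such agreement can be computed, assuming simple categorical response calls and Cohen’s kappa (the paper’s exact statistical method is not detailed here):

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired, non-empty calls"
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label at
    # random, given each rater's own label frequencies.
    expected = sum(counts_a[label] * counts_b[label]
                   for label in counts_a.keys() | counts_b.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical response calls for five scans from two readers.
oncologist = ["PR", "SD", "PD", "SD", "PR"]
trac_panel = ["PR", "PD", "PD", "SD", "SD"]
print(round(cohens_kappa(oncologist, trac_panel), 2))  # 0.41 -> "moderate"
```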

As an added benefit, the new approach greatly improved the efficiency of imaging analysis for cancer clinical trials at U-M: the turnaround time for tumor measurements decreased from 33 days to 3 days, the study team reported.

“The mission of TRAC was to create independent, unbiased and verifiable measurements of our patients’ response during clinical trials, and the results of our study show that this approach lives up to that goal,” Sahai says. “We published a detailed explanation of the workflow and the software we created in hopes of being a model for other cancer centers, and thus to help improve the accuracy of clinical trial results for patients everywhere.”

This study was supported by the National Cancer Institute (P30CA046592).

Additional authors on the paper include Katherine E. Hersberger, Mishal Mendiratta-Lala, Rocky Fischer, Ravi K. Kaza, Isaac R. Francis, Mirabella S. Olszewski, John F. Harju, Wei Shi, Frank J. Manion and Mahmoud M. Al-Hawary, all of U-M. 

Paper cited: “Quantitative Imaging Assessment for Clinical Trials in Oncology,” JNCCN. DOI: 10.6004/jnccn.2019.7331