What if We Worried Less About the Accuracy of Coronavirus Tests?

Accuracy is everything, typically, when we take a diagnostic test — an incorrect result can lead to anguish and erroneous, if not harmful, treatment. Currently the most reliable way to identify a coronavirus infection is by a polymerase chain reaction (P.C.R.) test: A swab, usually taken from the nasal passage, produces a sample that is then sent to a specialized laboratory. P.C.R. tests, which can detect minute amounts of genetic material from the virus, cost upward of $100; in ideal circumstances, they take just hours to analyze. But because of high demand, supply shortages and other issues, many commercial labs are taking more than a week to process them. That means a positive test often comes back too late to enable contact tracers to notify those who have been exposed before they might in turn infect others. In these circumstances, the diagnosis is useful only for making personal health decisions and providing data on the rate of infection in a community.

In a July 21 report in JAMA Internal Medicine, the C.D.C.’s response team for Covid-19 estimated that nine out of 10 infections are not being identified — and obstacles to getting tested are probably a major reason. To capture more of those cases, many of which may not show obvious symptoms, says Daniel Larremore, a computational biologist at the University of Colorado, Boulder, “we need to shift our thinking.” Specifically, he says, we need to go from prioritizing the accuracy of individual test results to prioritizing the ability of a testing system to reduce the rate of the virus in a given population — even if that results in more misdiagnoses.

To see how this could work in practice, consider one strategy for increasing testing capacity: pooling samples for analysis. Suppose one person in 100 has the virus. Testers take and label a nasal swab from each of the 100 people; a portion of each sample is saved, and the rest is grouped with the samples taken from nine other people. The lab then runs 10 analyses, one for each group of 10 samples. Nine of those will return negative results, a determination given to all 90 members of those groups. The lab then retests each saved individual sample from the positive group to find the infected member. Over all, the lab has conducted 20 analyses, rather than the 100 needed to test everyone individually.
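The arithmetic above can be sketched in a few lines of Python. This is an illustration of the pooling scheme the article describes (sometimes called Dorfman pooling), not code from any lab; the function name and the choice of which person is infected are hypothetical.

```python
def pooled_test_count(num_people, pool_size, infected):
    """Return the number of lab analyses needed with one round of pooling.

    `infected` is a set of indices of infected people. A pool tests
    positive if it contains at least one infected sample; every member
    of a positive pool is then retested individually from the saved
    portion of their swab.
    """
    analyses = 0
    for start in range(0, num_people, pool_size):
        pool = range(start, min(start + pool_size, num_people))
        analyses += 1  # one analysis for the whole pool
        if any(i in infected for i in pool):
            analyses += len(pool)  # retest each member individually
    return analyses

# One infected person among 100, pools of 10: 10 pool analyses
# plus 10 individual retests in the positive pool = 20 in total,
# versus 100 analyses if everyone were tested one by one.
print(pooled_test_count(100, 10, {42}))  # 20
```

With no infections at all, the same setup would need only 10 analyses — which is why, as the next paragraph notes, pooling pays off most when prevalence is low.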

At a certain threshold, diluting samples by combining them with so many others might make the virus harder to detect, but the technique has proved effective in batches of five for P.C.R. testing. Nebraska was able to stretch its supplies by pooling, except among populations with high infection rates, which cause more groups to test positive and thus require more individual assays. “That can change week to week and possibly day to day,” says Jonathan Kolstad, an economist at the University of California, Berkeley. “Florida, three months ago, you could have done pretty big pools. Now you wouldn’t want that.” But, he and his colleagues note in a working paper published in July by the National Bureau of Economic Research, computer modeling could use factors like a person’s age, job, ZIP code and social networks to classify people by their risk of infection and group their samples accordingly. In theory, as more people with the virus are removed from circulating among others, the infection rate will go down and the pools can be expanded, making testing more efficient. Consequently, the economists’ analysis showed, testing daily would cost only twice as much as testing monthly.
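The trade-off Kolstad describes — bigger pools when prevalence is low, smaller ones when it is high — follows from the standard expected-cost formula for one round of pooling. The sketch below is a textbook illustration under a simplifying assumption (each person independently infected with probability p), not the economists' actual model:

```python
def expected_tests_per_person(p, k):
    """Expected analyses per person with pools of size k when each
    person is independently infected with probability p.

    One pooled analysis is shared by k people (1/k per person); the
    pool tests positive with probability 1 - (1 - p)**k, in which
    case each member also needs an individual retest.
    """
    return 1 / k + (1 - (1 - p) ** k)

# Low prevalence rewards big pools; high prevalence punishes them.
for p in (0.01, 0.10):
    best_k = min(range(2, 51), key=lambda k: expected_tests_per_person(p, k))
    print(f"prevalence {p:.0%}: best pool size {best_k}, "
          f"{expected_tests_per_person(p, best_k):.2f} tests per person")
```

At 1 percent prevalence the optimal pool is roughly twice as large as at 10 percent, and far fewer analyses are needed per person — which is why driving the infection rate down lets the pools expand and makes each round of testing cheaper.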
