Coverage of conflicting breast and lung cancer screening studies puts the science news “see-saw” on tilt

What duties do scientists and journalists have in reporting findings to the public?


Readers who follow health and science news may feel a little tipsy given the “see-saw” effect that several stories have demonstrated over the past year: First, a federal health-advisory panel announced last November that mammograms might do more harm than good for women before age 50.


Then, less than a year later, a new study out of Sweden found that mammograms cut the breast cancer death rate for women in their forties by 26 percent.


Now, on Nov. 5, a news story likened those findings to the results of a large National Cancer Institute trial, which showed that CT scans reduced lung cancer deaths by 20 percent among those at highest risk for the disease. That same story, from National Public Radio, also reported that people who get screened face risks strikingly similar to the ones women face with mammograms: a 24 percent chance of a false positive and, perhaps, a needle needlessly plunged into the chest for a biopsy.


Patients then get thrown into an emotional blender, as they wait and wonder whether or not they have cancer. … Dizzy yet?


Well, that’s the scientific endeavor for you: A group of investigators tests a hypothesis and publishes results that uphold or shoot down previous findings, or that present new information altogether.


Science communications veteran Joann Rodgers has seen it time and again throughout her accomplished career—first, as an award-winning journalist and columnist, and for at least the last 25 years, in top roles for the media relations and public affairs division at Johns Hopkins Medicine.


This “see-sawing” is certainly frustrating, not only for the public, but also for journalists and scientists, according to Rodgers. In 2003, she was part of a team that published a study incorporating the opinions of scientists, journalists and consumers of science news about the accuracy, balance and content of a sampling of reports on genetic links to diseases.


The study, published in the journal Science Communication, remains one of the few in the academic literature to systematically assess mass media reporting based on the expectations of researchers, reporters and the average reader. One of the co-authors was Gail Geller, a core faculty member at the Johns Hopkins Berman Institute of Bioethics who studies the ethical implications of genetic technologies.


“It should be no surprise to scientists or journalists that contradictions will occur,” says Rodgers, a fellow of the American Association for the Advancement of Science and past president of the National Association of Science Writers.  “Part of the responsibility for publicly communicating science is to help the public understand that scientific truth is a journey.”


Who’s right?


In November 2009, the U.S. Preventive Services Task Force recommended that routine biennial breast cancer screening be optional for women under 50, on the grounds that for them the harms of screening may outweigh its modest net benefit. According to the task force’s analysis, only one death would be prevented for every 1,904 women in their forties who were screened.


Moreover, the recommendation rescinded earlier guidance that had called for yearly mammograms in that age bracket, and it flatly contradicted longstanding advocacy by the American Cancer Society urging earlier screening.


In the lung cancer trial, more than 53,000 current or former heavy smokers, ages 55 to 74, were screened with either CT scans or chest X-rays. The study found that lung cancer deaths were 20 percent lower among those who received CT scans than among those screened with X-rays.


The NPR story reported that another recent study found that the false-positive rate for lung CT (computerized tomography) is 33 percent among those who have had two screening tests—higher than the National Cancer Institute trial found.


Away from the science pages, an article that cites sources offering opposing views on, say, the economy, politics or sports is the textbook example of journalistic balance, according to Rodgers. But the duty to present “both sides of the story” can at times do the public a disservice when the “other side” lacks substantive scientific support, she adds.


Also, can scientists and health leaders effectively communicate discoveries through daily journalism at all, especially when research in a field is still ongoing and understanding isn’t definitive? And will journalistic traditions always result in news stories that play up tension and controversy?


Rodgers’ response to both questions is “yes.” In general, journalists will continue to focus on contradiction, conflict and oddity, because news is defined as “man bites dog, not dog bites man.” Likewise, many research institutions’ press offices follow a formula of sorts that has become familiar to editors and reporters who need the news summed up quickly.


But whereas a classroom or library would be the obvious place to educate oneself, Rodgers says most news stories primarily seek to inform or provoke public debate, and to engage as broad an audience as possible.


In other words, if a story entices people to dig deeper on their own, that’s a bonus.


Education vs. engagement


Education and engagement are like two different languages, according to Rodgers. The devices of the latter include summarizing information, placing emotionally and intellectually appealing details prominently in a story, and then providing contextual facts and qualifiers that put the report in perspective, though often only toward the end.


Granted, the results of the study that she and Geller helped author reinforced common complaints about the media. They found that one-third of the science stories that were assessed had exaggerated the benefits of a discovery, and only one-third of them presented a balance of expert opinion. But the study also concluded that the scientists, journalists and members of the public who were surveyed—11, 16 and 23, respectively—were in substantial agreement about what a good science news story should contain.


That finding, Rodgers says, is where future efforts to help the public make sense of the see-saw effect should begin. Average readers simply want to know whether research has produced conclusive findings that show them how to improve or protect their own health and the health of the people they love.


In most cases, one paper alone cannot promise that—let alone the news story that is merely trying to sum up the study for the casual reader. Journalistic reporting presents a snapshot, according to Rodgers, whereas those seeking to educate themselves would do best to read the study itself, and then perhaps supplement the findings with a book or other resource about the field.


But that doesn’t mean that scientists, in an effort to make the most direct impact possible, should just bypass the public and go straight to policy-makers, either. “Scientists have unprecedented opportunities to reach the public with good information,” Rodgers says, “and increasingly, the National Institutes of Health and other funding organizations see it as the scientists’ responsibility to account for their use of public funds.”


In a feature just published by Nature on how scientists feel about working with the press, astronomer Steve Squyres expressed Rodgers’ sentiment in stronger terms. The oft-quoted Cornell University professor heads NASA’s Mars Rover science team, and so has spoken at many press conferences.


“We were handed more than $800 million for this project,” Squyres told Nature. “I didn’t feel we had the option to say, ‘No, we don’t feel like doing media today.’ They’re the conduit to the people you’re doing this for in the first place.”


Contributors
Joann Rodgers
Michael Pena
