Misconception

February 7, 2013

by Leah Ramsay


I was told it was a “miracle drug.”  Regular infusions of Remicade seemed like nothing compared to the suffering I’d been through before being diagnosed with Crohn’s Disease, so I gladly began the loading dose process. With that treatment plan in place, I started to see a light at the end of the tunnel, leading back to a normal life I’d almost forgotten.  Perhaps I could eventually cut back on the cocktail of antibiotics, immune suppressants and other drugs I took daily. Enjoy an actual cocktail. I felt lucky to live in the Remicade era, and tried not to think too much about those who’d missed it.


But it wasn’t long before my symptoms returned – the wonder drug stopped working. ‘This happens sometimes,’ my doctor said, and informed me that a clinical trial was currently underway at the practice, in the final stage before FDA approval.  Humira had been used widely for arthritis, but was being approved for Crohn’s.  Other treatments were at much earlier investigative stages; I’d once taken a survey that referenced them: Would you be willing to try a drug if 50 out of 1,000 people who took it died? If the miracle drug had failed, I didn’t see much choice. I signed up for the Humira trial.

Here is where some bioethicists would say I had a “therapeutic misconception,” thinking that the clinical trial in which I was enrolled was part of the therapy for my illness.  As recently as 2006, Franklin Miller argued, “Medical care has a personalized focus. It is directed to helping a particular person in need of expert medical attention. Clinical research essentially lacks this purpose of personalized help for particular individuals.”

My experience counters this, however.  I always dealt with the same members of the study team and felt a ‘personalized focus’ on my health, with plenty of time for my questions and concerns.  This clinical trial was simply the next logical step in my Crohn’s Disease treatment (the integration of the two, I admit, is probably enhanced in my eyes by the fact that the trial took place within my doctor’s practice).


I would say the misconception is Miller’s; perhaps at one time he and those who share his view would have been right, but in the new millennium, for the seriously ill, the rapid pace of innovation has increasingly integrated research and patient care.  Immune system disorders like Crohn’s Disease and Plaque Psoriasis (for which Humira has now also been approved) are excellent examples of this, as is pediatric oncology.


A growing number of healthcare experts advocate transitioning these scenarios from rare, dire circumstances to an entire “learning healthcare system” of integrated and reciprocally informing research and practice.  Information about which patient care is effective (and which is not) would be shared and ‘learned’ from to improve future care, and low-risk quality improvement and comparative research would be implemented much more rapidly, integrated into the typical patient care setting rather than set apart as a distinct circumstance.

Why not?  The difficulty of establishing a widespread, national and ideally global learning healthcare system lies in the fundamental ethical distinction between research and practice that has been the foundation of the American system for 30 years.  It is a duality born of a desire for dual protections: for people participating in medical research and for the private, autonomous doctor-patient relationship. Human-subject research would be regulated, and patient-care decisions would not.

These ideas are not flawed in themselves but have become irrelevant, according to bioethicists Nancy E. Kass and Ruth R. Faden of the Johns Hopkins Berman Institute of Bioethics (where I also work). They led a team that recently authored two centerpiece articles of a special report, “Ethical Oversight of Learning Health Care Systems,” published in the Hastings Center Report. They write,

In the 1970s and for two decades thereafter, this distinction was helpful: for some forms of research, it sheds light on which activities require ethical oversight. Research that is closely integrated with health care—notably, health delivery research—was then uncommon, however. That is no longer the case, and regulations and research ethics need to change to accommodate the new landscape.

The idea and desire for a learning healthcare system is a reflection of the digital age; rapid sharing and updating of large amounts of information and crunching huge amounts of data were not possible until very recently.  We have the capability, and in cases of necessity like my own, the research-practice line has already been blurred.  As usual, our ethics lag behind our technical capacity; we ironically always seem to be one step ahead of ourselves.  The computer, which did not exist during my father’s childhood, has taken over the world, transforming communication, transportation, social structures, conceptions of privacy and, perhaps most of all, the speed of sharing and accessing information – the speed of learning. Bioethicists like Faden and Kass will play a crucial role in bringing the digital revolution to medicine on a broad scale, a radical rethinking that incorporates lessons from the past with a vision for an ethical future.  For patients and families who have been through wrenching ordeals, misdiagnoses and preventable deaths, that future of an ethically based system of continuous learning and improvement is the only hope we have for our suffering to have meaning.

Leah Ramsay – Science Writer, Johns Hopkins Berman Institute of Bioethics. In addition to writing for the Bioethics Bulletin, Leah works with Berman Institute faculty to communicate their work in commentaries, videos, press releases and media interviews.
