By Dan O’Connor


As noted elsewhere on the Bulletin, the US government recently called for comments on its proposed changes to the rules governing the ways in which we protect people who volunteer to participate in medical research. The proposed changes interested me on two levels: first, as a member of one of Johns Hopkins’ Institutional Review Boards (IRBs), the committees which oversee human subjects research, and second, as a historian currently writing a history of the regulation of research.


When I mention to my physician-researcher friends that I serve on an IRB, I get a look which is best described as a mixture of pity and chagrin. Pity for the assumed drudgery of the task (you read 20 consent forms a week…) and chagrin at what scientists often consider to be an unnecessarily burdensome set of regulatory standards governing their biomedical research.


Indeed, just last month I was invited to speak to an eminent association of science professionals at a symposium on research ethics, the basic premise of which was that current Federal research regulations are widely considered by scientists to be at best wilfully complicated, and at worst, dangerously detrimental to the progress of American medicine.


The current Federal rules are largely a function of the Belmont Report (1979), which famously outlined the three essential ethical principles governing the conduct of human subjects research (Respect for Persons, Beneficence, Justice) and their three manifestations in actual research procedures (Informed Consent, Risk/Benefit Analysis, Fair Selection of Subjects). For a contentious area like ethics, these two threesomes have held up pretty well, and it is hard to contemplate a time during which researchers were not required to get the fully informed consent of their subjects, or when it was blithely considered OK to expose some people to more risk just because of the colour of their skin or the size of their pocketbook.


Of course, the litany of very American horror stories in human experimentation is as long as it is lamentable: Tuskegee, Willowbrook, the Jewish Chronic Disease Hospital, the human radiation experiments, MKULTRA… the list goes on and on. In popular historical accounts of the Belmont Report and the ensuing regulation of human subjects research, it is these scandals, and the widespread revulsion which they triggered, that forced the US government to step in and regulate research.


Horrific as these signal events were, it should not be assumed that it took the headlining emergence of the Tuskegee case to stir up a national debate on the rights and wrongs of medical research. A number of other factors, in addition to the scandals, had been converging since World War II to create an atmosphere in which it was increasingly likely that government would intervene in scientific research.


It should be noted at this point that, traditionally, the US government had been very leery of any sort of intervention in healthcare. The American Medical Association had spent most of the Twentieth Century decrying any such intervention as communism and ‘socialism for the American people’ – and successfully keeping the government out of the physician-patient relationship for the most part. The National Research Act (1974), which created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (the body that produced the Belmont Report), was passed against this trend of non-intervention.


So, what had changed? Three factors:


1) The growth in government since 1945: The US Federal government had expanded hugely since WW2, particularly under LBJ’s ‘Great Society’, and was now present in parts of the US economy and society in ways previously unheard of. Other professions and activities were being regulated, and so research (with its attendant scandals) was increasingly seen as a potential target.


2) The rise of consumerism: Research was a target, in part, because of the increased attention paid to the rights of the US consumer (think Ralph Nader’s seatbelts), both as taxpayer and as subject of human medical research. In much the same way as Americans were demanding to know what was in their food, what was in their kids’ toys, and what was in the air they breathed, so it now began to be asked ‘what was in the research’. US Federal spending on medical research had increased hugely, and so there was a prevailing belief that Americans, as consumers and funders of that research, should have some say in its conduct (if not in its scientific design).


3) The growth of the clinical trial: For centuries, the gold standard of medical efficacy was the learned opinion of the physician, but the rise of statistical science and the emergence of the double-blinded clinical trial in the second half of the Twentieth Century meant that, suddenly, physician opinion could be compared to an external and, presumably, unbiased and scientific authority.


All of these factors come together in a lovely quote from Senator Ted Kennedy, who had led the charge to pass the National Research Act:


The question is whether or not we can tolerate a system where the individual physician is the sole determinant of the safety of an experimental procedure.


Certainly, the impetus to ask the question was coming from the outcry about the scandals, but here we can see the growth in government (‘whether or not we can tolerate a system’ implies the creation of a new system by government), the consumer movement (the concern with safety), and the clinical trial (questioning the physician as the sole determinant of safety). These conditions had been brewing for some 25 years before the Act was passed, and the medical literature throughout that time is full of fascinating articles debating the whys and wherefores of governmental regulation of medical research.


For the most part, the physicians and researchers who today object to government regulation of research do so only in terms of scope and burden, rather than objecting to the very notion. In the years before the Belmont Report, things were, to put it charitably, a little more strident.


Writing in Perspectives in Biology and Medicine in 1967, RL Landau, a physician, violently objected to regulation from outside the academy:


The clinical investigator who considers his program and who examines his motives in the darkness of his bedroom before falling asleep, looks into his own eyes when shaving the next morning and into those of his colleagues and students at work, won’t go far wrong. The Washington medical administrators either never had such experiences or have forgotten them.


Other than the fascinating insight into the physical attributes which Landau considered to be signifiers of morality (a clean-shaven man who looks into your eyes, apparently), this statement is interesting in that it very much echoes the stance of Henry Beecher, the physician widely credited with first alerting the public to many of the scandalous occurrences in human experimentation. Beecher had published widely on the issue, insisting that peer review and collegiality were enough to ensure moral conduct in medicine. But perhaps not even Beecher would have gone so far as AM Dogliotti, who told a meeting of the New York Academy of Medicine in 1965 that:


We believe that the surgeon, because of his lofty mission, should have complete scientific and moral independence, especially in regard to the law.


Doctors are often accused of considering themselves to be a priestly class, above such petty concerns as legal systems, but rarely was the attitude as baldly stated as here.


Landau, Beecher and Dogliotti represented, to varying degrees, the concern of the American medical profession that intervention from government would reduce their professional independence, and thus their ability to treat patients, and research subjects, as they saw fit.


The scandals of Tuskegee et al would, perhaps, not have been enough to overcome this widespread professional resistance to governmental regulation. However, when combined with those three other factors (government growth, the consumer movement, clinical trials), it suddenly became possible for government to step in and, on an unprecedented scale, dictate to scientists how they went about their research work.


These, in short, are the conditions under which it became possible to regulate research in the United States (note, research, not treatment – that’s another story). It will be interesting, in today’s America, to see whether those conditions still bear upon the proposed changes to those regulations. We live in a time when any sort of government growth is routinely decried as wasteful, but also a time in which government interventions like Medicare and Social Security seem politically inviolate. Much is made of America as a consumerist society, but any attempt to install a Federal consumer protection official (hello, Elizabeth Warren) is seen as socialism in action. Clinical trials remain the gold standard, but as we move towards a genetic, personalised medicine, do double-blinded trials still make sense for every intervention?


The regulation of human subjects research was a product of historical forces, and it will be interesting to see how contemporary forces affect changes to those regulations.


Dan O’Connor – Research Scientist, Faculty, Johns Hopkins Berman Institute of Bioethics. Dan has two main research areas: the ethics of social media in healthcare and historicising the ethics of emerging diseases.
