A Case for Oregon

June 21, 2013

By Nathan Risinger

A version of this article is posted on the Huffington Post

Healthcare has always been a topic of both passion and perplexity within this country. While both sides of the debate are fueled by robust ideology and hard-nosed rhetoric, there tends to be a fair amount of confusion regarding the actual particulars of the ideas at stake. This shouldn’t come as a total surprise, since attempting to provide healthcare to a nation as large and as diverse as the United States is no easy task, and the devil is most certainly in the details. As the Affordable Care Act (ACA) lumbers into action over the next several months and years, it is worth noting that there have been other efforts — some more successful than others — to come up with a sustainable solution to increasing access to healthcare within our challenged healthcare system.

One of the most promising of these alternative efforts was never even attempted at the federal level. It was implemented several decades ago by the state of Oregon in a desperate attempt to provide better health services at lower prices to the entire population. Instead of attempting — as is currently the practice throughout the United States — to ration based on attributes of people (certain groups are more likely to get health coverage than others), the Oregon system discriminated on the basis of attributes of services.

This service-based approach was a relatively novel one, and it simply said that all people would be entitled to the same minimum standard of care. To make such a policy economically feasible, not all services were included. There was a list of services and, based on available resources, a cut-off was inserted. The procedures and treatments above the cutoff would be covered by the state. Those that fell below wouldn’t.

Such a system raises several ethical questions that are worth considering. How does one come up with a ‘ranking’ of various treatments? Should there be exceptions to the list (should certain sets of individual circumstances allow physicians to lobby for particular non-covered treatments for their patients)? Is such a list congruent with our notions of what is socially just? Does it place too much emphasis on the health of the group, and not enough emphasis on the health of the individual? While any healthcare system must obviously be tailored to the needs of the constituency it serves, to what degree can we allow such a system to compromise the rights of the singular agent?

Several different methods were tried in creating the list. The first attempt involved a fairly complex algorithm that — through an analysis of multiple variables including cost, gravity of condition, etc. — spat out a list of services, ordering them from most essential to least. On the surface, such a solution is quite elegant. By appealing to math we are able to discard a fair amount of the bias that one might otherwise associate with the creation of such a list. A properly formulated algorithm will, in theory, not be subjective; it will not discriminate based on whim or emotion as a person might.

Unfortunately, while the algorithm wasn’t a bad first pass, it was far from perfect. The very objectivity that made it so appealing was also its undoing. Certain procedures that were obviously essential (surgery for appendicitis) were left below the cutoff, while some which might not be considered as important were included on the list of services provided (tooth capping). Obviously one would want a life-saving appendectomy before having his or her teeth capped. Because of these flaws, a committee was tasked with going back through the list and rearranging it where it was obviously out of tune with medical and public opinion.
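The failure mode described above falls naturally out of any formula that ranks services by something like benefit per dollar. Oregon’s actual formula is not reproduced here, but a toy sketch — with invented service names, scores, costs, and budget — shows how cheap, low-value care can outrank lifesaving surgery once a resource cutoff is applied:

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    benefit: float  # hypothetical composite score (severity, outcome, etc.)
    cost: float     # hypothetical average cost per treatment

def prioritize(services, budget):
    """Rank services by benefit per dollar, then fund down the list
    until the budget runs out; everything below that cutoff is not covered."""
    ranked = sorted(services, key=lambda s: s.benefit / s.cost, reverse=True)
    covered, remaining = [], budget
    for s in ranked:
        if s.cost <= remaining:
            covered.append(s.name)
            remaining -= s.cost
    return covered

# All figures below are invented for illustration.
services = [
    Service("appendectomy", benefit=95, cost=40),
    Service("tooth capping", benefit=10, cost=2),
    Service("experimental therapy", benefit=5, cost=60),
]

print(prioritize(services, budget=30))  # tooth capping is covered; the appendectomy is not
```

Because tooth capping costs so little, its benefit-per-dollar ratio beats the appendectomy’s, and the “objective” ranking produces exactly the absurdity the article describes — which is why a human committee had to intervene.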

In the end this first algorithm was scrapped as too complicated to put into practice. In its stead a simpler algorithm was put forward, one that did not involve calculus and was supposedly more effective at prioritizing treatment. However, this algorithm suffered from many of the same flaws as the first, and while it was eventually applied in real clinical settings, it was only after undergoing extensive human reworking in much the same vein as its older sibling.

This re-organizing raises an interesting question. Does human ‘tampering’ destroy the objectivity of an algorithm? The answer is probably yes, but the real question is: does it matter? From a philosophical perspective it certainly might (the ideal solution would obviously be to find the perfect algorithm, one which would not need to be tampered with at all), but from a practical point of view the answer is clearly no. Sure, it would be nice to have a perfect algorithm that acts as an unbiased arbiter. But how would we arrive at it? And, even if we did somehow miraculously arrive there, how would we know we had arrived? Oregon is a classic example of health policy in practice: an attempt to bend a realm of absolutes (ethics and philosophy) to reality — a place built on compromise, ethical and otherwise.

Nathan Risinger, B.A., is a research program coordinator at the Johns Hopkins Berman Institute of Bioethics. He is interested in the concept of free will, especially in relation to the possibility of objective moral truths.
