This is the direction the federal government is taking us. We must oppose it. Action Alert!
Late last year, the FDA released a draft guidance document that will loosen the reins on some types of medical software. This development is part of a larger trend in which artificial intelligence (AI) and other technologies take on a more central role in the doctor/patient relationship. The result is “evidence-based” medicine as determined by government regulators.
The FDA’s guidance deals with “clinical decision support” (CDS) software: programs that “analyze data within electronic health records to provide prompts and reminders to assist health care providers in implementing evidence-based clinical guidelines at the point of care.” The 21st Century Cures Act excluded CDS meeting four statutory criteria from the definition of a medical device, so such software is not subject to certain FDA regulations. Further, the FDA laid out a policy of risk-based enforcement discretion for CDS that do not meet all four criteria but that the agency considers low-risk.
The issue is less which types of software the FDA does or does not consider medical devices than the broader trend of medicine by algorithm. Make no mistake: CDS is the future of medicine. The AI health market, worth $600 million in 2014, is projected to hit $6.6 billion by 2021.
AI supported by machine learning algorithms is already being integrated in the practice of oncology. The most common application of this technology is the recognition of potentially cancerous lesions in radiology images. But we are rapidly moving beyond this to where AI is used to make clinical decisions. IBM has developed Watson for Oncology, a program that uses patient data and national treatment guidelines to guide cancer management. As we’ve seen, Google and other tech giants are greedy for our health data so they can develop more of these tools.
Technology should absolutely be harnessed to improve medicine and clinical outcomes, but AI cannot replace the doctor/patient relationship. There are obvious ethical questions. First is the lack of transparency: the algorithms, particularly the “deep learning” algorithms currently being used to analyze medical images, are all but impossible to interpret or explain. As patients, we have a right to know why and how a decision about our health is made; when that decision is made by an algorithm, we are deprived of that right. Further, when a mistake is made, who is accountable: the algorithm or the doctor? How do we hold an algorithm accountable?
There are other problems with using machine-learning AI in medicine. It can introduce bias, for example by predicting a greater likelihood of disease on the basis of race or gender when those are not causal factors. AI also does not account for the assumptions clinicians routinely make. The University of Pittsburgh Medical Center used a model to evaluate the risk of death from pneumonia among patients arriving in its emergency department. The model concluded that mortality decreased when patients were 100 years old or had a diagnosis of pneumonia. Sound ridiculous? Far from being at low risk of death, these patients were at such high risk that they were given antibiotics immediately, before they were even registered in the electronic medical record, which threw off the AI’s analysis and produced a ridiculous conclusion.
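The confounding in the Pittsburgh anecdote can be sketched with a toy calculation (all numbers below are invented for illustration): because the highest-risk patients were treated before their data were recorded, the recorded outcomes make them look *safer* than the untreated group, and any model fit to those records inherits the inverted conclusion.

```python
# Hypothetical records illustrating the confounder described above: the
# highest-risk patients received antibiotics *before* being registered, so the
# data reflect their post-treatment (much lower) mortality.
records = (
      [{"high_risk": True,  "died": True}]  * 5    # treated early: 5 of 100 died
    + [{"high_risk": True,  "died": False}] * 95
    + [{"high_risk": False, "died": True}]  * 10   # untreated: 10 of 100 died
    + [{"high_risk": False, "died": False}] * 90
)

def observed_mortality(records, high_risk):
    """Mortality rate exactly as a model trained on these records would see it."""
    group = [r for r in records if r["high_risk"] == high_risk]
    return sum(r["died"] for r in group) / len(group)

# The recorded data say the high-risk group dies LESS often (0.05 vs 0.10),
# because they silently encode the effect of the early treatment.
print(observed_mortality(records, True))   # 0.05
print(observed_mortality(records, False))  # 0.1
```

The error lives in the data, not the algorithm: any model trained on these records, however sophisticated, will reproduce the same backwards association.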
There is also a danger that medicine becomes even more monolithic than it is today. One commentator put it this way:
The machines do not and cannot verify the accuracy of the underlying data they are given. Rather, they assume the data are perfectly accurate, reflect high quality, and are representative of optimal care and outcomes. Hence, the models generated will be optimized to approximate the outcomes that we are generating today.
AI that is programmed to incorporate current disease guidelines and best practices into its analysis will suffer from the same shortcomings that we see in conventional medicine today. In other words, if American Medical Association guidelines are the standard for achieving the “evidence-based medicine” that AI can help with, we begin the analysis with the biases of conventional medicine. Sophisticated data analysis doesn’t get us anywhere if we’re asking the wrong questions in the first place.
Action Alert! Write to the FDA, the American Hospital Association, and the Federation of State Medical Boards, telling them you oppose the use of AI to make clinical decisions. Please send your message immediately.
AI that is programmed to incorporate current disease guidelines and best practices into their analysis will suffer from the same shortcomings that we see in conventional medicine today. In other words, if American Medical Association guidelines are the standard for achieving the “evidence-based medicine” that AI can help with, we begin an analysis with the biases of conventional medicine. Sophisticated data analysis doesn’t get us anywhere if we’re asking the wrong questions in the first place.
Excellent point. Asking the right questions is crucial in every area of life. Doctors right now are not making the grade or meeting patient needs as it is, because they don’t consider the whole person and are unable to see beyond their ‘book learning’. How is an AI going to improve this situation?
We are individuals, not mathematical models.
I GUARANTEE I WOULD RECEIVE BETTER MEDICAL CARE FROM AN ARTIFICIAL INTELLIGENCE PROGRAM THAN WHAT I HAVE RECEIVED OVER THE PREVIOUS 10 YEARS FROM THE QUACKS AT DUPAGE MEDICAL GROUP IN ILLINOIS. WHEN MONEY IS THE MEDICAL GROUP’S ONLY BASIS FOR DECIDING WHAT LEVEL OF CARE THEY WILL PROVIDE A GIVEN PATIENT, IT LEADS TO DECREASED QUALITY OF CARE FROM THE GREEDY HUMANS AND A WORLD HEALTH ORGANIZATION HEALTHCARE RANKING IN THE 30s FOR THE U.S.A.
This is the most stupid thing that I have ever heard of. Is the government really that dumb?
Well, why not? Surely you’re not advocating homeopathy and other debunked pseudoscientific claptrap in lieu? Why is evidence-based analysis so scary?
Algorithms allow modern technology to grow and continue to benefit us…you know, like the Internet, and smartphones, and other advances — THEY use algorithms.
Maybe it’s the “sciency-sounding” word that’s frightening? An algorithm is a set of rules that precisely defines a sequence of operations, which certainly isn’t scary.
The benefits of an algorithmic approach to medicine are simply too great to ignore. Shifting responsibility for more of the repetitive and programmable medical tasks to machines would allow physicians to focus on issues more directly related to patient care.
Dumping the idea wholesale out of fear serves no useful purpose; and scare tactics without due consideration of the benefits could become criminal.
I am opposed to the use of AI to make clinical decisions.
Medical decisions are my sole responsibility. My doctor can suggest certain procedures, but ultimately I make the decision. No machine or software can do that. I inform my doctor whether or not I am willing to heed his advice. When I have knowledge of a better practice, I refuse my doctor’s suggestion. This happens a lot, as I do my own research and usually know more about health issues than my doctor. I am opposed to using a machine that does not have my best interest in mind when making a medical decision. I am opposed to using Artificial Intelligence for medical decisions.
I am reluctant to rely on AI when human error is inevitable and the AI depends on precise, accurate information.
I oppose the use of AI to make clinical decisions.
Using AI to diagnose and treat patients would only be effective if the patients were robots with no variations in personality, genetics, or life experiences. All of these things affect a human person’s health and have to be taken into account in their healthcare. And these are not the only variables. Only a human doctor interacting with a patient can effectively evaluate and treat human patients. I strongly oppose the use of AI to make clinical decisions for patients.
No one wants to just “be a number,” especially when we are speaking of health. Everyone is an individual and should be treated as such. There is no “common cold,” as each comes with its own set of symptoms and variables.
I vehemently oppose any sort of AI making clinical decisions for me or about me.
Simple, no?
I am against AI health. I want protection from all the affiliates and websites that glean and sell, for profit, my (and others’) personal and medical information.
I am a person with an immune deficiency disorder. Having machines make medical decisions terrifies me, since I doubt that the machines will be programmed to consider the outliers. Since those of us with CVID are more at risk than the general public, machines making decisions about our care sounds like it will increase our risk of being incorrectly treated.
Yup, this could detrimentally affect the 40 to 90 million Americans who already have autoimmune conditions, adding drugs to the mix of these conditions is often akin to adding insult to injury.
I oppose AI making clinical decisions because it has been proven to make huge errors. The machines do not and cannot verify the accuracy of the underlying data they are given; therefore, they cannot possibly make accurate assessments.
More social depersonalization and atomization. The fundamental texts remain Brave New World and Nineteen Eighty-Four.
First of All I stay Healthy doing ONLY Holistic and More Naturopathic/Homeopathic & Eastern Based Medicine, having “Little” or NO Use for Allopathic Medicine/Western Medicine. Second of All I have watched WAY TOO MANY Friends Suffer & meet their “Demise” from Oncologists. Seems that they are ONLY Trained on what feeds the Pockets of the “BIG PHARMA!”
I CERTAINLY DO NOT want to be “Judged” or “Subjected,” IF I have to visit a Western Doctor (Which I NEVER DO), to being evaluated by a Machine W/O EVEN Seeing a Physical Person.
Machines cannot make fine, nuanced medical decisions. Machines also cannot use therapeutic touch, or human touch and all the human senses, to make a medical assessment of the human body. The machine, or AI, is not a human being and thus can’t establish a doctor or nurse relationship with a patient.
Technology should be used to assist the doctor and nurse to make better physical /diagnostic/ treatment plans but not replace the decision making process between patient and doctor.
I believe AI has both positive and negative outcomes. Insurance companies will increase their refusal of payment for treatment based on algorithms. Currently, referrals to medical specialists are being made on the basis of algorithms. As a patient advocate, I had to “fight” algorithm decisions to obtain a cardiologist referral in a timely manner so the patient would not die before obtaining treatment. The duality of AI must be monitored by healthcare providers, ethics committees, and society.
We are all individuals, different in so many ways. Our health care should reflect that. AI may be great for some things but not for this.
Using AI to make medical decisions is nothing more than a blatant attempt to hide incompetence and medical errors by blaming ‘machine error’ rather than doctor error. Doctors kill about 300,000 yearly, and this will do nothing to correct that terrible track record. It is time to greatly diminish the role of allopathy and start favoring naturopathic medicine.
I oppose the use of Artificial Intelligence to make clinical decisions.
The doctor-patient relationship is pivotal to the decision-making process in providing health care. Algorithm-driven health care may provide some degree of guidance but should not and cannot be considered a replacement for individualized care.
Informed consent is impossible when a diagnosis is made artificially. Furthermore, the many (inevitable) shortcomings of today’s state of the art would be enshrined in artificial decision making. A review of the many failed orthodoxies of medical history should be instructive when considering the wisdom of administering medicine through artificial “intelligence.”
Any substance, natural or man-made, that heals the body needs to be fairly tested and readily available for the doctor to prescribe. Data promoting the use of medicinal drugs will only encourage overuse and cause the body to become more toxic.
Personally, I find modern medicine lacking; no one seems to want to find what ails a person, they just want to pass out pills for your symptoms. Since these same doctors are the models for the AIs, I don’t see much of a future for any of us!
I oppose the use of AI to make clinical decisions.
Doctors are the only ones QUALIFIED to make clinical decisions. They should use all the tools available, BUT the FINAL decision should be made by a QUALIFIED DOCTOR.
Do not let artificial intelligence software start making healthcare decisions about our health. Preserve healthcare practitioners’ authority to diagnose and treat medical conditions with their very human eyes, minds, hands and hearts.
It is bad enough that we have insurance statisticians and their computer algorithms telling our doctors how to treat us; we certainly don’t need more machines doing it!
People are sicker than ever before in history. We do NOT need AI interfering in how a doctor treats a person. People are all different. It is crazy to assume that a machine could correctly analyze our health. This is over the top wrong and evil. Just another way for someone to make a lot of money. Shame on the people in the FDA who have been making terrible decisions that other countries do not.
Since machines cannot possibly make humane decisions, they should not be used in formulating health plans for human beings.
I have already had firsthand experience with medical AI algorithm. My 92 yr. old father, fairly healthy for his age, not on any pharma drugs prescription or OTC, got a UTI. Our first clue was that he fell twice in a twelve hour period, which could indicate sepsis as well. (This happened with my husband last year.) We got my dad into the ER via medical transport, where they determined that he did indeed have a UTI for which they immediately started a fast acting antibiotic drip. Then they tested for sepsis, which of course came up negative because they’d already started the I.V. antibiotic drip. He complained of severe pain in his right hip, but the x-ray showed nothing. So, he was transferred to a bigger “better” hospital where a CT scan revealed a cracked pelvis. He remained there for 4 days, but on “observation” status. A computer algorithm determined that because there was an absence of sepsis (results had been skewed by the antibiotics), it was “medically unnecessary” for him to remain in the hospital despite his fracture and extreme pain associated with certain movements, including standing. A doctor could have over-ridden the admit status, but he or she would do so at the risk of losing their job. So, they sent him home via medical transport, in a hospital gown with a foley bag, to be cared for by my disabled 88 yr. old mother. And because of his admit status, none of the hospital/doctor bills were covered by Medicare or their FEB (Federal Employee Benefits) insurance carrier. This is happening NOW, and it is a nightmare.
The programs are only as good (or bad) as the programmers, and can only follow, and expand on, the intent of the programmers. Perhaps the AI is designed to automatically downgrade a person so that they are not covered by their insurance. This isn’t such a novel idea, because that has been happening in emergency rooms—with doctors in charge—for a long time. My retired physician husband did not believe that was happening until it happened to him. His health was deteriorating as he aged, and he KNEW he needed to be admitted at one point in his care. Instead, he was sent home after spending almost 12 hours in the emergency room. They did not authorize an ambulance to get him home, and when I needed help to get him up the stairs to the house, I was told to call the fire department. In a worldwide comparison, the American health care system already has a very low ranking. Proliferation of AI in the medical system will only make that worse and take away our right to make our own medical decisions. I, personally, use only alternative and complementary care. I almost lost my life twice due to medical error and life-threatening reactions to prescriptions. It has now been almost 40 years since I have had an antibiotic, or any prescribed pain killers, and I have never been healthier. The use of AI and many other technical devices is simply a way for those in power to exercise more control over our lives. Heaven help us all!
This is downright scary! Please don’t allow AI to move forward. This would be a true nightmare. Doctor/patient relationships are key to all humans seeking medical help. Please don’t take that away.
The machines do not and cannot verify the accuracy of the underlying data they are given. My daughter has multiple health issues involving autoimmune problems and has had difficulties even getting competent care. She has Medicaid so choice of doctors is limited. Her current rheumatologist has said she has Lupus, written prescriptions to treat Lupus, but refuses to diagnose Lupus in writing! The rheumatologist apparently wants proof of organ failure beforehand. (perhaps fearing my daughter will take the diagnoses to a disability hearing as partial explanation of WHY she can’t work a full-time job– we don’t know)
AI is garbage in, garbage out, like any other program; only compounded!
Simulated planes crash under AI, always. Racial bias in web searches is a known issue.
AI for research to inform decisions is an excellent tool. For actually making decisions without human review, it is a recipe for disaster.
Healthcare decisions should never be made by computer (AI) or algorithm. That sets a very dangerous precedent. Only real live humans should be making these decisions, health matters are life and death, this is not like finding out what’s wrong with a car engine.
Please don’t allow Artificial Intelligence to make medical decisions! Humans need to communicate with a well-trained human mind to discuss options related to medical decisions!
If the FDA truly had our good health in mind, this latest “franken-medicine” approach wouldn’t be an issue. The brutal fact is I DON’T TRUST THE FDA. Period. They’ve proven to be liars, bribe takers, and Big Pharma shills in disguise for decades. Sorry if this frankness zings you. But that’s the way it is.
There is no stopping the run toward AI and machine learning by the bureaucrats at NIH and FDA and the business community. The banks are fully behind this move and are pushing people toward digital records and a “paperless” society where people have even less control of their money, and away from demanding cash and paper reports on their bank accounts. Their next step is to do away with cash itself, and the government is behind that move.
This is due to the greed of bankers such as Jamie Dimon, who “gets” a minimum of $25,000,000 a year for what he does. The stock market and business TV, including Maria Bartiromo, the so-called conservative of Fox Business, imply that bankers “earn” this money, even after they all instigated the destruction of the housing market in 2008 and ALL got away with it. Nobody went to jail. “The past is prologue,” and they are doing it again.
What we MUST do is ensure that there are outlets where the human being is paramount, even though it may cost more money. AI is death to humanity. We have no clue as to the political position of the Democrats and Republicans who are running for office.
The use of AI in diagnosing diseases is not only unethical but will be ineffective. Healthcare, with all the nuances that need to be taken into account, means diagnosis is NOT something that can be streamlined by technology. There is no “algorithm” for health. I oppose this greatly, and my vote will reflect that.
To: FDA, American Hospital Association and Federation of State Medical Boards,
I oppose the strict use of AI for assigning treatment to patients. This methodology has a high probability of abuse for political motivations. It is appalling that these methods are even being considered.
This trend removes liability from decision making, and then there is no recourse for abuse or error; it can even be maliciously used against citizens of our country. Would you really say that AI is absolutely infallible? Neither are doctors, but doctors can consider other immediate factors that may not be strictly medical. Now you are opening up the question of who gets treatment and who doesn’t.
Too much absolute Government control over citizens using AI to assign treatment.
You should drop this malicious process and find better, more human ways to use these resources.
By proceeding with this mentality, I’m thinking you are “Socialist” minded and not American minded.
This is too big of an issue to be left in the hands of rogue Socialist-minded people.
Scrap this broad Governmental Over-reaching program immediately!
This is a dangerous road to travel. People would be treated as robots, with a one-size-fits-all approach that is null and void in a genuine doctor-patient relationship. Fast-food medicine removes the ability of the medical provider to perform a proper hands-on physical assessment and examination (eyes, ears, and touch/palpation) of the individual. In addition, obtaining a proper medical, social, and family history is a significant part of the individual’s diagnosis and treatment plan. Human beings are complex organisms with millions of biochemical processes occurring every second. Computer algorithms are not substitutes for delivering preventive, optimal, and chronic medical management. People would suffer severe consequences and die.
WHEN WE STARTED MEDICAL SCHOOL WE WERE TOLD THAT 50% OF WHAT WE ARE GOING TO TEACH YOU IS WRONG. BUT THERE IS NO WAY TO KNOW WHICH 50% IS WRONG AND WHICH IS RIGHT.
LOOKING BACK OVER MANY YEARS OF MEDICAL PRACTICE, IT’S NOT REALLY CLEAR THAT EVEN 50% OF WHAT THEY TAUGHT US WAS CORRECT.
FOR THE CITIZEN, THE MOST IMPORTANT SKILL IS TO BE ABLE TO LISTEN TO MEDIA AND DECIDE WHICH IS TRUE AND WHICH IS BALDERDASH. SAME IS TRUE IN A SCIENTIFIC JOURNAL, TEXTBOOK, ETC. MUCH IS PROPAGANDA AND STUPIDITY. EXPERIENCE, INTUITION, STUDY, JUDGEMENT REQUIRED.
COMPUTERS COULD NOT EVEN DRIVE A CAR WITHOUT A CRASH AND BURN. HOW COULD THEY PRACTICE MEDICINE? THEY HAVE NO SENSE OF RESPONSIBILITY.
Your last comment…”THEY HAVE NO SENSE OF RESPONSIBILITY” not only applies to A.I. but sadly to many western trained medical doctors and Big Pharma as well.
Our medical care today is already too impersonal. We have lost the human connection. We as people are not just machines to be analyzed, but human beings with diverse attitudes and challenges. There is no computer program that will be able to adequately determine what is best for me. That is modern insanity. We make the machines and people make mistakes. Most often the best way for a physician to know what is going on with a patient is to talk with them, to listen carefully and ask questions of them. Machines can’t do this. Stop the insanity!
The correct way forward on this article’s concerns requires a proper response from the FDA, the American Hospital Association, and the Federation of State Medical Boards. The topic needs a more knowledge-based approach.
I oppose the use of AI to make clinical decisions.
Fascinating. I have been receiving requests from my insurance provider for ALL KINDS of info via “surveys” and just plain data mining. I finally realized that is what they are doing by offering gift cards for giving them info. OMG. At first I felt it was inappropriate, and then I realized this is how they snag vulnerable people who have no idea whatsoever about this.
Great article. Thank you as always. We have to stay informed about this. I do think there is a wise side to the appropriate use of data but it’s always so easy to be misused.
AMA =American Murder Association!
This is just plain going way too far! If you are not outraged, you are NOT paying attention. RIP to those who believed in their doctors. So sorry to have had to bury you.