
Big tech has done a lousy job of solving healthcare problems by design, because they were told to miss the fundamental point: doctors and patients interact in a decentralized fashion. Hospitals paid for the design of algorithms that mine the data they collect.
How that data is mined creates the reality between the doctor and the patient. That algorithm is a centralized medical tool. When you speak to your doctor, that conversation appears to have no input from an algorithm, but it does. The public has no idea how that one small thing affects their health. Today’s blog explains how this happens. I will also explain how I recently saw it affect the care of a patient I was involved with and lead to a suboptimal outcome.
If you do not understand how centralized healthcare is harming you, you will be impotent to avoid its collateral effects.
Physicians need to ask themselves the following question: How did we get manipulated, my fellow doctors? Did we screw ourselves by allowing machine learning (ML), artificial intelligence (AI), and algorithms (algos) to get between us and our patients in the doctor-patient relationship?
From Google to Apple to 23andMe, many major tech companies have gotten into the health research space. Where is the physician’s role in this business? We have our patients’ best interests in mind, and we should be the gatekeepers of research as a care ancillary at the point of care, implemented to benefit our patients’ well-being and improve health outcomes. Shouldn’t we?
When hospitals gave Google’s AI the ability to mine the tech-created data in electronic medical charts, the battle was lost. Physicians never realized that the EMR was medicine’s Trojan horse.
GOOGLE’s algorithm playbook was simple: copy and paste the Big Pharma business plan. Create the illness and the solution for the sickness, and you win with profits.
This algorithm data grab by Google was never an attempt to solve complex healthcare problems. Hubris and funding without contextual knowledge are a dangerous combination. Why do we keep waiting to get invited to this party when we should be the ones hosting it and driving the discussion?
THE KEY PROBLEM:
Individuals need a mechanism to ensure they have equal or stronger control over their personal information in the EMR. Only the programmer who worked for the healthcare company knows for sure why the algo was created the way it was; they have the code, and you don’t. Without the code and a knowledge of what it does, you are blind to what will most likely be recommended for you and what will happen to you.
The public will come to realize that the secret behind data science algorithms is not a good one. The software was supposed to illuminate the truth, but the coders have written it to strip out ideas and concepts in order to hide the truth and profit from the deception. Right now data scientists are hiding this reality from everyone. The government has allowed healthcare companies to hire physicians so that those companies could limit competition and then introduce algorithms to replace proper medical advice. That is what is going on globally. Governments want physicians replaced by artificial intelligence because they believe it saves money. They couldn’t care less whether it actually works.
Why?
All healthcare code needs to be fully audited by those who are subjected to its effects. The idea is simple: every medical algo should pass a proof-of-reserve-style audit before the FDA approves it for use. Today we ask crypto exchanges built on blockchain technology to publish proof of reserves before they can sell any coins of value. Why aren’t we doing the same in healthcare?
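To make the proof-of-reserve analogy concrete, here is a minimal sketch in Python of what such an attestation could look like. Everything in it is hypothetical (the file names, the algorithm name, the workflow); the point is only that the exact code and model weights that were audited get a published cryptographic fingerprint that anyone can later recompute against whatever is actually running in the hospital.

```python
import hashlib
import json

def fingerprint_algo(code_path: str, weights_path: str) -> str:
    """Hash the exact code and model weights that will run in production."""
    digest = hashlib.sha256()
    for path in (code_path, weights_path):
        with open(path, "rb") as f:
            digest.update(f.read())
    return digest.hexdigest()

# Hypothetical attestation the vendor would publish alongside the audit report.
attestation = {
    "algorithm": "readmission_risk_v2",  # invented name for illustration
    "audited_fingerprint": fingerprint_algo("model.py", "weights.bin"),
}
print(json.dumps(attestation, indent=2))

def verify_deployment(code_path: str, weights_path: str, audited_fingerprint: str) -> bool:
    """Any hospital, regulator, or patient advocate can recompute the
    fingerprint of the deployed artifact and confirm it matches the audit."""
    return fingerprint_algo(code_path, weights_path) == audited_fingerprint
```

The design choice mirrors proof of reserves: the audit is only meaningful if the thing audited is provably the thing deployed.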
ASK THE FDA.
Who does the FDA work for? It is not patients. It is industry. Specifically, centralized healthcare.
Healthcare payers and Wall Street are 100% connected in profit creation. When you see how they create the problems, you begin to see how the issues with algorithms can lead to poor outcomes via bad advice. The FDA is charged with identifying these bad algos, but it has done a horrible job of creating an adequate regulatory audit trail that holds the centralized healthcare players accountable for their algo code. Public health is failing because of this lack of control. See COVID and vaccine mandates as a recent example.
On Wall Street we can trace every transaction and follow its path in an audit trail. The same should apply to healthcare, but we never get to see those trails outside the clinical side, where certification and rigid standards are the norm for medical record vendors.
In making this comparison, we face the same issue with vetting “new” algorithms in healthcare while working blind alongside the payers. One fact that holds anywhere: sometimes we don’t know exactly what an algorithm will do under a perfect-storm scenario.
Usually, in healthcare, it leads to an insurance claim rejection or something along those lines. Just as the trading houses run data across many exchanges, the same thing happens in healthcare, with data touching many areas, and accountability is a top concern.
On Wall Street, regulators want brokers to audit their algorithms. In healthcare, first of all, we need to see them; beyond that, they should be filed in a digital format so one can watch them operate on fictitious sample data.
Actually, those are good ideas for future laws to incorporate. One state legislature, New Jersey’s, is on to this: it has a bill that would allow health insurance algorithms to be audited down to how they were put together (the computer code) as well.
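As a sketch of what “filing an algorithm in digital format and watching it operate on fictitious sample data” could look like in practice, consider the Python below. The claim-approval rule, the patient fields, and the group definitions are all invented for illustration; in a real audit, the filed vendor code would be plugged in where the toy rule sits, and reviewers would compare its behavior across synthetic populations before it ever touches a real patient.

```python
import random

def approve_claim(patient: dict) -> bool:
    """Toy stand-in for a filed payer algorithm. In a real audit the
    vendor's submitted code would be called here instead."""
    return patient["prior_spending"] > 5000 and patient["age"] < 80

def audit_on_synthetic_data(algo, n: int = 10_000) -> dict:
    """Replay fictitious patients through the filed algorithm and report
    approval rates by group, so its behavior is visible before deployment."""
    random.seed(0)
    rates = {}
    for group, spending_mean in [("group_a", 8000), ("group_b", 4000)]:
        approvals = 0
        for _ in range(n):
            patient = {
                "group": group,
                "age": random.randint(18, 95),
                "prior_spending": max(0.0, random.gauss(spending_mean, 2000)),
            }
            approvals += algo(patient)
        rates[group] = approvals / n
    return rates

print(audit_on_synthetic_data(approve_claim))
# If the two groups are equally sick but one has historically lower spending,
# the gap in approval rates is exactly the kind of finding an auditor would flag.
```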
BITCOIN LEDGERS ON BLOCKCHAINS are fully transparent in this way.
Nothing in healthcare is. Therein lies the problem.
How does this change your health?
THE BLUEPRINT
Electronic medical records are really a billing device and a patient-data aggregation and sales device for hospitals. We should call them electronic billing records instead of EMRs. We should start telling patients their data is being sold without their knowledge. Doctors and patients are on the menu, not at the table. This is how the centralized system cheats with algorithms.
Algorithms make patterns; they don’t break them. That means they have no creative or innovative potential. The intent of an algo is found in its digital design. That design is the product of human action. That action is based on math. The math doesn’t lie, but humans do. Humans using algorithms nefariously leave their fingerprints in the design of the code.
To find out who screwed you, you have to understand the mathematical design of the algo. The system behind the design will always hide in the shadows, because you have to understand the math and the purpose of the algo to decipher its intent. Criminals exploit this because the law has no expertise in coding. This is why coders believe “code is law.”
Healthcare corporations now use code to enforce their beliefs on doctors through algos. They tell physicians the code is designed to mine data in an evidence-based fashion. The fact is, it is not. It is designed to generate the most profit. What they never tell anyone is that the algos they built determine what becomes “evidence-based.” “Code is law” is a form of regulation whereby technology is used to enforce the existing rules of the system that paid the coder.
This raises the key questions:
1. As law and code converge, what is the responsibility of software developers?
2. What about those who paid the coders to produce the result they want from an algo that sifts patient data?
The social overlap between my legal friends and my blockchain colleagues, whether on the business or technical side, is remarkably small. That is surprising given how closely related the two fields are. In fact, I can’t imagine a meaningful blockchain conversation that doesn’t quickly escalate into a regulatory or legal rabbit hole. However, there’s one conversation that reliably comes up in both circles: Is code law?
My blockchain colleagues on Clubhouse, especially the more technical ones, use the phrase “code is law” to suggest that code, such as the software underlying a smart contract, will one day replace the law. They believe that code will eventually be the final authority. Accordingly, if the code has an inadvertent glitch and performs in an unexpected, perhaps unfair way, they shrug their shoulders and respond: “Well, code is law.”
I have yet to find a lawyer or regulator who shares this view.
In medicine, physicians are beginning to understand how healthcare systems are gaming public health systems for their profit. We really saw it during the pandemic.
Equitable treatment by the healthcare system really is a civil rights issue. Very few in centralized healthcare see it this way. Those of us who embrace decentralized medicine understand the risks of letting algos run our healthcare systems. The COVID-19 pandemic has laid bare the many ways in which existing societal inequities produce healthcare inequities, a complex reality that humans can attempt to comprehend but that is difficult to reflect accurately in an algorithm. The promise of AI in medicine was that it could help remove bias from a deeply biased institution, improve healthcare outcomes, and save money; instead, it threatens to automate that bias. Here is where “code is law” fails those subject to a “software judge.”
If anything, the view in my medical, legal, and regulatory circles is the opposite. Legal practitioners and regulators, unsurprisingly, believe in the rule of law above all and cannot imagine a world where equities and circumstances are ignored.
A CFTC commissioner recently remarked in a speech, “I have heard some say that ‘the code is law,’ meaning that if the software code permits it, an action is allowed. I disagree with this fundamental premise. Case law, statutes, and regulations are the law. They apply to the code, just as they apply to other activities, contracts, or agreements.” The speech is cited below. He explained, “It is certainly possible that the software code does not represent the entirety of the participants’ agreement and must be interpreted in connection with traditional contract law concepts like good faith and fair dealing.” In other words, the rule of law trumps computer-generated code.
It was Lawrence Lessig, in his article of the same name and his book Code and Other Laws of Cyberspace, who coined the phrase “code is law.” But when Lessig first used the phrase, he didn’t have its contemporary usage in mind. Lessig doesn’t argue that if software code permits an action, it is necessarily allowed. And he definitely doesn’t argue that software will replace the law.
Rather, when he wrote that “code is law,” Lessig was arguing that the internet should incorporate constitutional principles. Lessig astutely observed early on that the software that underlies the very architecture and infrastructure of the internet governs it as a whole. But who decides what the rules of code are? Who are the architects behind these code-based structures? There is an obvious and troublesome lack of transparency.
There are ways to undo it. Open-source software, if built correctly, can provide substantive protections such as freedom of speech on the internet. This is the path that Satoshi took in creating his algo. Just like the U.S. Constitution has built-in checks on power to guarantee various freedoms, the internet should include built-in transparency measures to protect the freedoms of its users.
Though it admittedly sounds a bit futuristic, I can certainly imagine a future in which computers, software, the internet, artificial intelligence, and other machine learning technology replace at least some aspects of today’s legal system. Will software replace the law, our legal framework, and our institutions completely? It may happen, though likely not in our lifetime. Until then, perhaps parts of the law could be automated through code in the near future.
As law and code converge, what is the responsibility of software developers? Should they take steps to protect our freedoms more intentionally? What do you think?
WHAT DO I THINK?
Artificial intelligence (AI) and algorithmic decision-making systems — algorithms that analyze massive amounts of data and make predictions about the future — are increasingly affecting Americans’ daily lives. People are compelled to include buzzwords in their resumes to get past AI-driven hiring software to get a job. Algorithms are deciding who will get housing or financial loan opportunities. And biased testing software is now forcing minority students and students with disabilities to grapple with increased anxiety that they may be locked out of their exams or flagged for cheating. But there’s another frontier of AI and algorithms that should worry us greatly: the use of these systems in centralized medical care and treatment.
Some algorithms used in the clinical space are severely under-regulated in the U.S. Oversight falls to the U.S. Department of Health and Human Services (HHS) and its subagency, the Food and Drug Administration (FDA). The FDA is tasked with regulating medical devices, which range from a tongue depressor to a pacemaker and now include medical AI systems. While some of these medical devices (including AI) and tools that aid physicians in treatment and diagnosis are regulated, other algorithmic decision-making tools used in clinical, administrative, and public health settings, such as those that predict the risk of mortality, the likelihood of readmission, and in-home care needs, are not required to be reviewed or regulated by the FDA or any other body.
This lack of oversight allows biased algorithms to be used widely by hospitals and state public health systems, contributing to increased discrimination against patients. Most physicians are unaware of what these healthcare algos were designed to do.
For example, in 2019, a bombshell study (cited below) found that a clinical algorithm many hospitals were using to decide which patients need care showed racial bias: Black patients had to be deemed much sicker than white patients at admission to be recommended for the same care. This happened because the algorithm had been trained on past healthcare spending data, which reflects a history in which Black patients had less to spend on their healthcare than white patients because of longstanding wealth and income disparities. While this algorithm’s bias was eventually detected and corrected, the incident raises the question of how many more clinical and medical tools may be similarly discriminatory.
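Here is a minimal, fictitious sketch in Python of the mechanism the study describes. The numbers, the group labels, and the “access penalty” are all invented; the point is only to show how ranking patients by a spending proxy, rather than by true illness burden, flags equally sick members of a historically under-spent group far less often.

```python
import random

random.seed(1)

def make_patient(group: str) -> dict:
    """Fictitious patient: identical illness burden across groups, but one
    group historically spends less per unit of need (a crude stand-in for
    access and wealth disparities)."""
    need = random.gauss(50, 10)  # true illness burden
    access_penalty = 0.8 if group == "group_b" else 1.0
    return {"group": group, "need": need, "spending": need * 100 * access_penalty}

patients = [make_patient(g) for g in ("group_a", "group_b") for _ in range(5000)]

# The flawed design: select the top 10% by *spending* for extra care,
# using spending as a proxy for need.
cutoff = sorted((p["spending"] for p in patients), reverse=True)[len(patients) // 10]

for group in ("group_a", "group_b"):
    members = [p for p in patients if p["group"] == group]
    flagged = [p for p in members if p["spending"] >= cutoff]
    share = len(flagged) / len(members)
    avg_need = sum(p["need"] for p in members) / len(members)
    print(f"{group}: avg true need {avg_need:.1f}, share flagged {share:.1%}")
# Both groups are equally sick on average, yet the under-spent group is
# flagged for extra care far less often. That is the proxy failure.
```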
Another algorithm, created to determine how many hours of in-home aid Arkansas residents with disabilities would receive each week, was criticized after it made extreme cuts to in-home care (also cited below). Some residents attributed extreme disruptions to their lives, and even hospitalizations, to the sudden cuts. A resulting lawsuit found that several errors in the algorithm, specifically in how it characterized the medical needs of people with certain disabilities, were directly to blame for the inappropriate cuts. The data it mined came from EMRs. Despite this outcry, the group that developed the flawed algorithm still creates tools used in healthcare settings in nearly half of U.S. states as well as internationally.
Another recent study (cited below) found that an AI tool trained on medical images, like X-rays and CT scans, had unexpectedly learned to discern patients’ self-reported race. It learned to do this even though it was trained only to help clinicians diagnose patient images. This technology’s ability to tell a patient’s race, even when their doctor cannot, could be abused in the future by insurers or employers, or could unintentionally direct worse care to minority communities without detection or intervention.
REAL WORLD EXAMPLE
I was just involved in a case where one surgeon’s opinion was based upon a treatment algo for brain tumor decision-making. The recommendation made to the patient and family was not to proceed with surgery, based on variables found in the patient’s EMR. When I was brought into the case, I asked a simple question: why was surgery refused as an option for this patient? The primary surgeon’s answer was, “Our algorithm analytics recommended against it.” I then asked him, “What did your analytics tell you to recommend and do in this case?” He said, “I recommend and do what is evidence-based per our algorithm.” I asked him if he had any input into the algo’s creation. He said he did not. I asked him, “Then how do you know it is evidence-based?” He looked at me, puzzled. It never occurred to him that the coder was paid to give that answer.
I asked the young surgeon if he knew what had happened to the patient since his advice was given. He said he did not. I told him that the patient had spent the last four weeks in an ICU with many complications from inaction. The four weeks of delay in treatment cost the insurance carrier over one million dollars. Moreover, the patient’s situation declined further in the ICU as a coma set in. I told him I came in as a second opinion and advised removing the tumor at once to limit the patient’s symptoms, even though the surgical risks were substantial. The time the patient spent in the ICU created massive risks for the patient as well. It also created massive profits for the hospital and losses for the insurer. I removed the tumor, and the patient did not spend one night in the ICU. The patient was discharged to rehab in three days.
When I saw the young surgeon in the parking lot, I asked him if he knew the outcome of the case once I came in. He said he did. I asked him about the algorithm he had relied on. I said, “Do you know who created it and why it was created?” He said no. I told him it was created by a software engineer contracted by the hospital system to create the evidence he dispensed. I then explained that the algo had created a million-dollar bill for the profit of his employer, which I shared with him. He responded that he did what the evidence-based algo told him to do. He told me he did nothing wrong and that he followed the protocols in the medical staff bylaws of the hospital.
This young physician had been installed as department chair of neurosciences by the administrators of the hospital. He told me that when he was hired as an employed physician by the healthcare system, he was instructed that he had to follow the evidence-based “tools” the hospital had purchased for his specialty. So, from his perspective, he was just doing his job.
Now, what do you think about “code is law?”
CITES
1. https://www.cftc.gov/PressRoom/SpeechesTestimony/opaquintenz16
2. https://www.science.org/doi/10.1126/science.aax2342
3. https://www.theverge.com/2018/3/21/17144260/healthcare-medicaid-algorithm-arkansas-cerebral-palsy
4. https://news.emory.edu/stories/2022/05/hs_ai_systems_detect_patient_race_27-05-2022/story.html