Tag: Artificial intelligence

  • AI can second-guess physicians and improve care

    Regardless of what we think about the benefits of using artificial intelligence (AI) for medical treatment, physicians are now using AI in a variety of ways. One way that AI can benefit patients is by second-guessing physicians. Alvin Powell reports for the Harvard Gazette on how AI is beginning to reduce human suffering.

    Sometimes, patients have such rare conditions that physicians are hard-pressed to diagnose them properly. But, if the physician enters the patient’s symptoms into a well-functioning large language model (LLM), the AI can answer a wide range of questions. And, the patient can get treated appropriately.

    Powell gives the example of a physician seeing a rare condition in a child. The physician asked the AI a range of questions: the “genetic causes, biochemical pathways, next steps in the workup, even what to tell the child’s parents.” AI made the physician’s job efficient and effective.

    To be clear, AI cannot work alone. We still need physicians. But, lay people can get helpful answers on medical conditions. And, AI can successfully second-guess physicians. For example, AI diagnosed a child’s pain correctly after 17 physicians had failed to do so over three years.

    The child’s mom gave ChatGPT all of her child’s medical notes. And, ChatGPT correctly determined that the child’s spinal cord was tethered to his spine. With that accurate diagnosis, a surgeon could treat the problem.

    AI also should be able to allow physicians to see more patients by helping with patients’ histories, possible diagnoses and more. In one experiment, doctors who entered all the information into an LLM and left it to the AI to determine the diagnosis had a 90 percent accuracy rate, as compared with a 74 percent accuracy rate for the doctors not relying on AI at all.

    One physician with AI expertise explains: “The best way a doctor could use it now is for a second opinion, to second-guess themselves when they have a tricky case,” he said. “How could I be wrong? What am I missing? What other questions should I ask?”
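
    For readers curious what such a second-opinion prompt might look like in code, here is a minimal, purely illustrative sketch using the OpenAI Python client. The model name, system prompt and case summary are all assumptions for illustration; nothing here comes from the Gazette report, and no output from a tool like this should substitute for a physician’s judgment.

    ```python
    # Illustrative sketch: asking an LLM for a diagnostic "second opinion."
    # Assumes the openai package is installed and OPENAI_API_KEY is set;
    # the model name and case summary below are hypothetical placeholders.
    from openai import OpenAI

    client = OpenAI()

    case_summary = (
        "De-identified case: child with chronic pain and fatigue; "
        "working diagnosis and key findings summarized here."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You are a clinical decision-support assistant. "
                        "Suggest a differential diagnosis and flag missing "
                        "information. You do not replace a physician."},
            {"role": "user",
             "content": case_summary + "\n\nHow could my working diagnosis "
                        "be wrong? What am I missing? What other questions "
                        "should I ask?"},
        ],
    )

    print(response.choices[0].message.content)
    ```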

    But, physicians can also use AI to understand and prevent harmful drug interactions in the hospital. Today, as many as one in four hospitalized patients in Massachusetts suffers bad side effects. Using AI, many of these could be avoided.

    And, physicians can use AI for good note-taking while speaking with a patient. Rather than the physician staring into a computer while discussing the patient’s condition, AI can do the work, writing up and summarizing the clinical notes. The physician need only review the notes for accuracy, easing the physician’s workload.

    Still, AI can spit out bad information. So, physicians who rely on it need to be mindful.


  • Should AI help with end-of-life decisions?

    Artificial intelligence, or AI, is on the rise. If you haven’t yet tried using AI, check out Perplexity or ChatGPT. It boggles the mind how quickly they can answer your questions about just about anything, including drafting a research paper, writing a poem and explaining health care options. The Harvard Gazette explores whether we should want AI to help patients and health care providers with end-of-life decision-making.

    For sure, AI has become better than some physicians at diagnosing patients’ conditions and arriving at a prognosis. In addition, hospitals and medical clinics use AI to analyze test results. Large language models now permit AI to advise on patient care, though patients will make the ultimate decision.

    Of course, some patients are not competent to guide providers as to their end-of-life choices. And, some situations are fluid, changing with the patient’s condition or even the time of day.

    How could AI help a patient at the end of life? AI could explain what the patient could expect. It could describe in great detail the physical limitations a diagnosis might bring, the pain, the possibilities for treatment and more. And, its advice would not be emotionally laden.

    If the patient at the end of life could not speak for himself or herself, AI might offer a more objective perspective on the patient’s condition and wishes than providers or family members could.

    In theory, AI could provide better advice than a physician about a patient’s chance of surviving a particular treatment. But, that advice should not dictate a particular outcome. AI probably should not be determining what a patient should do; that decision should rest with the patient and doctor.

    When there’s no doctor available, AI could provide some patient care. Could AI deliver care in compassionate ways? How would that affect the patient’s health outcome?


  • Next frontier: Eye exams using artificial intelligence

    Photos from a retinal camera allow an artificial intelligence (AI) algorithm to perform eye exams and quickly diagnose diabetic retinopathy, a condition that could lead to blindness, reports Hannah Norman for California Health Line. Diagnoses are immediate, and no doctor is involved.

    Diabetic retinopathy is the principal cause of blindness for adults under 65 and a health condition that millions of Americans with diabetes are at risk of getting. Today, some 9.6 million Americans have diabetic retinopathy.

    People with type 2 diabetes typically spend a lot of time and money getting tested for retinopathy. They must see an eye doctor, have their eyes dilated and then can easily wait seven days for a diagnosis. And, it is recommended that they do so each year or, at least, every other year.

    To date, the FDA has approved hundreds of medical devices that rely on AI.

    What is diabetic retinopathy? It stems from injury to blood vessels in the retina from high blood sugar. People with diabetes can stave off diabetic retinopathy when they manage their condition. And, doctors can treat diabetic retinopathy. But, screenings allow for early treatment.

    How easy is it to use an AI system to detect diabetic retinopathy? It takes only a few hours of training for staff to operate one.

    What happens during the AI diagnosis? Patients look into a special camera so that a technician can photograph their eyes. Generally, there is no need to dilate the patients’ eyes.
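
    For readers curious what happens computationally after the photo is taken, here is a minimal, purely illustrative sketch in Python of the inference step: a retinal photograph runs through a trained image classifier that outputs a probability of disease. The model file, preprocessing values and referral threshold below are assumptions for illustration; FDA-cleared systems use their own proprietary, clinically validated models.

    ```python
    # Illustrative sketch of the screening inference step.
    # Assumes PyTorch/torchvision and a hypothetical trained binary
    # classifier saved to "retinopathy_model.pt".
    import torch
    from torchvision import transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),              # assumed model input size
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet stats,
                             std=[0.229, 0.224, 0.225]),  # a common default
    ])

    model = torch.load("retinopathy_model.pt")      # hypothetical model file
    model.eval()

    image = Image.open("retinal_photo.jpg").convert("RGB")
    batch = preprocess(image).unsqueeze(0)          # add batch dimension

    with torch.no_grad():
        logit = model(batch)
        probability = torch.sigmoid(logit).item()   # probability of disease

    # Illustrative referral threshold: flag the patient for an eye doctor.
    if probability > 0.5:
        print(f"Retinopathy suspected ({probability:.2f}); refer to eye doctor.")
    else:
        print(f"Retinopathy not detected ({probability:.2f}); rescreen next year.")
    ```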

    What are the risks of using AI to diagnose diabetic retinopathy? At the moment, the AI will detect only diabetic retinopathy. It will not detect other eye conditions that an eye doctor might catch. For example, it won’t detect choroidal melanoma.

    What are the benefits of using AI to diagnose diabetic retinopathy? Using AI to diagnose diabetic retinopathy is faster and less costly than going to the eye doctor. With AI, people are also far more likely to go for a follow-up visit after diagnosis than if they went to the eye doctor, according to one recent study. Researchers attribute the increased likelihood of follow-up to the fact that patients get a diagnosis right away.

    Does Medicare cover this AI test? Medicare covers this AI eye test, albeit at a very low rate of $45.36. Corporate health insurers have an average negotiated rate of $127.81 for the test.

    The technology is still in its infancy. But, based on what we know right now, it is more than likely to take off big time before long. And, of course, researchers are looking to expand the reach of AI to detect glaucoma and other eye diseases.


  • Medicare Advantage cannot rely exclusively on AI to deny coverage

    For some time now, UnitedHealth and other health insurance companies are reported to have been using artificial intelligence tools to broadly deny coverage in certain instances for people enrolled in Medicare Advantage plans. Insurers claim that the tools simply help them determine whether to deny a service. But, use of these tools appears to have led to large numbers of inappropriate denials of care for Medicare Advantage enrollees, and the Biden administration is now stepping in, reports Skilled Nursing News.

    The Centers for Medicare & Medicaid Services (CMS), which oversees Medicare, is restricting insurers’ ability to use AI tools to deny claims for Medicare Advantage enrollees. It issued an FAQ to insurers offering Medicare Advantage plans to ensure they understand that Medicare Advantage plans cannot deny care without considering each individual patient’s needs.

    The FAQ explains how insurers can use algorithms and AI in Medicare Advantage. And, it clarifies coverage requirements for post-acute care to help ensure patients aren’t wrongly denied critical care. Insurers can use prior authorization, but not for emergency or urgently needed care, or for out-of-network services they cover.

    While insurers can use AI in determining whether to cover a service, they are responsible for making sure that the AI complies with Medicare coverage rules. And, they can’t rely exclusively on AI. Insurers offering Medicare Advantage (MA) plans must consider each individual enrollee’s situation, including the enrollee’s medical history and treating physician’s recommendation, in deciding whether care is medically necessary. And, before an insurer ends services for MA enrollees, their individual conditions must be reevaluated.

    Furthermore, insurers must publicly release their coverage criteria on a website and cannot change them as they please, according to CMS. And, if your doctor says you need post-acute care in a rehab facility, your health insurer cannot second-guess that decision so long as you meet Medicare coverage criteria for such care.

    If an insurer still denies coverage, it bears the burden of proving on appeal, through a detailed explanation, that the care is not medically necessary. And, a provider with expertise in the relevant type of care must issue the denial.


  • Humana sued for inappropriate Medicare Advantage care denials

    Humana is the latest of the Medicare Advantage insurers to be sued for using AI to systematically deny care inappropriately. Tina Reed reports for Axios on a class action lawsuit filed in Kentucky alleging that Humana violated its government contract and charging Humana with unjust enrichment as well as violating insurance laws in 22 states.

    The lawsuit explains that the AI model Humana uses, NaviHealth’s predictive tool, cut off rehabilitation benefits for patients without regard to their particular recovery trajectories. In one case, Humana stopped covering rehab therapy for plaintiff JoAnne Barrows a little more than two weeks after she fractured her leg in her home, even though her doctor had said she needed six weeks of rehab.

    The complaint alleges that Humana’s AI model overrules the opinions of patients’ treating physicians, relying on “rigid and unrealistic predictions for recovery.” And, it alleges that Humana knew its AI model for predicting patients’ recovery was “highly inaccurate.”

    A similar suit was brought last month against UnitedHealth, which owns NaviHealth.

    These lawsuits come on the heels of a series of independent government and non-governmental reports finding widespread and persistent inappropriate delays and denials of care and coverage in Medicare Advantage plans. The Centers for Medicare and Medicaid Services, which should be overseeing these health plans, lacks the resources and the power to hold them to account for their bad acts.


  • CMS can’t oversee AI denials in Medicare Advantage

    Congressman Jerry Nadler, Congresswoman Judy Chu and 28 other House members recently sent a letter to the Centers for Medicare and Medicaid Services (CMS) urging CMS to assess AI denials in Medicare Advantage. If only CMS could do so effectively and in a timely manner. Not only does CMS lack the resources to do the requisite oversight at the moment, but when it finds Medicare Advantage plans are inappropriately denying care through AI, CMS appears to lack the power to punish the insurers in a meaningful way.

    Bottom line: It seems unlikely that CMS can rein in the Medicare Advantage plans’ use of AI to deny claims at eye-popping rates, even when the insurers offering Medicare Advantage plans deny care without regard to enrollees’ particular conditions, which they are required to consider.

    In their letter to CMS, the members of Congress express concern about CMS’ Medicare Advantage and Part D prescription drug prior authorization requirements in its 2024 final rule.

    What’s happening exactly? NaviHealth, myNexus and CareCentrix provide Medicare Advantage plans with AI software that restricts coverage. The insurers who rely on this software claim that they also review claims based on patient needs. But, former NaviHealth staff argue to the contrary. Mounting evidence suggests that the lives and health of some Medicare Advantage enrollees are endangered.

    Because CMS does not prevent insurers from using AI to deny Medicare Advantage coverage, members of Congress recognize the challenge for CMS to monitor the use of AI and ensure that claims are properly processed. “Absent prohibiting the use of AI/algorithmic tools outright, it is unclear how CMS is monitoring and evaluating MA plans’ use of such tools in order to ensure that plans comply with Medicare’s rules and do not inappropriately create barriers to care,” the members wrote.

    The insurers will always claim that AI is not making the denial decision, which is true. The insurers are. But, they appear to be exercising little if any independent judgment in many instances. So, the question remains whether the insurers are determining medical necessity based on the medical needs of their enrollees, as they should be. What’s clear is that though Medicare Advantage plans are legally required to provide the same coverage as traditional Medicare, they do not. 

    To help ensure appropriate oversight of the insurers’ use of AI, among other things, the members of Congress propose that CMS:

    • Require MA plans to report prior authorization data, including the reason for denial, the type of service, beneficiary characteristics (such as health conditions) and the timeliness of prior authorization decisions;
    • Compare the AI determinations against the MA plans’ actual determinations;
    • Assess whether the AI/algorithms are “self-correcting,” by determining whether, when a plan denial or premature termination of services is reversed on appeal, that reversal is then factored into the software so that it appropriately learns when care should be covered (a sketch of this feedback loop follows below).
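
    To make the “self-correcting” idea concrete, here is a small, entirely hypothetical sketch of the feedback loop the members describe: every denial later reversed on appeal becomes a corrected training label telling the software the care should have been covered. The names and data structures are illustrative assumptions, not any insurer’s actual system.

    ```python
    # Hypothetical sketch of a "self-correcting" coverage model:
    # appeal reversals become corrected labels for the next retraining.
    from dataclasses import dataclass

    @dataclass
    class Determination:
        case_id: str
        features: dict      # e.g., diagnosis, days of rehab requested
        denied: bool        # the software's original decision

    # Each entry pairs case features with the correct label:
    # True means "care should be covered."
    training_examples: list[tuple[dict, bool]] = []

    def record_outcome(det: Determination, reversed_on_appeal: bool) -> None:
        """Store the corrected label. A denial reversed on appeal becomes
        a positive example, so the model learns it should have covered it."""
        should_cover = (not det.denied) or reversed_on_appeal
        training_examples.append((det.features, should_cover))

    # Example: a rehab denial overturned on appeal is fed back as
    # "should have been covered" before the model is retrained.
    case = Determination(
        case_id="case-001",
        features={"service": "rehab", "days_requested": 42},
        denied=True,
    )
    record_outcome(case, reversed_on_appeal=True)
    ```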


  • UnitedHealth renames company responsible for massive inappropriate denials

    A while back, I reported on a story in Stat News that exposed a division of UnitedHealth, NaviHealth, that uses artificial intelligence (AI) to deny thousands of Medicare Advantage claims in seconds. Now, Stat News reports that UnitedHealth is renaming NaviHealth, with all the evidence pointing towards UnitedHealth continuing to deny claims en masse with the help of the renamed company. If you need a reason not to enroll in a Medicare Advantage plan or to disenroll from one, NaviHealth, by whatever new name, is as good as any.

    The original Stat News story explained that UnitedHealth, as well as many other health insurance companies, relies on NaviHealth, an AI system, in its medical decision-making to inappropriately deny care to people in Medicare Advantage plans. Former employees at NaviHealth report that its AI algorithms wrongly deny care to Medicare Advantage enrollees in serious ill health.

    Employees at NaviHealth complained in internal communications that insurers were denying care to people who are on IVs in rehab facilities. Medicare should cover up to 100 days in a rehab facility or nursing home for eligible individuals. But, NaviHealth sometimes determines that people need to leave rehab before their treating physicians believe that it is appropriate for them to do so. In 2022, the Office of the Inspector General of the Department of Health and Human Services reported widespread and persistent delays and denials of care in some Medicare Advantage plans, including denials of rehab and skilled nursing services.

    As Stat previously reported, insurance corporations use AI computer programs to deny care to Medicare Advantage enrollees with serious diseases and injuries when traditional Medicare would have covered the care. The NaviHealth system wrongly fails to consider individual patients’ needs in its determinations about when to stop covering care. Patients, physicians and NaviHealth workers are “increasingly distressed” that patients are not able to get the care they need as a result of these computer algorithms.


  • Can AI help with medical advice?

    A post by the Lown Institute asks whether AI (artificial intelligence) could replace your doctors and provide you with as good or better medical advice. It picks up on a JAMA Internal Medicine article reporting that an AI chatbot answered patient questions better than many doctors, with regard to both content and empathy.

    The JAMA article actually found that the AI chatbot offered as much as ten times more empathy than the typical doctor. And, empathy is a critical component of treatment, though our health care system tends not to value it.

    The researchers studied AI responses to 195 random patient questions found on social media. Then, the AI answers were compared with those of doctors. Licensed health care professionals preferred the AI answers to those of the doctors.

    But, can AI really build trust with patients? It can’t replace the personal connection people have with their doctors. How could it?

    At the same time, doctors are generally pressed for time. It’s hard for them to be compassionate when they are typically in a rush. They often cannot offer quality time to their patients; they might feel bad about that, even as the rushed care possibly harms patient well-being.

    The question becomes whether AI could offer a good support for patients in tandem with their doctors. It could supplement the care that doctors provide, even if it could never take the place of doctors.

    AI could help to deliver care that is empathic. AI also can help doctors with administrative tasks so that doctors have more time with patients. And, AI can answer some medical questions.


  • Artificial intelligence machines can prompt physicians to discuss end-of-life issues

    Stat News reports on how artificial intelligence machines are prompting physicians to discuss end-of-life issues with their terminally ill patients. At its best, artificial intelligence leads physicians to identify patients who need to consider advance care planning.

    Physicians do not have the same ability as artificial intelligence machines to recognize when their patients are terminally ill. Physicians can be pressed for time. And, like everyone else, they have a variety of biases that could keep them from recognizing which patients would benefit from advance care planning. Artificial intelligence machines, in sharp contrast, can review and process all the information in a patient’s electronic medical records objectively.

    Of course, that doesn’t mean that artificial intelligence machines are better than physicians at predicting whether a patient is terminally ill. And, the data suggests that these machines get it right less than half the time. But, as the machines improve, some believe they could be helpful.

    With the help of artificial intelligence, physicians might be prompted to have different conversations with patients than they would otherwise have. If patients are terminally ill, physicians would want to understand their goals and health care wishes. Without prompts from artificial intelligence machines, physicians might be far less likely to discuss end of life wishes with some patients.

    It can be hard for physicians to prioritize, especially when it comes to talking to their patients about end-of-life care. And timing is important. If physicians put off talking with their patients about their wishes, they might not be able to. Patients can lose their mental acumen and their ability to share their health care wishes.

    Of course, with artificial intelligence, ethical questions are at play. For example, how much should doctors be told about their patients? If artificial intelligence suggests a patient will die in three months, does the doctor need to know that? Should the patient know? What if the doctor disagrees with the artificial intelligence prediction?

    Until the accuracy of artificial intelligence machines improves (they are now no better than 45 percent accurate), most physicians will not be inclined to use them. Moreover, these machines predict the likelihood of death, not who is most likely to benefit from advance care planning. But, inevitably, there will come a time when many physicians rely heavily on artificial intelligence for their treatment decisions, perhaps for the good and perhaps not.
