Tag: AI

  • AI can second-guess physicians and improve care

    Regardless of what we think about the benefits of using artificial intelligence (AI) for medical treatment, physicians are now using AI in a variety of ways. One way that AI can benefit patients is by second-guessing physicians. Alvin Powell reports for the Harvard Gazette on how AI is beginning to reduce human suffering.

    Sometimes, patients have conditions so rare that physicians are hard-pressed to diagnose them properly. But, if the physician enters the symptoms into a well-functioning large language model (LLM), the AI can answer a wide range of questions. And, the patient can get treated appropriately.

    Powell gives the example of a physician seeing a rare condition in a child. The physician asked AI a range of questions about the “genetic causes, biochemical pathways, next steps in the workup, even what to tell the child’s parents.” AI made the physician’s job efficient and effective.

    To be clear, AI cannot work alone. We still need physicians. But, lay people can get helpful answers on medical conditions. And, AI can successfully second-guess physicians. For example, AI diagnosed a child’s pain correctly after 17 physicians had failed to do so over three years.

    The child’s mom gave ChatGPT all of her child’s medical notes. And, ChatGPT correctly determined that the child’s spinal cord was attached to his backbone. With that accurate diagnosis, a surgeon could treat the problem.

    AI also should allow physicians to see more patients, by helping with patients’ histories, possible diagnoses and more. In one experiment, doctors who entered all the information into an LLM and left the diagnosis to the AI had a 90 percent accuracy rate, compared with a 74 percent accuracy rate for doctors not relying on AI at all.

    One physician with AI expertise explains: “The best way a doctor could use it now is for a second opinion, to second-guess themselves when they have a tricky case. How could I be wrong? What am I missing? What other questions should I ask?”

    But, physicians can also use AI to understand and prevent harmful drug interactions in the hospital. Today, as many as one in four hospitalized patients in Massachusetts suffer from bad side effects. With AI, many of these could be avoided.

    And, physicians can use AI for good note-taking while speaking with a patient. Rather than having the physician stare into a computer as they discuss the patient’s condition, AI can do the work, writing up and summarizing the clinical notes. The physician need only review the notes for accuracy, easing the physician’s workload.

    Still, AI can spit out bad information. So, physicians who rely on it need to be mindful.

  • Does UnitedHealth use flawed AI to deny care in Medicare Advantage?

    Bob Herman reports for Stat News that a case against UnitedHealth for using flawed AI algorithms to deny care to Medicare Advantage enrollees is making its way through the US District Court in Minnesota. (A similar suit has been filed against Humana, which uses the same AI system as UnitedHealth.) Will the judge agree that the artificial intelligence (AI) system is flawed and remedy the problem? First, the judge must find that he has the authority to rule on the issue.

    UnitedHealth has moved to dismiss the case on the grounds that plaintiffs have not worked their way through the lengthy appeals process and that federal Medicare law preempts state law. The judge will decide on that motion as early as this month. But, the stories of enrollees denied basic, critical care by UnitedHealth abound. One older man, Frank Perry, needed rehab care to regain his strength after a brain disorder that caused him to fall repeatedly, landing him in the hospital each time. He couldn’t get it.

    UnitedHealth would only approve skilled nursing home care for Perry, which is less costly than rehab care. Moreover, UnitedHealth approved that nursing care for only two weeks, even though Medicare covers this care for up to 100 days when medically reasonable and necessary. Perry kept challenging the denials, but he died before his case was resolved.

    UnitedHealth says it does not rely exclusively on AI to deny care. But, Stat got hold of UnitedHealth materials that contradict that claim. For sure, people who appeal win more than four out of five times, suggesting that many denials are inappropriate. Unfortunately, most people don’t know to appeal or how easy it is to do so.

    Keep in mind that people enrolled in traditional Medicare do not face these barriers to care. As a general rule, they get the care they need when they need it.

  • Should AI help with end-of-life decisions?

    Artificial intelligence, or AI, is on the rise. If you haven’t yet tried using AI, check out Perplexity or ChatGPT. It boggles the mind how quickly they can answer your questions about just about anything, including drafting a research paper, writing a poem and explaining health care options. The Harvard Gazette explores the question of whether we should want AI to help patients and health care providers with end-of-life decision-making.

    For sure, AI has become better than some physicians at diagnosing patients’ conditions and arriving at a prognosis. In addition, hospitals and medical clinics use AI to analyze test results. Large-language models now permit AI to advise on patient care. Patients will make the ultimate decision.

    Of course, some patients are not competent to guide providers as to their end-of-life choices. And, some situations are fluid, depending upon the patient’s condition or even the time of day.

    How could AI help a patient at the end of life? AI could explain what patients could expect. It could describe, in great detail, the possible physical limitations that come with a diagnosis, the pain, the possibilities for treatment and more. Its advice would not be emotionally laden.

    If a patient at the end of life could not speak for himself or herself, AI would perhaps have a more objective perspective on the patient and the patient’s wishes than providers or family members.

    In theory, AI could provide better advice than a physician about a patient’s chance of survival from a particular treatment. That advice would not dictate a particular outcome. AI probably should not be determining what a patient should do. That decision should remain between patient and doctor.

    When there’s no doctor available, AI could provide some patient care. Could AI deliver care in compassionate ways? How would that affect the patient’s health outcome?

  • Proposed Medicare Advantage rule aims to limit bad insurer behavior

    Last week, the Centers for Medicare and Medicaid Services (CMS), which oversees Medicare, proposed a new rule intended to limit some of the many insurance company bad acts, reports Rebecca Pifer for HealthcareDive. Unfortunately, Medicare Advantage plans all too frequently delay and deny people’s care inappropriately, notwithstanding CMS rules. To protect MA enrollees, the government should penalize insurers who violate their obligations, with penalties severe enough to deter bad acts; without strict penalties, more rules are unlikely to be of much help.

    The CMS proposed rule strives to address five of the biggest concerns with Medicare Advantage. The Trump administration will have the power to decide which, if any, of these proposals will be finalized.

    • Insurers’ use of artificial intelligence to deny care without consideration of patient needs. The rule is designed to make insurers’ coverage policies transparent to MA enrollees. Insurers sometimes use artificial intelligence to engage in across-the-board denials of care, even when care is urgently needed. MA insurers use AI particularly to deny care for people with costly and complex conditions, such as people with cancer and people needing rehabilitation services. New CMS data reveals that more than 80 percent of denials are overturned on appeal, but only four percent of people appeal. The proposed rule also would require insurers to notify enrollees about their appeal rights.
    • Insurers’ publication of inaccurate provider directories that misrepresent which physicians and hospitals are in network. The rule strives to ensure that provider directories do not mislead enrollees, as they too often do.
    • Insurers’ misleading marketing. The rule strives to protect enrollees from misleading marketing.
    • Insurers’ coverage of supplemental benefits. The rule aims to ensure that enrollees are fully aware of these benefits and their limitations.
    • Insurers’ reporting of how much money they spend on patient care rather than administration and profits. Insurers are legally required to spend at least 85 percent of the money they are paid to cover enrollees on patient care. But, many appear to find ways to spend a lot less.

    In addition, if finalized, the proposed rule would for the first time require Medicare to cover weight-loss drugs for people who are obese, even if they don’t have other health conditions.

  • UnitedHealth claims enrollees cannot challenge inappropriate care denials in court

    Several months ago, Stat News exposed a common practice at UnitedHealthcare and other big insurers: issuing large numbers of Medicare coverage denials through the use of AI. Bob Herman now reports for Stat News that UnitedHealthcare claims a judge should dismiss a class action lawsuit against it because enrollees did not exhaust administrative remedies for appealing denials.

    UnitedHealth is able to deny people coverage in Medicare Advantage with impunity and profit from its failure to comply with Medicare coverage rules. It knows that only a small fraction of people will appeal denials, so it can save money by not paying for care. It also knows that the Centers for Medicare and Medicaid Services, which oversees Medicare, does not have the resources to adequately oversee MA plans or the power to impose meaningful penalties on insurers when they violate their contracts and deny care inappropriately.

    So UnitedHealth allegedly denied thousands of Medicare Advantage enrollees’ rehab therapy using an algorithm, without regard to the individual needs of its enrollees. And, now it’s claiming that the enrollees’ class action lawsuit over these denials should be dismissed because the vast majority of them did not exhaust their full appeal rights. (In addition, UnitedHealth claims that federal law protects insurers from these lawsuits; it argues that enrollees must sue the Department of Health and Human Services.)

    UnitedHealth blames the federal government for its enrollees’ plight, a novel argument. If the appeals process were swifter, UnitedHealth claims, plaintiffs would not be suing.

    The reality, of course, is that older, vulnerable patients should not have to appeal inappropriate denials of necessary care; they should not face these denials in the first place. They wouldn’t have to if UnitedHealth considered their individual needs in making coverage determinations and put those needs above its shareholders’ interests. But, UnitedHealth’s shareholders appear to come first, and that means Medicare Advantage enrollees might not get the Medicare benefits to which they are entitled.

    We will know soon whether the judge in the lawsuit agrees with UnitedHealth that plaintiffs’ claims should be dismissed because they did not exhaust their administrative remedies. Plaintiffs say that had they done so, they would have suffered irreparable harm. They needed care quickly and couldn’t afford to pay for it out of pocket.

    The government designed the Medicare Advantage program with a major payment system defect. It pays the insurers upfront to deliver Medicare benefits, and what the insurers don’t spend on care they largely get to keep. So, they have a powerful incentive to deny care inappropriately.

  • CMS can’t oversee AI denials in Medicare Advantage

    Congressman Jerry Nadler, Congresswoman Judy Chu and 28 other House members recently sent a letter to the Centers for Medicare and Medicaid Services (CMS) urging CMS to assess AI denials in Medicare Advantage. If only CMS could do so effectively and in a timely manner. Not only does CMS lack the resources to do the requisite oversight at the moment, but when it finds Medicare Advantage plans are inappropriately denying care through AI, CMS appears to lack the power to punish the insurers in a meaningful way.

    Bottom line: It seems unlikely that CMS can rein in the Medicare Advantage plans’ use of AI to deny claims at eye-popping rates, even when the insurers offering Medicare Advantage plans deny care without the consideration of enrollees’ particular conditions that the rules require.

    In their letter to CMS, the members of Congress express concern about whether insurers are complying with the Medicare Advantage and Part D prescription drug prior authorization requirements in CMS’ 2024 final rule.

    What’s happening exactly? NaviHealth, myNexus and CareCentrix provide Medicare Advantage plans with software that restricts coverage based on artificial intelligence. The insurers who rely on AI claim that they also review claims based on patient needs. But, former NaviHealth staff say otherwise. Mounting evidence suggests that the lives and health of some Medicare Advantage enrollees are endangered.

    Because CMS does not prevent insurers from using AI to deny Medicare Advantage coverage, members of Congress recognize the challenge for CMS to monitor the use of AI and ensure that claims are properly processed. “Absent prohibiting the use of AI/algorithmic tools outright, it is unclear how CMS is monitoring and evaluating MA plans’ use of such tools in order to ensure that plans comply with Medicare’s rules and do not inappropriately create barriers to care,” the members wrote.

    The insurers will always claim that AI is not making the denial decision, which is true. The insurers are. But, they appear to be exercising little if any independent judgment in many instances. So, the question remains whether the insurers are determining medical necessity based on the medical needs of their enrollees, as they should be. What’s clear is that though Medicare Advantage plans are legally required to provide the same coverage as traditional Medicare, they do not. 

    To help ensure appropriate oversight of the insurers’ use of AI, among other things, the members of Congress propose that CMS:

    • Require MA plans to report prior authorization data including reason for denial, by type of service, beneficiary characteristics (such as health conditions) and timeliness of prior authorization decisions;
    • Compare the AI determinations against the MA plans’ actual determinations; and
    • Assess whether the AI/algorithms are “self-correcting,” by determining whether, when a plan denial or premature termination of services is reversed on appeal, that reversal is then factored into the software so that it appropriately learns when care should be covered.

  • UnitedHealth renames company responsible for massive inappropriate denials

    A while back, I reported on a story in Stat News that exposed a division of UnitedHealth, NaviHealth, that uses artificial intelligence (AI) to deny thousands of Medicare Advantage claims in seconds. Now, Stat News reports that UnitedHealth is renaming NaviHealth, with all the evidence pointing towards UnitedHealth continuing to deny claims en masse with the help of the renamed company. If you need a reason not to enroll in a Medicare Advantage plan or to disenroll from one, NaviHealth, or whatever its new name, is as good as any.

    The original Stat News story explained that UnitedHealth, as well as many other health insurance companies, relies on NaviHealth, an AI system, in its medical decision-making to inappropriately deny care to people in Medicare Advantage plans. Former employees at NaviHealth report that its AI algorithms wrongly deny care to Medicare Advantage enrollees with serious health conditions.

    Employees at NaviHealth complained in internal communications that insurers were denying care to people who were on IVs in rehab facilities. Medicare should cover up to 100 days in a rehab facility or nursing home for eligible individuals. But, NaviHealth sometimes determines that people need to leave rehab before their treating physicians believe it is appropriate for them to do so. In 2022, the Office of the Inspector General of the Department of Health and Human Services reported widespread and persistent delays and denials of care in some Medicare Advantage plans, including denials of rehab and skilled nursing services.

    As Stat previously reported, insurance corporations use AI (computer programs) to deny care to Medicare Advantage enrollees with serious diseases and injuries, when traditional Medicare would have covered the care. The NaviHealth system wrongly fails to consider individual patients’ needs in its determinations about when to stop covering care. Patients, physicians and NaviHealth workers are “increasingly distressed” that patients are not able to get the care they need as a result of these computer algorithms.

  • Cigna sued in California for denying coverage 300,000 times in two months

    Corporate health insurers’ use of AI to deny coverage is too often killing and disabling people. People in Medicare Advantage, people in state health insurance exchanges and people with job-based coverage are all at risk. Now, Axios reports that a class of people is suing Cigna for using computer software to “deny payments in batches of hundreds or thousands at a time.” Why not? It maximizes Cigna’s profits, and Cigna has so far been able to get away with it.

    Mounting evidence shows that corporate insurers offering Medicare Advantage plans too often deny costly and critical care, including nursing home stays, rehab, home care and hospital care. This is care they are paid to cover and that traditional Medicare covers.

    The Clarkson law firm filed the lawsuit in California, claiming that Cigna is violating state law. Under California law, Cigna is supposed to review insurance claims thoroughly and fairly. Relying on computer algorithms is clearly at odds with that requirement. It’s hard to believe that a judge could find that a speedy computer review of a claim could be fair and thorough. But, these days, anything’s possible.

    The lawsuit claims that Cigna’s AI system denied 300,000 requests for authorization over two months in 2022. The system spent an average of 1.2 seconds on each claim. Thorough? Fair? One Cigna medical director, Cheryl Dopke, denied 60,000 claims in one month. Thorough? Fair? Hardly. California law requires individual review. And four out of five denials that were appealed were overturned.

    Use of AI is the latest way health insurance corporations can inexpensively and swiftly turn a huge profit. Who’s designing the computer software algorithms? What’s their goal? As many denials as possible is what’s in Cigna’s economic interest. You have to wonder what questions Cigna asks about the algorithms before buying the software.

    Even some Republicans in Congress appear concerned, including House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.). She wrote Cigna for an explanation. Members of Congress appear to appreciate that people in Medicare Advantage and Medicaid are at risk of wrongful denials. But, what are she and her fellow members of Congress willing to do about it?

  • Can AI help with medical advice?

    A post by the Lown Institute asks whether AI (artificial intelligence) could replace your doctors and provide you with as good or better medical advice. It picks up on a JAMA Internal Medicine article reporting that an AI chatbot answered patient questions better than many doctors, with regard to both content and empathy.

    The JAMA article actually found that the AI chatbot offered as much as ten times more empathy than your typical doctor. And, empathy is a critical component of treatment, though our health care system tends not to value it.

    The researchers studied AI responses to 195 random patient questions found on social media. Then, the AI answers were compared with those of doctors. Licensed health care professionals preferred the AI answers to those of the doctors.

    But, can AI really build trust with patients? It can’t replace the personal connection people have with their doctors. How could it?

    At the same time, doctors are generally pressed for time. It’s hard for them to be compassionate when they are typically in a rush. They often cannot offer quality time to their patients; they might feel bad about that, even as it possibly harms patient well-being.

    The question becomes whether AI could offer good support for patients in tandem with their doctors. It could supplement the care that doctors provide, even if it could never take the place of doctors.

    AI could help to deliver care that is empathic. AI also can help doctors with administrative tasks so that doctors have more time with patients. And, AI can answer some medical questions.
