
Locke Lord QuickStudy: Artificial Intelligence: The Expansion of Growth Opportunities for Health Insurers and Related Regulatory Hurdles

Locke Lord LLP
July 10, 2023

Stories about artificial intelligence applications and ChatGPT are all over the news, with daily reports of new uses and challenges. Given the mind-boggling speed at which generative artificial intelligence is developing, it is important to consider how AI is providing welcome growth opportunities for health insurers, as well as how insurance regulators are reacting.

Health insurers are using artificial intelligence (AI) in a multitude of ways to improve efficiencies, lower costs, increase responsiveness and enhance customer experiences, all with an underlying goal of growth. Health insurers are embracing the competitive advantages of AI as an increasingly profitable business model. AI is being used to analyze massive amounts of data, including electronic health records, clinical research trial data, and medical claims, to detect patterns and understand the data in ways that human skill sets cannot. These patterns can detect fraud, waste and abuse and speed treatment authorizations and claim adjudication, efficiencies that can save insurers billions annually. Health insurers are using AI outputs to improve network and claims operations, underwriting and pricing, and risk management. For example, some health insurers are using chatbots to improve the customer service experience and speed up claims adjudication. AI is also being used for telehealth and medication refills.

While AI provides many benefits and opportunities for health insurers and the consumer buying public, it also may produce outcomes or decisions that could be harmful to consumers, including unfair discrimination and unlawful bias. With such large amounts of data being used as inputs, if that data is unfairly discriminatory, then the model’s outputs also may be problematic. Thus, insurance regulators are working to keep pace with ways to regulate and monitor these AI applications in the health insurance industry.

The State of AI Insurance Regulation

Insurance regulators are diligently working to stay abreast of technological innovation. To regulate and monitor AI innovation, insurance regulators must first understand how AI impacts insurance consumers and then ensure that AI consumer protections are effective. For example, the NAIC has established a Big Data and Artificial Intelligence (H) Working Group that is focusing on three themes: (1) artificial intelligence (AI)/machine learning (ML) surveys; (2) tools for monitoring AI/ML; and (3) AI/ML regulatory frameworks and governance. In accordance with the states’ market conduct examination authority, a group of roughly two dozen insurance departments has already completed AI/ML surveys for private passenger auto and home insurance and is in the process of evaluating the survey responses provided by life insurers. The NAIC’s AI/ML surveys have requested information relating to: (1) the AI Model and Business Purpose; (2) Data Inputs; (3) Model Assumptions and Outcomes; (4) Model Testing/Validation; (5) Governance; and (6) Consumer Protection and Access. These surveys are being used to determine the most efficient way to regulate and monitor the industry’s use of AI. The NAIC’s work is continuing and is intended to result in a Model Act or Regulation.

In the interim, insurance regulators are relying on existing unfair and deceptive trade practices laws and regulations to protect the consumer buying public. These existing laws protect consumers from insurer AI practices that are misleading, deceptive, or unfairly discriminatory. Moreover, some states, for example Colorado and California, are promulgating guidance to address the industry’s use of AI. On June 6, 2021, Colorado enacted SB 21-169 (codified as Colo. Rev. Stat. § 10-3-1104.9), “Concerning Protecting Consumers from Unfair Discrimination in Insurance Practices.” The law prohibits insurers from unfairly discriminating “based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression” and, pursuant to rules adopted by the insurance commissioner, prohibits the use of “any external consumer data and information sources, as well as any algorithm or predictive models that use external consumer data information” that does the same. A year later, on June 30, 2022, the California Insurance Department issued Bulletin 2022-5, stating that insurers “must avoid both the conscious and unconscious bias or discrimination that can and often does result from the use of artificial intelligence, as well as other forms of ‘Big Data’ (i.e., extremely large data sets analyzed to reveal patterns and trends) when marketing, rating, underwriting, processing claims, or investigating suspected fraud relating to any insurance transaction that impacts California residents, businesses, and policyholders.” Looking around the corner, state insurance regulators clearly are keen to expand their specific authority to protect consumers from any illegal effects of AI.

AI Corporate Governance

Given the rapid innovation of technology and the slower pace of AI regulation in the health insurance industry, insurers’ own enterprise governance frameworks become all the more critical to anticipating, identifying and addressing the risks associated with the use of AI. Insurers focusing on accountability and transparency in their use of AI are creating frameworks for AI compliance. In doing so, insurers should consider enterprise-based written policies and procedures that address fairness, inclusion, transparency, and explainability. Accountability and responsibility include creating a cross-functional AI governance team, consisting of senior-level decision makers from legal, IT, finance, audit, marketing, and insurance operations, that reports to the board of directors or one of its committees. Insurers also should analyze the potential liability risks associated with the use of AI and take appropriate steps to mitigate those risks. For example, insurers should implement:

  • Ongoing testing and validation of their AI models;
  • Processes to detect bias;
  • Compliance programs to ensure adherence to applicable laws;
  • Policies and procedures to analyze risks and arrange for the purchase of appropriate insurance coverage; and
  • Policies and procedures to ensure prompt responses to regulators’ inquiries regarding their AI practices.

AI governance is intended to increase efficiencies and thus profitability. As the technology rapidly develops, so too should insurers’ ability to leverage AI to their competitive advantage while maintaining AI compliance and risk mitigation to protect insurers as well as their customers.

For questions about this topic, contact the authors of this article or your Locke Lord lawyer.
