New Year, New AI Rules: What Healthcare Organizations Need to Do Now

4 min read
January 16, 2026 at 3:08 PM

Several new state laws that directly govern how artificial intelligence is used and disclosed in healthcare settings took effect on January 1, 2026. States are moving faster than federal lawmakers, and they are placing practical requirements on organizations that develop, deploy, or rely on AI in healthcare.

If your organization uses AI for patient engagement, clinical decision support, documentation, utilization review, or marketing and communications, now is the right time to confirm what is in scope, what disclosures are required, and what evidence you will need to show that controls are operating as designed.

California: No More Pretending to Be a Doctor

California is targeting AI systems that may mislead patients into believing they are interacting with a licensed healthcare professional.

Effective January 1, 2026, AB 489 prohibits developers and deployers of AI systems from using terms, letters, phrases, or design elements that indicate or imply the AI possesses a healthcare license. The law also bars AI advertising or functionality that suggests care is being provided by a natural person with the appropriate license when it is not. Enforcement is notable because healthcare professional licensing boards have jurisdiction and may pursue injunctions under existing licensing law.

California also enacted SB 243, effective the same day, which regulates companion chatbots designed to provide ongoing interaction and emotional support. The law requires clear notification that users are interacting with AI. It also mandates protocols intended to prevent responses that could encourage self-harm or suicidal ideation, and to refer users to a crisis service provider when a user expresses suicidal ideation, suicide, or self-harm.

For healthcare organizations, this is not just a policy update. It affects product and workflow design, including chatbot naming, user interface language, disclaimers, escalation logic, and how interactions are logged.
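To make that concrete, the sketch below shows one way a single chatbot turn could be wrapped with an upfront AI disclosure, a crisis-referral escalation path, and an audit log entry. Everything in it, from the trigger terms and the crisis line referenced to the function and field names, is a hypothetical placeholder offered for illustration, not a compliant implementation or legal guidance.

# Minimal sketch of the controls described above, assuming a hypothetical
# chatbot wrapper. Trigger terms and wording are illustrative placeholders.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("chatbot_audit")

AI_DISCLOSURE = (
    "You are chatting with an automated assistant, not a licensed "
    "healthcare professional."
)

# Illustrative trigger terms; a real deployment would use a clinically
# reviewed and regularly re-evaluated detection approach.
CRISIS_TERMS = {"suicide", "suicidal", "self-harm", "hurt myself"}

def handle_message(session_id: str, user_text: str, generate_reply) -> str:
    """Apply disclosure, crisis escalation, and audit logging to one turn."""
    crisis_detected = any(term in user_text.lower() for term in CRISIS_TERMS)

    if crisis_detected:
        # Escalation path: refer to a crisis service rather than generate a reply.
        reply = (
            "It sounds like you may be going through something serious. "
            "Please contact the 988 Suicide & Crisis Lifeline (call or text 988) "
            "or your local emergency services."
        )
    else:
        reply = generate_reply(user_text)

    # Log enough detail to evidence that the control operated as designed.
    logger.info(json.dumps({
        "session_id": session_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "disclosure_shown": True,
        "crisis_escalation": crisis_detected,
    }))

    return f"{AI_DISCLOSURE}\n\n{reply}"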

Texas: New Disclosure Requirements for the Use of AI

Texas has enacted the Texas Responsible Artificial Intelligence Governance Act, signed into law in June 2025 and effective January 1, 2026. For healthcare, the key requirement is disclosure. Practitioners must provide patients, or their personal representatives, with conspicuous written disclosure of the provider’s use of AI in the diagnosis or treatment of the patient. This disclosure must occur before or at the time of interaction. In emergencies, disclosure must be provided as soon as reasonably practicable.

Texas SB 1188 became effective on September 1, 2025. It allows practitioners to use AI for diagnostic or treatment purposes provided the practitioner is acting within the scope of the practitioner’s license and personally reviews all AI-generated content or recommendations before a clinical decision is made. Like the broader Texas act, SB 1188 requires professionals to disclose AI use to patients.

In practice, this means disclosure cannot live only in general website language. It needs to be embedded into the point-of-care workflow, and you should be able to show evidence that it happened.
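One way to picture that evidence is a disclosure record captured in an intake or EHR workflow, as sketched below. The field names and sample values are illustrative assumptions for this post, not statutory language or any specific product's data model.

# A minimal sketch of documented evidence of disclosure, assuming a
# hypothetical intake or EHR integration. Field names are illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDisclosureRecord:
    patient_id: str            # or the personal representative who received it
    encounter_id: str
    ai_tool: str               # e.g., documentation assistant, triage support
    disclosure_method: str     # e.g., intake form, portal notice, verbal + note
    disclosed_at: datetime
    emergency_encounter: bool  # if True, disclosure may follow the encounter
    acknowledged: bool

def record_disclosure(store, record: AIDisclosureRecord) -> None:
    """Persist the disclosure so it can be produced during an audit."""
    store.append(asdict(record))

# Usage: captured before or at the time of the interaction.
audit_store: list[dict] = []
record_disclosure(audit_store, AIDisclosureRecord(
    patient_id="PT-1001",
    encounter_id="ENC-2026-0042",
    ai_tool="clinical documentation assistant",
    disclosure_method="intake form",
    disclosed_at=datetime.now(timezone.utc),
    emergency_encounter=False,
    acknowledged=True,
))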

AI Transparency: What Is Under the Hood

Beyond healthcare-specific rules, states are also imposing broader transparency requirements that can affect telehealth platforms, patient portals, and marketing operations with significant user bases.

California’s AI Transparency Act, SB 942, is effective January 1, 2026. It requires covered providers, defined as those with one million or more monthly users, to offer free tools that allow users to determine whether content was AI generated.

California AB 2013 requires AI developers to disclose information about the data used to train their generative AI systems. Even if you are not building models, you still need vendor answers about training data sources, bias testing protocols, validation controls, and ongoing performance monitoring. Do not assume a vendor owns compliance simply because the tool is purchased rather than built in-house.

The Virginia Model Hits the Midwest and New England

Consumer privacy laws taking effect in Indiana, Kentucky, and Rhode Island on January 1, 2026, follow the Virginia Consumer Data Protection Act model. These laws provide consumer rights to access, correct, delete, and port data, and they include opt-outs for targeted advertising, data sales, and profiling that produces legal or similarly significant effects. They also require data protection impact assessments for high-risk processing activities, including profiling.

These laws exempt protected health information and provide carve-outs for covered entities and business associates acting within the scope of HIPAA. The important nuance is that this is not a blanket exemption for a healthcare brand. It applies to the data and activities regulated by HIPAA, not to everything a healthcare organization does. Consumer apps, marketing workflows, wellness offerings, and analytics programs can sit outside HIPAA and still trigger privacy obligations.

A Wrench in the Works: The December 11 Executive Order

On December 11, 2025, President Donald Trump signed an executive order titled Ensuring a National Policy Framework for Artificial Intelligence. The order aims to establish a single national framework, and it signals potential federal opposition to certain state AI requirements.

The executive order does not immediately invalidate state laws that took effect on January 1, 2026. For now, these laws remain on the books. The right posture is to continue compliance preparations while monitoring federal developments.

What to Do Next

Start with controls that can be evidenced and audited.

  1. Audit patient-facing AI systems. Identify any tools that interact with patients and assess whether design or functionality could be interpreted as implying licensure or human oversight that does not exist.
  2. Implement disclosure protocols. If you operate in Texas, build workflows to ensure patients are informed of AI use in diagnosis or treatment before or at the point of care, and ensure the disclosure can be documented.
  3. Validate chatbot safety protocols. If you offer ongoing interaction or emotional support through conversational tools, ensure your notification, escalation, and crisis referral controls are defined, evaluated, and maintained.
  4. Upgrade AI vendor diligence. Update procurement questions and contracting expectations so you can obtain training data disclosures where applicable and confirm bias testing, validation controls, and monitoring practices.
  5. Assess privacy law applicability. Determine whether consumer data processing falls outside HIPAA scope and may trigger obligations under new state privacy laws.

We Are the Experts in AI Governance

Compass IT Compliance helps organizations implement practical AI governance that aligns to real workflows and audit expectations. Our Artificial Intelligence Governance Services begin by defining scope and engagement expectations, then move through risk assessments, tailored procedures, specialized training, and reporting that supports implementation.

If you are using AI in healthcare and want a defensible plan for disclosure, transparency, vendor diligence, and privacy alignment, we can help you scope requirements, prioritize gaps, and build an actionable compliance roadmap.

Contact us today to discuss an AI governance assessment and a compliance roadmap that fits your organization.
