Artificial Intelligence Regulations: State and Federal AI Laws 2026
AI regulations in the United States are evolving faster than most organizations can track. With no comprehensive federal law and a growing patchwork of state requirements, compliance has become a moving target.
This guide breaks down the current federal and state AI laws, industry-specific requirements, and practical steps to build a compliance program that keeps pace with regulatory change.
Defining Artificial Intelligence Laws and Regulations
AI laws and regulations are rules that govern how organizations develop, deploy, and use artificial intelligence systems. The United States does not have a single comprehensive federal AI law. Instead, regulation comes from a patchwork of state laws, federal agency guidance, and voluntary standards. Four themes run through nearly every AI regulation: transparency, bias prevention, data privacy, and accountability.
This fragmented approach creates real challenges. You might comply with one state's requirements while unknowingly violating another's. And because the landscape changes quickly, what works today may not work six months from now.
- Transparency requirements: Organizations disclose when AI generates content or makes decisions
- Bias and fairness rules: Regulations prevent discriminatory outcomes in hiring, lending, and other high-stakes areas
- Data privacy protections: Laws govern how AI systems collect and process personal information
- Safety and accountability standards: Rules establish liability and testing requirements for AI systems
Federal AI Regulation in the United States
No single federal AI law exists in the United States. Federal AI governance comes from executive orders, agency enforcement actions, and pending congressional legislation. Recent policy shifts prioritize innovation over broad restrictions.
Executive Orders on Artificial Intelligence
Executive Order 14179, issued in January 2025, reoriented U.S. AI policy toward promoting innovation. It revoked portions of the 2023 Biden administration AI executive order that emphasized safety testing and reporting requirements.
Here's the key distinction: executive orders guide federal agencies but do not create enforceable laws for private companies. They set the tone for how regulators approach AI oversight, but they don't directly regulate your business.
Congressional AI Legislation and Pending Bills
Congress continues to debate comprehensive AI legislation but has not passed a broad federal AI law. The AI LEAD Act, one of the key pending bills, would establish a product liability framework for AI systems. Under this framework, developers and deployers could face legal consequences when their AI causes harm.
Other bills address discrimination in automated decision-making, deepfake disclosure, and algorithmic accountability. The legislative landscape remains fluid, so tracking pending legislation helps organizations prepare for future requirements.
Federal Agency Guidance and Enforcement
Federal agencies use existing authority to regulate AI, even without AI-specific laws:
- Federal Trade Commission (FTC): Bans AI-generated fake reviews and enforces against deceptive AI practices
- Federal Communications Commission (FCC): Regulates AI-generated robocalls and voice cloning
- Securities and Exchange Commission (SEC): Monitors AI-related fraud and disclosure requirements
- Equal Employment Opportunity Commission (EEOC): Provides guidance on AI discrimination in employment decisions
State AI Laws Across the United States
States are filling the federal gap with their own AI legislation, with 45 states taking up AI-related bills in 2024. For organizations operating across state lines, this creates a patchwork of requirements to navigate.
Colorado AI Act
Colorado passed the first comprehensive state AI law in the United States. The Colorado AI Act requires deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination.
The law mandates impact assessments, transparency disclosures to consumers, and documentation of AI decision-making processes. It takes effect in February 2026.
Illinois Artificial Intelligence Video Interview Act
Illinois requires employers to notify candidates when AI analyzes video interviews. Candidates must provide consent before AI-based evaluation occurs.
The law also sets data retention and destruction requirements. Employers cannot keep AI-analyzed video indefinitely.
California AI Transparency and Employment Laws
California has enacted multiple AI-related laws addressing different use cases:
- AI Transparency Act (SB 942, AB 853): Requires AI providers to disclose when content is AI-generated
- Civil Rights Department AI regulations: Restrict discriminatory use of AI in employment decisions, effective October 2025
- Deepfake disclosure laws: Mandate labeling of AI-generated content in political advertising
Utah Artificial Intelligence Policy Act
Utah requires disclosure when consumers interact with generative AI. The state also established an AI learning laboratory for regulatory experimentation. This sandbox approach allows testing new AI governance models before implementing broader regulations.
Other States with AI Legislation
Several other states and cities have enacted targeted AI regulations:
- New York City: Local Law 144 requires bias audits for automated employment decision tools
- Connecticut: Requires AI impact assessments for state agencies
- Texas: Created an AI advisory council and requires state agency AI governance
- Maryland and New Jersey: Restrict AI use in hiring decisions
Existing U.S. Laws That Apply to AI Systems
Even without AI-specific laws, existing regulations apply to AI. Organizations cannot assume AI is unregulated simply because no dedicated AI law exists in their jurisdiction.
Data Privacy Laws
State privacy laws like the California Consumer Privacy Act (CCPA), Virginia Consumer Data Protection Act (CDPA), and Colorado Privacy Act apply to AI systems that process personal data. Each of these laws includes requirements for automated decision-making disclosures and opt-out rights.
Anti-Discrimination Statutes
Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA) apply to AI-driven employment decisions, while laws like the Fair Housing Act and Equal Credit Opportunity Act cover AI use in housing and lending. AI tools that produce discriminatory outcomes, even unintentionally, can create legal liability.
Consumer Protection Regulations
Section 5 of the FTC Act prohibits unfair or deceptive practices, and that prohibition applies to AI: misleading claims about AI capabilities or harmful AI outputs can trigger enforcement action.
Industry-Specific AI Regulations
Certain industries face additional AI compliance requirements beyond general laws.
AI in Healthcare
The Health Insurance Portability and Accountability Act (HIPAA) applies to AI systems handling protected health information (PHI). The Food and Drug Administration (FDA) provides oversight of AI-enabled medical devices and clinical decision support tools, requiring approval or clearance for many AI applications.
AI in Financial Services
Fair lending requirements like the Equal Credit Opportunity Act (ECOA) and Fair Housing Act apply to AI-based credit decisions. Banking regulators provide model risk management guidance, and the SEC has disclosure requirements for AI use in investment decisions.
AI in Employment and Hiring
Employment-specific AI rules include EEOC guidance on AI discrimination, state laws requiring bias audits like NYC Local Law 144, and notice and consent requirements for AI-based candidate screening.
AI Risk Categories and Regulatory Standards
The risk-based approach to AI regulation categorizes AI systems by potential harm. While the EU AI Act formalizes this approach, U.S. regulations increasingly adopt similar concepts.
What Makes an AI System High-Risk
High-risk AI includes systems that make or substantially influence decisions affecting employment, credit, housing, healthcare, or legal outcomes. Because of their potential impact on individuals' lives, high-risk systems face the strictest requirements.
Limited-Risk and Minimal-Risk AI
Lower-risk AI systems face fewer requirements. A chatbot, for example, typically only requires disclosure that the user is interacting with AI rather than a human. Minimal-risk systems like spam filters generally have no specific AI requirements.
Core AI Compliance Requirements
Common compliance obligations appear across multiple AI laws and regulations. Understanding the core themes helps you build governance that satisfies multiple requirements at once.
Transparency and Disclosure Obligations
Organizations must disclose AI use to consumers, employees, and other affected parties. Disclosure typically occurs before AI makes decisions and when content is AI-generated.
Algorithmic Impact Assessments
Impact assessments evaluate how AI systems may affect individuals, particularly regarding discrimination. The Colorado AI Act requires impact assessments for high-risk AI, and NYC Local Law 144 requires annual bias audits of automated employment decision tools. These assessments document potential harms and mitigation measures, helping organizations identify risks before they materialize.
Bias Testing and Fairness Requirements
Many regulations require testing AI systems for discriminatory outcomes across protected classes. Bias audits follow specific testing protocols and occur at regular intervals, often annually. Regular testing is particularly critical given research on AI screening tools: one study found that AI tools preferred white-associated names in 85.1% of cases.
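The impact ratio at the center of a bias audit under NYC Local Law 144 is simple arithmetic: each category's selection rate divided by the highest category's rate, with ratios below the EEOC's four-fifths (0.8) benchmark commonly treated as warranting review. A minimal sketch with made-up numbers (the group labels and counts below are illustrative, not real audit data):

```python
def impact_ratios(selections):
    """Compute per-category impact ratios from selection counts.

    selections maps category -> (selected_count, total_applicants); the
    impact ratio divides each category's selection rate by the highest rate.
    """
    rates = {cat: sel / total for cat, (sel, total) in selections.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit data: (candidates advanced by the tool, total screened)
ratios = impact_ratios({"group_a": (48, 120), "group_b": (30, 100)})
for cat, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{cat}: impact ratio {ratio:.2f} ({flag})")
```

Real audits go further, covering intersectional categories and score-based ratios for tools that rank rather than select; this sketch handles only the selection-rate case.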
Documentation and Recordkeeping
Organizations must maintain records of AI systems, training data, testing results, and decisions. Thorough documentation supports audit readiness and enables response to regulatory inquiries.
How to Build and Maintain AI Compliance
Building AI compliance in a fragmented regulatory environment requires a systematic approach. The following steps address multiple requirements simultaneously.
1. Inventory Your AI Systems
Start by cataloging all AI tools and systems in use, including third-party vendors. Identify which systems are high-risk based on their decision-making impact. Building a practical AI compliance program requires systematic documentation from the start.
2. Map Regulatory Requirements to Each System
Next, identify which laws apply to each system based on geography, industry, and AI use case. A single AI system may be subject to multiple overlapping requirements.
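Steps 1 and 2 can start as a lightweight inventory that maps each system's use case and geographic footprint to the obligations worth checking. A minimal sketch, with the caveat that every system name, jurisdiction, and rule entry below is an illustrative assumption, not legal guidance:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    use_case: str            # e.g., "hiring", "chatbot", "spam filter"
    jurisdictions: set[str]  # where the system affects people
    high_risk: bool          # substantially influences employment, credit, etc.

# Illustrative-only rule map: (use_case, jurisdiction) -> obligations to check
RULE_MAP = {
    ("hiring", "NYC"): ["LL144 annual bias audit", "candidate notice"],
    ("hiring", "IL"): ["video-interview consent", "data retention limits"],
    ("chatbot", "UT"): ["generative-AI disclosure"],
}

def obligations(system: AISystem) -> list[str]:
    """Collect every mapped obligation matching the system's use case and footprint."""
    found = []
    for juris in sorted(system.jurisdictions):
        found.extend(RULE_MAP.get((system.use_case, juris), []))
    if system.high_risk:
        found.append("impact assessment (where a law like the Colorado AI Act applies)")
    return found

screener = AISystem("resume-screener", "hiring", {"IL", "NYC"}, high_risk=True)
print(obligations(screener))
```

Even a spreadsheet version of this structure makes overlapping requirements visible: one hiring tool deployed in two states immediately surfaces multiple distinct obligations.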
3. Implement Required Controls
Build controls including transparency notices, impact assessments, bias testing, human oversight, and documentation. Where possible, design controls that satisfy multiple requirements.
4. Establish Continuous Monitoring
Point-in-time compliance is insufficient. AI systems can drift, regulations change, and new laws take effect. Ongoing monitoring, periodic reassessment, and audit trail maintenance keep you prepared, especially as over half of executives expect compliance costs to increase with tightening regulations.
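Continuous monitoring reduces, in its simplest form, to two recurring questions per control: did the last check pass, and is the supporting evidence still fresh? A minimal sketch of that loop (control names, dates, and cadences are hypothetical):

```python
from datetime import date, timedelta

def run_monitoring(controls, today):
    """Flag controls needing attention: failed checks or stale evidence."""
    findings = []
    for name, (passed, last_reviewed, max_age_days) in controls.items():
        if not passed:
            findings.append(f"{name}: control check failed")
        elif today - last_reviewed > timedelta(days=max_age_days):
            findings.append(f"{name}: evidence older than {max_age_days} days")
    return findings

# Hypothetical control states: name -> (last check passed?, evidence date, max age)
controls = {
    "bias-audit": (True, date(2025, 1, 15), 365),          # annual cadence
    "ai-disclosure-notice": (False, date(2025, 6, 1), 90),
}
print(run_monitoring(controls, today=date(2026, 3, 1)))
```

In practice this is the logic a GRC platform automates at scale, with evidence collected from integrations rather than entered by hand.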
Platforms like Drata help organizations automate control monitoring, maintain documentation, and demonstrate compliance across multiple frameworks, including emerging AI requirements. This enterprise GRC approach coordinates governance, risk management, and compliance under one unified program.
Simplify AI Compliance with Drata
Tracking multiple state laws, federal guidance, and industry requirements manually is unsustainable. The regulatory landscape changes too quickly.
Drata helps organizations manage AI compliance alongside existing framework requirements:
- Centralized compliance management: Track AI-related controls alongside SOC 2, ISO 27001, HIPAA, and other framework requirements
- Continuous monitoring: Automated evidence collection and control testing keep you audit-ready
- Risk management: Identify and track AI-related risks as part of your broader GRC program
- Documentation: Maintain the audit trails and records AI regulations require
Book a demo to see how Drata streamlines compliance for organizations navigating AI regulations.
Frequently Asked Questions About Artificial Intelligence Laws and Regulations
What Is the 30% Rule for AI?
The "30% rule" is not an official regulation. It refers to a common interpretation that AI-generated work requires substantial human contribution, often cited as at least 30% human input, to qualify for certain protections or meet disclosure thresholds under various AI transparency laws.
Does the United States Have a Comprehensive Federal AI Law?
No. The U.S. does not have a single comprehensive federal law regulating AI. Federal AI governance currently relies on agency enforcement under existing laws, executive orders, and voluntary guidelines. Congress continues to debate broader legislation.
Which U.S. States Have the Most Comprehensive AI Laws?
Colorado has the most comprehensive state AI law with the Colorado AI Act, effective 2026. California follows with multiple AI transparency and employment laws. Illinois, New York City, and several other states have enacted targeted AI regulations for specific use cases like hiring.
How Do U.S. AI Regulations Compare to the EU AI Act?
The EU AI Act is a comprehensive, binding regulation that applies across all EU member states with strict requirements for high-risk AI. The U.S. has no equivalent federal law. Instead, the U.S. relies on a fragmented patchwork of state laws, agency guidance, and voluntary standards.
Navigate AI x GRC With Confidence
Get a Demo