New York RAISE Act: America's First Frontier AI Safety Law Awaits Governor's Signature
Executive Summary
New York's legislature passed the Responsible AI Safety and Education (RAISE) Act on June 12, 2025, with broad bipartisan support, making it the most significant AI safety legislation to reach a governor's desk since California's vetoed SB 1047. Currently awaiting Governor Kathy Hochul's signature, the RAISE Act takes a surgical approach to AI regulation, targeting only the world's most powerful "frontier models" that cost over $100 million to train. If signed, New York would become the first state to establish legally mandated transparency standards for advanced AI systems, focusing specifically on preventing catastrophic harms rather than broader algorithmic discrimination concerns.
Key Provisions Breakdown
Frontier Model Definition and Scope
The RAISE Act applies exclusively to "frontier models" defined as:
- AI models trained using more than 10^26 computational operations at costs exceeding $100 million
- AI models produced through knowledge distillation from frontier models at costs exceeding $5 million
- Coverage extends to any frontier models developed, deployed, or operated in New York
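The two thresholds above can be read as a simple decision rule. The sketch below is a hypothetical illustration only: the function name, parameters, and the assumption that compute and cost must both be exceeded for the primary test are simplifications of the statutory text, which also governs how training costs are measured.

```python
def is_frontier_model(training_ops: float,
                      training_cost_usd: float,
                      distilled_from_frontier: bool = False,
                      distillation_cost_usd: float = 0.0) -> bool:
    """Rough check of whether a model meets a RAISE Act frontier-model
    threshold (illustrative; not a legal determination)."""
    # Primary test: more than 10^26 computational operations at a
    # training cost exceeding $100 million.
    if training_ops > 1e26 and training_cost_usd > 100_000_000:
        return True
    # Secondary test: knowledge distillation from a frontier model
    # at a cost exceeding $5 million.
    if distilled_from_frontier and distillation_cost_usd > 5_000_000:
        return True
    return False
```

For example, a model trained with 2×10^26 operations at a $200 million cost would meet the primary test, while a $200 million model trained with only 10^25 operations would not.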
Critical Harm Prevention
The legislation focuses on preventing "critical harm" defined as:
- Death or serious injury to 100 or more people
- Economic damages of $1 billion or more
- Harm caused by AI systems used to create chemical, biological, radiological, or nuclear weapons
- AI systems engaging in autonomous criminal conduct without meaningful human intervention
Developer Obligations
- Establish comprehensive safety and security protocols before model deployment
- Publish redacted versions of safety protocols with trade secret protections
- Conduct ongoing testing for misuse, loss of control, and potential self-replication
- Submit to annual independent third-party safety reviews and audits
- Maintain detailed documentation for up to five years
Incident Reporting Requirements
- Report safety incidents to New York Attorney General within 72 hours
- Report incidents to New York Division of Homeland Security and Emergency Services
- Cover scenarios such as concerning AI model behavior and unauthorized access by bad actors
- Provide detailed incident analysis and mitigation measures
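An incident-response system will need to track the 72-hour clock explicitly. The sketch below assumes, for illustration, that the window runs from the time an incident is discovered; the statute's precise trigger (discovery versus occurrence) should be confirmed with counsel.

```python
from datetime import datetime, timedelta, timezone

# 72-hour notification window for the NY Attorney General and the
# Division of Homeland Security and Emergency Services.
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(discovered_at: datetime) -> datetime:
    """Latest permissible notification time, assuming the clock
    starts at incident discovery."""
    return discovered_at + REPORTING_WINDOW

def is_overdue(discovered_at: datetime, now: datetime) -> bool:
    """True if the reporting window has already closed."""
    return now > reporting_deadline(discovered_at)
```

Using timezone-aware timestamps (as above, with `timezone.utc`) avoids ambiguity when incidents are logged across offices in different time zones.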
Enforcement and Penalties
- Exclusive enforcement authority vested in the New York Attorney General
- Civil penalties up to $10 million for first violations
- Civil penalties up to $30 million for subsequent violations
- No private right of action
- Injunctive and declaratory relief available
Business Implications
Limited but High-Impact Scope
- Targeted application: Only affects companies like OpenAI, Google, Anthropic, and other frontier model developers
- Geographic reach: Covers any frontier model operations touching New York, regardless of company headquarters
- Cost thresholds: Designed to exclude startups, academic researchers, and smaller AI developers
Strategic Considerations
- National precedent: Success in New York likely influences federal regulation and other state approaches
- Industry response: Major tech companies and industry groups are lobbying against the bill
- Federal preemption risk: Congressional efforts to block state AI laws could affect implementation
- California comparison: Designed to avoid the pitfalls that led to SB 1047's veto
Operational Requirements
- Safety infrastructure: Requires sophisticated testing and monitoring capabilities
- Documentation burden: Extensive record-keeping and reporting obligations
- Third-party coordination: Must engage qualified independent auditors
- Incident response: Rapid reporting capabilities within 72-hour windows
Implementation Recommendations
Phase 1: Readiness Assessment (Now Through the Governor's Decision)
- Evaluate whether your AI models meet frontier model thresholds
- Assess current safety and security protocols against RAISE Act requirements
- Identify gaps in incident reporting and third-party audit capabilities
- Monitor Governor Hochul's decision timeline and potential amendments
Phase 2: Compliance Preparation (If Signed)
- Develop comprehensive safety and security protocols covering critical harm scenarios
- Establish relationships with qualified third-party auditors
- Create incident reporting systems for 72-hour notification requirements
- Prepare redacted public versions of safety protocols
Phase 3: Operational Implementation (90 Days Post-Signature)
- Implement ongoing testing and monitoring procedures
- Launch annual audit cycles with independent reviewers
- Train teams on incident identification and reporting procedures
- Establish five-year documentation retention systems
Critical Success Factors
- Early preparation: Begin compliance planning before final signature
- Industry coordination: Engage with other frontier model developers on best practices
- Legal expertise: Develop deep understanding of critical harm definitions and reporting triggers
- Technical infrastructure: Invest in sophisticated AI safety testing and monitoring capabilities
- Stakeholder engagement: Maintain relationships with New York regulators and enforcement agencies