The Colorado legislature passed the Colorado Artificial Intelligence Act (SB 205 or the Act) on May 8, 2024. If approved by Gov. Jared Polis, it will be the first law in the U.S. to impose specific requirements on both developers and deployers of artificial intelligence (AI), across a variety of use cases, intended to mitigate the risk of "algorithmic discrimination."
SB 205 is one of a number of similar bills under consideration by state legislatures, and it could create a model for such legislation going forward. Utah recently enacted the Utah Artificial Intelligence Policy Act, which took effect May 1, 2024, and has limited requirements around identifying generative AI systems when used to respond to consumer requests or when used by certain regulated occupations such as physicians. SB 205 goes further, potentially creating a new duty of care and rules around risk management, documentation and notification in the event a "high-risk" AI system causes algorithmic discrimination.
Key Takeaways
The Act will impose a number of new requirements related to AI and algorithmic discrimination, including:
- SB 205 would regulate the development and deployment of AI where used as a significant factor in decision-making in an enumerated set of situations considered to be "consequential."
- Businesses developing or deploying "high-risk" AI systems will need to take reasonable care to prevent algorithmic discrimination.
- The most significant impact on many businesses will be on the use of AI in employment matters.
- SB 205 would apply to many businesses that otherwise fall outside the scope of the Colorado Privacy Act (CPA), including many healthcare organizations, financial institutions and businesses processing lower volumes of personal data.
- A requirement to notify the Colorado attorney general (AG) when algorithmic discrimination is discovered has the potential to increase the risk of investigations and trigger requests for copies of some of the extensive documentation required by the Act.
- Both SB 205 and the CPA require transparency through notices and impact assessments, but some of the details regarding what must be included vary. SB 205 does not contain an opt-out right, meaning the use cases not covered by the CPA (such as employment) are not subject to consumer opt-out.
What Uses of AI Are Covered?
SB 205 would regulate the development and deployment of AI where used as a significant factor in decision-making in an enumerated set of situations considered to be "consequential." These include employment, education, lending, financial services and healthcare, and such AI systems are referred to as "high-risk" AI systems. The Act also requires labeling of AI systems that interact with individuals, whether or not the system is "high-risk."
What Is Algorithmic Discrimination?
"Algorithmic discrimination" is a use of AI that "results" in "unlawful differential treatment or impact" that disfavors an individual or group based on a protected classification, specifically: age, color, disability, ethnicity, genetic information, English proficiency, national origin, race, religion, reproductive health, sex, veteran status or another classification protected under law.
What Businesses Must Comply with SB 205?
The Act's key requirements apply to "developers" and "deployers" of "high-risk" AI systems. A "developer" is a person doing business in Colorado who develops or intentionally and substantially modifies an AI system. A "deployer" is a person or entity doing business in Colorado who utilizes a "high-risk" AI system.
There are no revenue or data volume thresholds, though small businesses acting as deployers may be exempt from certain requirements in limited situations. Unlike some "comprehensive" privacy laws, the Act contains no entity-level exemptions for healthcare or financial institutions and no exemptions for employee or business contact data, and although certain regulated activities fall outside the scope, those exemptions are not as expansive as those typically found in privacy laws.
What Are the Key Requirements of the Act?
Requirements for "High-Risk" AI Systems
Labeling of AI Systems
A developer or deployer of an AI system intended to interact with individuals is required to disclose to the individual that they are interacting with AI unless it would be obvious to a reasonable person. Unlike most of the Act's other terms, these labeling requirements are not restricted to "high-risk" AI systems.
Enforcement
SB 205 does not provide for a private right of action. Instead, the Act would be enforced by the Colorado AG, with offenses constituting unfair trade practices under Colo. Rev. Stat. Section 6-1-105, punishable by fines of up to $20,000 per violation. There is a safe harbor for businesses that discover and cure a violation as a result of their own actions (rather than complaints) and that follow specified AI frameworks such as NIST.
Next Steps
Gov. Polis has not confirmed whether he intends to sign the legislation. Notably, a nearly identical bill in Connecticut failed to pass after Gov. Ned Lamont threatened to veto it over concerns the legislature was moving too fast. To address those concerns, the Colorado legislature adopted a separate bill creating a working group to study AI and biometric regulation, meaning that there is some likelihood that amendments to the Act will be proposed. If enacted, the law will take effect on Feb. 1, 2026.
The Colorado AG is granted rulemaking authority. Although not mandatory, potential areas of rulemaking include: 1) developer documentation, 2) notice, 3) risk management, 4) impact assessments and 5) the Act's rebuttable presumptions and affirmative defenses. See Section 6-1-1607. Given the attention the Colorado AG gave to rulemaking under the CPA, it would not be surprising to again see thorough consideration of potential regulations.