May 31, 2024
Concerning consumer protections in interactions with artificial intelligence systems.
SESSION: 2024 Regular Session
SUBJECTS: Business & Economic Development; Labor & Employment; Telecommunications & Information Technology
BILL SUMMARY
The bill requires a developer of a high-risk artificial intelligence system (high-risk system) to use reasonable care to avoid algorithmic discrimination in the high-risk system. There is a rebuttable presumption that a developer used reasonable care if the developer complied with specified provisions in the bill, including:
- Making available to a deployer of the high-risk system a statement disclosing specified information about the high-risk system;
- Making available to a deployer of the high-risk system information and documentation necessary to complete an impact assessment of the high-risk system;
- Making a publicly available statement summarizing the types of high-risk systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer and how the developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of each of these high-risk systems; and
- Disclosing to the attorney general and known deployers of the high-risk system, within 90 days after the discovery or receipt of a credible report from a deployer, any known or reasonably foreseeable risk of algorithmic discrimination that the high-risk system has caused or is reasonably likely to have caused.
The bill also requires a deployer of a high-risk system to use reasonable care to avoid algorithmic discrimination in the high-risk system. There is a rebuttable presumption that a deployer used reasonable care if the deployer complied with specified provisions in the bill, including:
- Implementing a risk management policy and program for the high-risk system;
- Completing an impact assessment of the high-risk system;
- Annually reviewing the deployment of each high-risk system deployed by the deployer to ensure that the high-risk system is not causing algorithmic discrimination;
- Notifying a consumer of specified items if the high-risk system makes a consequential decision concerning a consumer;
- Providing a consumer with an opportunity to correct any incorrect personal data that a high-risk artificial intelligence system processed in making a consequential decision;
- Providing a consumer with an opportunity to appeal, via human review if technically feasible, an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system;
- Making a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys, how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of these high-risk systems, and the nature, source, and extent of the information collected and used by the deployer; and
- Disclosing to the attorney general, within 90 days after the discovery, any algorithmic discrimination that the high-risk system has caused or is reasonably likely to have caused.
A developer of a general purpose artificial intelligence model (general purpose model) is required to create and maintain specified documentation for the general purpose model, including:
- A policy to comply with federal and state copyright laws; and
- A detailed summary concerning the content used to train the general purpose model.
A developer of a general purpose model must create, implement, maintain, and make available to deployers who intend to integrate the general purpose model into their artificial intelligence systems, documentation and information that:
- Enables the deployers to understand the capabilities and limitations of the general purpose model;
- Discloses the technical requirements for the general purpose model to be integrated into the deployers’ artificial intelligence systems;
- Discloses the design specifications of, and training processes for, the general purpose model, including the training methodologies and techniques for the general purpose model;
- Discloses the key design choices for the general purpose model, including the rationale and assumptions made;
- Discloses what the general purpose model is designed to optimize for and the relevance of the different parameters, as applicable; and
- Provides a description of the data that was used for purposes of training, testing, and validation, as applicable.
If an artificial intelligence system, including a general purpose model, generates or manipulates synthetic digital content, the bill requires the developer to:
- Ensure that the outputs of the artificial intelligence system are marked in a machine-readable format and detectable as synthetic digital content; and
- Ensure that the developer’s technical solutions are effective, interoperable, robust, and reliable.
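The bill requires machine-readable, detectable marking of synthetic content but does not prescribe a format (industry standards such as C2PA provenance manifests are one approach). As a minimal illustrative sketch only, with entirely hypothetical field names, a developer might attach a structured provenance record to each generated output:

```python
import hashlib
import json
from datetime import datetime, timezone

def mark_synthetic(content: str, model_name: str) -> dict:
    """Attach a machine-readable provenance record to generated text.

    Illustrative sketch only: the bill does not specify a schema, and
    every field name here is a hypothetical placeholder.
    """
    return {
        "content": content,
        "provenance": {
            # Flags the content as artificially generated, in a form
            # downstream tools can detect programmatically.
            "synthetic": True,
            "generator": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            # A content hash lets a verifier detect tampering with
            # the marked output.
            "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        },
    }

record = mark_synthetic("Example output text.", "example-model-v1")
print(json.dumps(record, indent=2))
```

A real implementation would need to survive format conversion and be interoperable across tools, which is why standardized manifests or watermarking schemes are typically used rather than ad hoc metadata like this.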
A person doing business in this state, including a deployer or other developer, that deploys or makes available an artificial intelligence system that is intended to interact with consumers must ensure that the artificial intelligence system discloses to each consumer who interacts with the artificial intelligence system that the consumer is interacting with an artificial intelligence system. If an artificial intelligence system, including a general purpose model, generates or manipulates synthetic digital content, the bill requires the deployer of the artificial intelligence system to disclose to a consumer that the synthetic digital content has been artificially generated or manipulated.
The attorney general has exclusive authority to enforce the bill. During the period from July 1, 2025, through June 30, 2026, the attorney general, prior to initiating any action for a violation of the provisions of the bill, shall issue a notice of violation to the alleged violator and, if the attorney general determines that a cure is possible, provide the alleged violator 60 days to cure the violation before bringing an enforcement action. The bill does not restrict a developer’s or deployer’s ability to engage in specified activities, including:
- Complying with federal, state, or municipal laws, ordinances, or regulations;
- Cooperating with and conducting specified investigations;
- Taking immediate steps to protect an interest that is essential for the life or physical safety of a consumer; and
- Conducting and engaging in specified research activities.
The bill provides an affirmative defense for a developer or deployer if:
- The developer or deployer of a high-risk system or generative system involved in a potential violation has implemented and maintained a program that is in compliance with a nationally or internationally recognized risk management framework for artificial intelligence systems that the bill or the attorney general designates; and
- The developer or deployer takes specified measures to discover and correct violations of the bill.
The bill grants the attorney general rule-making authority to implement and enforce the requirements of the bill.
(Note: This summary applies to the reengrossed version of this bill as introduced in the second house.)