California bill would form new AI regulation division
To address pressing concerns surrounding artificial intelligence, California state Sen. Scott Wiener recently proposed a bill that would designate a unit responsible for AI regulation throughout the state, which is home to many of the world’s most prominent software firms.
If passed, the Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act would create a new office in the California Department of Technology called the Frontier Model Division, tasked with strengthening AI enforcement, such as ensuring mandatory testing for large AI models before they reach users.
“When we’re talking about safety risks related to extreme hazards, it’s far preferable to put protections in place before those risks occur as opposed to trying to play catch up,” Wiener told ABC News last week. “Let’s get ahead of this.”
Artificial intelligence, especially generative AI, offers state and local government offices the opportunity to improve digital services, automate administrative tasks, and increase office efficiency. However, adopting the rapidly advancing technology also opens the door to new ethical questions and risks, including cybersecurity vulnerabilities and disruptions to the government workforce.
Wiener’s bill would also require every major AI system to have a built-in emergency off-switch in case something goes wrong, as well as protections against hacking.
Though the legislation would strengthen AI regulations throughout California and potentially influence policies nationwide, the bill does not specify how the new division would coordinate with the state’s ongoing AI policy work or with its top technology officials, such as statewide Chief Information Officer Liana Bailey-Crimmins.
Spokespeople from both the California Department of Technology and California Government Operations Agency declined to comment for this story.