Big Tech group calls for California high-risk AI bill to narrow its scope

The Business Software Alliance, a global trade association representing large technology companies such as Microsoft, Oracle and Salesforce, is calling for California Assembly Bill 1018, which would regulate high-risk uses of artificial intelligence, to be narrowed in scope and to include more precise definitions.
Also known as the Automated Decisions Safety Act, the bill sets new rules for how artificial intelligence and other automated-decision systems are used in situations that significantly affect people’s lives, such as housing, employment, health care, credit, education and law.
If passed, the bill would, starting in 2027, require AI developers (the companies that create or significantly modify such AI systems) and deployers (the companies that use them to make decisions) to test these tools before they’re used, provide users clear notice and explanation, and give people the right to correct, opt out of, or appeal decisions made using these tools.
At a Senate Judiciary Committee hearing in July, Assemblymember Rebecca Bauer-Kahan, the bill’s author, told lawmakers that the legislation sets common-sense guardrails for AI systems in order to reduce bias in critical areas.
“The reason this is so critically important is the way these AI tools are built is that we put in data, historical data, and we use that to decide how the world works and then it outputs a decision. And as everybody sitting here knows, historical data is full of bias,” Bauer-Kahan, a Democrat, said during the meeting.
But Craig Albright, a senior vice president at BSA, said the bill uses vague language that could sweep in low-risk AI systems, creates conflicting enforcement by various government entities and misunderstands how AI systems are developed and used in practice.
“The bill is really misguided in a couple of ways and would have serious consequences,” Albright said.
Albright argued that the bill needs to clearly define terms such as “tools that are used to assist human decision making” and “quality and accessibility of important opportunity benefits,” both of which, he said, could be applied broadly. He also urged lawmakers to specify which AI tools would count as “systems that are intended to make decisions about that eligibility.”
“In sort of plain reading of the text as it stands, you could have a scenario where a doctor’s office is using scheduling software for appointments with its patients, and that could be considered affecting the accessibility of the health care business,” Albright said.
Albright also said the bill misunderstands what he calls the “AI value chain,” the stages involved in creating and deploying AI tools, with various companies each responsible for a different phase of the process. He argued that the bill expects each company along the chain to test the system for potential high-risk uses, which, he said, is “not feasible.”
BSA began lobbying legislators to address these concerns in February, after the bill was introduced. The group published an opposition letter in July, stating that “[e]ffective AI regulation should assign responsibility based on real-world roles and risks. Both developers and deployers must play a part — but AB 1018 gets it wrong.”
The bill’s next stop is the Senate Appropriations Committee, which will take it up when state legislators return to the Capitol from summer recess on August 18.