Speaking before a panel of governors on the topic of artificial intelligence on Sunday, IBM CEO Ginni Rometty emphasized the shared responsibility that state leaders hold in facilitating the technology.
“Those of us who make these technologies, and you as a governing body need to think about this — they do have to be ushered safely into this world,” Rometty said.
Rometty’s encouragement came at the National Governors Association Winter Meeting in Washington, D.C. Sharing the stage with the NGA chairman, Nevada Republican Gov. Brian Sandoval, Rometty shared her tenets for responsible stewardship of what she dubbed “the most defining technology of this era.”
Rometty, who was an AI specialist early on in her IBM career, began her lecture by dispelling any sensationalism associated with the technology.
“I think it is completely wrong, some of this [AI hype],” Rometty said.
She referenced the conflation of different types of AI as a basis for the misguided fears of the public. There’s singularity, where AI “turns on man”; general, which is capable of “doing our brain’s work”; broad, which applies to business functions; and consumer, which comprises the simple tasks performed by digital assistants like Cortana, Siri and Alexa.
While the current market enthusiasm for autonomous vehicles and digital assistants may lure a user into fearing that Skynet — the fictional self-aware artificial intelligence in “The Terminator” movies — is just a few lines of code away, Rometty told the listening governors not to fret.
“This idea of mimicking the human brain — I give it maybe 2050, and maybe singularity never. We got a lot of time between here and there,” Rometty said.
Until then, though, she told the governors that careful management of AI deployment and scale will be critical to supporting a dynamic workforce in virtually every industry.
“One hundred percent of jobs will change. One hundred percent,” Rometty said. “Whether you’re a doctor, engineer, school teacher or CEO, AI is going to change the way you work.”
Rometty laid out three areas for governors to focus on to maintain a responsible AI environment: purpose, transparency and data principles.
“You need to be very clear about what the purpose of these technologies are,” she said. “We believe these technologies are to augment what man does. It’s man plus machine.”
Understanding who owns AI is an equally important aspect, she said — even with the intellectual property gleaned from AI, it’s necessary to know the inputs as well. Rometty gave the example of machine learning in cancer research. Patients and consumers, she said, deserve the right to know whether the data used by a cancer research AI came from top hospitals or elsewhere before letting it replace a doctor whose credentials are publicly visible.
The right to know the origin and the owner of input data, Rometty said, is one of the largest issues facing artificial intelligence.
“You can’t end up in a world of haves and have-nots, where all the data goes to a few companies,” she said. “Being clear about those data principles — that’s responsible stewardship of these technologies.”