
Nevada unveils policy for consistent statewide data classification

A new policy created by the Nevada Governor’s Technology Office strives to replace "bespoke" and potentially risky interagency data-sharing agreements.

Nevada’s technology department on Wednesday announced a new policy aimed at classifying the state’s data uniformly, a once-arcane practice gaining prominence in an age of AI and cyberattacks.

Officials said the new “proactive” policy replaces inconsistent practices used across state agencies to categorize the sensitivity of various data, ranging from innocuous meeting notes to cybersecurity defense plans. Officials hope, according to a press release, that the standard will “eliminate the need for separate, bespoke data-sharing agreements” and encourage more cross-agency work. But the chief motivation is keeping data secure: According to a video published by the Governor’s Technology Office, the absence of proper data classification could risk residents’ personal information being “handled with the same low level of security of, say, a press release.”

A spokesperson for the technology department said the new policy was developed over the course of a year by a group led by the state’s chief data officer, Jason Benshoof. It contains four tiers of data sensitivity, plus an option for agencies to add subtiers: Information meant to be released, like meeting agendas, is “public”; internal communications and draft documents are “sensitive”; Social Security numbers and financial information are “confidential”; and material like encryption keys or criminal history records is “restricted.” (According to the press release, the new policy does not affect what are considered public records; the “classification tiers are for internal safeguarding and handling.”)
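The tier scheme described above can be sketched in code. This is an illustrative model only, not the state's actual implementation: the tier names follow the policy, but the enum, the subtier convention and the `label` helper are assumptions made for the example.

```python
from enum import IntEnum

class DataTier(IntEnum):
    """Hypothetical model of Nevada's four-tier scheme (illustrative)."""
    PUBLIC = 0        # e.g., meeting agendas
    SENSITIVE = 1     # e.g., internal communications, draft documents
    CONFIDENTIAL = 2  # e.g., Social Security numbers, financial records
    RESTRICTED = 3    # e.g., encryption keys, criminal history records

def label(tier: DataTier, subtier: int = 0) -> tuple[int, int]:
    """One way an agency-defined subtier could be attached: a
    (tier, subtier) pair that still sorts by overall sensitivity."""
    return (int(tier), subtier)
```

Because `IntEnum` values compare numerically, any restricted item outranks any confidential one regardless of subtier, which is the property a uniform statewide ordering needs.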

Nevada suffered a ransomware attack last May after a state employee unknowingly downloaded malware from a spoofed website. Michael Hanna-Butros Meyering, the technology bureau’s communications chief, said that attack wasn’t helpful, “by any means, but it sure clarified the urgency” of the data classification project that Benshoof had begun leading several months earlier. Meyering recalled Benshoof saying that the attack may have slowed the project. And Timothy Galluzi, the state’s chief information officer, saw his time divided as he faced months of scrutiny from the public and state lawmakers after the attack was discovered last August.


Meyering said that if agencies had been following a more uniform method of classifying their data, “a lot of these accidents could have been fortified a little bit. The one lesson we got out of that [cyberattack] is if you don’t know what’s sensitive, you can’t really protect it consistently. And if you can’t set consistent vendor expectations either, that’s kind of a problem.”

Data classification is also a growing concern for agencies beyond Nevada’s borders, many of which are outfitting their staffs with software powered by data-hungry large language models. Martha Wewer, North Carolina’s chief privacy officer and an avowed fan of data classification — “no one is as passionate about data classification as I am” — recently told this publication that her state’s effort spans AI, data, cybersecurity and privacy roles.

Chief data officers have for decades been alternately scolding and encouraging other agencies to inventory and classify their data, and though government leaders have in recent years become more interested in cleaning up their sprawling, messy troves, usually in hopes of tapping into AI, the number of states doing it properly may be limited. Itai Schwartz, chief technology officer at the data classification firm Mind, said a state government that conforms to a proper data classification scheme is “not very common at all. And I don’t blame them. It’s a really hard process to implement, even for a young innovative company.”

Schwartz said the aim of data classification is to “put guardrails and protect data, regardless of who’s getting access to it.” The traditional cybersecurity model was to prevent intrusion, but included few internal safeguards once bad guys found a way inside. Present cybersecurity frameworks include more fallbacks and ways of limiting risk, such as only granting users access to the data and applications they need for their jobs. But to make such schemes work, sensitive data first needs to be identified and catalogued. “Really,” Schwartz said, “the last line of defense is how much the data is exposed within the organization.”
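The least-privilege model Schwartz describes reduces, at its core, to a simple comparison: access is granted only when a user's clearance meets or exceeds the data's classification tier. The sketch below is hypothetical; the tier names mirror Nevada's policy, but the `can_access` function and its clearance model are assumptions for illustration.

```python
# Illustrative least-privilege check keyed to classification tiers.
# A real system would also scope access by role and need-to-know.
TIERS = {"public": 0, "sensitive": 1, "confidential": 2, "restricted": 3}

def can_access(user_clearance: str, data_tier: str) -> bool:
    """Grant access only if clearance is at least the data's tier."""
    return TIERS[user_clearance] >= TIERS[data_tier]
```

The point Schwartz makes is that this check is only possible once `data_tier` is known, which is why cataloguing sensitive data has to come first.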

In Nevada, the data classification framework is the beginning of a cybersecurity revamp. Testifying before the legislature’s interim finance committee last October, Galluzi, the state CIO, outlined and received approval for two projects designed to better secure the state’s data: an expansion of the state’s technical threat analysis program and a statewide security operations center. And according to the technology office’s press materials, Nevada’s data classification work “serves as the foundation for forthcoming technical safeguards, including multi-factor authentication, enhanced logging, and encryption standards aligned with federal requirements.”
