Three ways state and local governments can get people excited about data projects
A host of organizations must get involved with a state or local government’s open data program before it can begin generating information rich enough to inform policy or improve operations. But officials from Austin and the Texas state government told a conference audience Monday that with persistence and open communication, it’s possible to convert skeptics into open-data advocates.
Texas launched its open data portal as a pilot project in 2014 without much support. But as of early 2019, it’s logged 288 million downloads of its datasets, Ed Kelly, the state’s data coordinator, said at Tyler Technologies’ Connect conference in Dallas. The state has partnered with the city of Austin and is now looking for new partnerships with more city and county governments as it continues searching for ways to apply data to government’s challenges, he said.
1. Lead by example
Kelly said that lifting the state’s data program out of obscurity started with the Department of Information Resources putting its own data on the portal before any other agency did, as a way to demonstrate its belief in the project.
“We like to eat our own donuts,” he said.
State lawmakers are now considering making Kelly’s position permanent and creating new official points of contact for each department to facilitate data operations.
A data-sharing agreement between the Texas Department of Criminal Justice and the Texas Workforce Commission is credited with saving the state about $90 million over four years by averting fraudulent unemployment insurance claims. Kelly said new projects involving juvenile justice and veterans’ suicide prevention will be demonstrated to state officials later this week.
Kelly estimated the 288 million dataset downloads from the statewide data portal equate to about $5.3 million in saved “opportunity costs.”
Data collaboration in Texas now means looking beyond the bounds of the state government, Kelly said. When Texas started its open data portal, DIR contacted the city of Austin, which he said has “been fantastic with sharing its resources” and luckily for the state, was “a couple years ahead” of where Texas was.
2. Charm the critics
Charles Purma, Austin’s IT manager, said the city first started looking into purchasing an open data portal about a decade ago, but was immediately faced with “a lot of pushback and resistance” from tech-savvy members of the public who didn’t like the traditional procurement approach it was taking.
“We found out very quickly we were on the same page on a lot of things,” Purma said. “It was just a different perspective and a different approach.”
Purma said his team took the feedback to Austin’s upper management, which led to a more transparent process and portal than the city would have had without help from the community. Better still, he said, it gave the city government a lasting alliance with its one-time techie critics.
“We’re pretty proud of that process that we turned them from demoters to promoters,” Purma said.
Through monthly meetups, hackathons, data-literacy training and other events, he said, the city’s relationship with its most active technologists enriches its knowledge base, spanning work with smaller grassroots groups like Open Austin and the Austin Tech Alliance as well as well-funded institutions like the Dell Medical School at the University of Texas.
While Austin is an early adopter of many technologies, including open data, developing a sophisticated program has come with challenges, said Jamila Siller, a business process consultant with the city. The biggest challenge, she said, was developing standards around the city’s data that would encourage use of the city’s open data portal and inter-department data-sharing.
3. Define ‘data quality’ and then build standards around it
Last month, Austin adopted its first new government-wide data standards in a decade, and Siller said she expects those standards to encourage new participation in the city’s data program. The lack of data standards had produced inconsistent data that often yielded poor analytical results, she said, adding that this bred mistrust of both the data in general and the city itself.
To remedy this, Siller said the city’s data team started by defining what “data quality” meant to each department, because departments could have “wildly” different needs. Siller also recommended conducting a data inventory to understand what data the city is collecting. Without strong data standards in Austin, she said, data analysts who didn’t document their workflows and then left the city could set back a department by years.
“One person would leave and the whole methodology would get lost,” Siller said.
Now with standards in place, the city is turning around its data program, she said. Where previously “customer data” could refer to a single person, an organization or a group of thousands of people, there’s now a common understanding throughout the government not only of what terms mean, but of what formats and schemas should be used. Instead of presiding over conflicting datasets across departments, the city now aspires to centrally managed records that everyone can trust as accurate, she said.
“The biggest thing for me is that single source of truth,” Siller said.