Artificial intelligence: 6 steps government agencies can take

Commentary: A fellow from the Harvard Ash Center for Democratic Governance and Innovation outlines key considerations as organizations survey a new generation of automation tech.

When citizens prefer interacting with cable television providers and credit card companies over dealing directly with government agencies, we know we have a problem.

Government tech failures are legion, from high-profile meltdowns such as the rollout of Healthcare.gov to interminable phone wait times for answers about basic government services. It seems that no matter how much the private sector innovates, digital government offerings continue to be left in the dust.

Artificial intelligence (AI), however, may be the way to bridge the gap between citizen expectations and government’s ability to meet them, while also improving citizen engagement and service delivery. As I outline in my new paper for the Harvard Ash Center, AI is already permeating the way governments deliver citizen services: reducing administrative burdens, helping resolve resource-allocation problems, and taking on significantly complex tasks.

Despite the clear opportunities, we shouldn’t think of AI as a silver bullet able to solve systemic problems in government in one fell swoop. Most government offices are still working toward basic modern operating standards, something the hype around many modern tools ignores. Nevertheless, there is benefit in preparing for the inevitable future and making technology investments that keep pace with how citizens prefer to engage with service providers.

Governments can start thinking about implementing AI with these six strategies:

Make AI a part of a goals-based, citizen-centric program

AI should not be implemented in government just because it is a new, exciting technology. Government officials should be equipped to solve problems impacting their work, and AI should be offered as one tool in a toolkit to solve a given problem. The question should not be, “How will we use AI to solve a problem?” but, “What problem are we trying to solve, why, and how will we solve it?” If AI is the right tool, it cannot be a single touch-point for citizens, but part of a holistic, inclusive citizen journey.

Get citizen input

Citizen input and support will be essential to any government AI implementation. Everyone from citizens to policymakers needs to be educated about the technology and its tradeoffs. With that grounding, citizens can suggest new ways to engage with AI and even help co-create ethics and privacy rules for the use of their data. When building and deploying AI platforms, feedback from both citizens and government employees is essential.

Build upon existing resources

Adding AI capabilities to government systems should not require building those systems from scratch. Though much of the early evolution in AI came from government research, governments can now take advantage of the advances businesses, developers, and research institutions are making. One place to start is integrating AI into existing platforms, like 311 and SeeClickFix, where data and engagement already exist.
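
To make this concrete, here is a minimal sketch of what integrating AI into an existing 311 workflow could look like: a small text classifier that routes incoming service requests to the right department. The requests, department labels, and categories below are invented for illustration, not drawn from any real system.

```python
# Hypothetical sketch: routing incoming 311 service requests to the
# right department with a small text classifier. All requests and
# department labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical requests an agency might already hold in its 311 system.
requests = [
    "pothole on elm street near the school",
    "streetlight out on 5th avenue",
    "overflowing trash bin in riverside park",
    "graffiti on the playground equipment in riverside park",
    "broken water main flooding the intersection",
    "missed garbage pickup on my block",
]
departments = [
    "public_works", "public_works", "sanitation",
    "parks", "public_works", "sanitation",
]

# TF-IDF features plus logistic regression: a small, auditable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(requests, departments)

# Route a new request, keeping per-department probabilities so a human
# dispatcher can review low-confidence assignments.
new_request = ["large pothole damaging cars on main street"]
print(model.predict(new_request))        # e.g. ['public_works']
print(model.predict_proba(new_request))  # confidence per department
```

A real deployment would train on the agency’s own request history and route low-confidence predictions to a human dispatcher rather than assigning them automatically.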

Be data-prepared, and tread carefully with privacy

Many agencies will not yet have the level of data management necessary for AI applications, and many may lack the volume of data needed to train and start using AI. There are established best practices for preparing data for AI, such as monitoring its shelf life. From the start, governments should be transparent about the data they collect and give citizens the choice to opt in when personal data will be used. Privacy concerns become more acute when citizens have not provided consent or when external datasets are mixed with government sources. Inaccurate data is also a concern: errors can cascade as the data travels between systems.
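
As a minimal sketch of the safeguards above, the snippet below admits a record into an AI pipeline only if the citizen has opted in and the data is within its shelf life. The field names and the 180-day threshold are assumptions for illustration, not prescribed policy.

```python
# Sketch of two data-preparedness checks: honoring opt-in consent before
# using personal data, and monitoring data "shelf life" so stale records
# do not feed an AI system. Fields and threshold are illustrative.
from dataclasses import dataclass
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=180)  # assumed shelf-life policy

@dataclass
class CitizenRecord:
    citizen_id: str
    consented: bool        # explicit opt-in for personal-data use
    last_updated: datetime

def usable_for_ai(record: CitizenRecord, now: datetime) -> bool:
    """A record enters the training/serving pipeline only if the
    citizen opted in and the data is within its shelf life."""
    return record.consented and (now - record.last_updated) <= MAX_AGE

records = [
    CitizenRecord("a1", consented=True,  last_updated=datetime(2024, 11, 1)),
    CitizenRecord("b2", consented=False, last_updated=datetime(2025, 1, 5)),
    CitizenRecord("c3", consented=True,  last_updated=datetime(2023, 2, 14)),
]
now = datetime(2025, 3, 1)
usable = [r for r in records if usable_for_ai(r, now)]
print([r.citizen_id for r in usable])  # only "a1" passes both checks
```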

Mitigate ethical risks and avoid AI decision making

AI is susceptible to bias introduced through how it is programmed or trained, or through data inputs that are already corrupted. A best practice for reducing bias is to involve multidisciplinary and diverse teams, as well as ethicists, in all AI efforts. Governments can also leverage the work of technologists who have come together to create shared ethical frameworks for AI, such as the Asilomar AI Principles and the Partnership on AI. Given the ethical issues surrounding AI and the continuing evolution of machine learning techniques, AI should not be tasked with making critical government decisions about citizens, and humans should retain oversight.
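
One concrete way to keep humans in the loop is to have a model rank or flag cases without ever issuing a final decision. The sketch below illustrates that pattern; the case IDs, scoring threshold, and review-queue structure are all hypothetical.

```python
# Sketch of a human-oversight pattern: the model prioritizes cases but
# never approves or denies anything. Threshold and queue are hypothetical.
from typing import NamedTuple

class Assessment(NamedTuple):
    case_id: str
    score: float           # model's estimated priority, 0..1
    recommendation: str    # advice for the human reviewer, not a decision

REVIEW_QUEUE: list[Assessment] = []

def triage(case_id: str, score: float) -> Assessment:
    """Turn a model score into a recommendation for a human reviewer."""
    recommendation = "expedite review" if score >= 0.8 else "standard review"
    assessment = Assessment(case_id, score, recommendation)
    REVIEW_QUEUE.append(assessment)  # every case still reaches a person
    return assessment

for case_id, score in [("case-101", 0.91), ("case-102", 0.35)]:
    print(triage(case_id, score))
```

The key design choice is that the queue preserves every case for human review; the model only affects ordering and urgency, never the outcome.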

Augment employees, do not replace them

Estimates of the threat AI poses to jobs over the next two decades vary widely; the 2016 White House report on automation and the economy places the share of jobs at risk somewhere between 9 and 47 percent. In some cases, AI may instead create new employment, directly and indirectly, in AI development and supervision. While job loss is a legitimate concern for civil servants and blue- and white-collar workers alike, early research has found that AI works best in collaboration with humans. Any effort to incorporate AI into government should be approached as a way to augment human work, not to cut head count. Governments should also update fair labor practices in preparation for workplaces where AI systems are in place.

Governments should start small and test before scaling, and if a team lacks adequate data or resources, it should put strategies in place to build AI capacity for the long term. With these steps, governments can approach the use of AI in citizen services with a focus on building trust, learning from the past, and improving citizen engagement through citizen-centric goals and solutions.
