The US city of Boston has been testing ChatGPT in its administration for several months. Officials have reached a positive initial verdict and are warning against over-regulation.
Generative AI technologies such as OpenAI’s ChatGPT could reform outdated bureaucratic procedures and improve government processes and citizen participation, argue Santiago Garces, Chief Information Officer of the City of Boston, and Stephen Goldsmith, Professor of Urban Development Policy at Harvard Kennedy School.
In a recent article, Garces and Goldsmith, who say they have already driven technology innovation in five cities and worked with chief data officers from 20 other municipalities, argue that the Biden administration’s recent executive order on AI must be implemented dynamically and flexibly so as not to hinder the potential benefits of generative AI.
The technology could fundamentally change the way governments work, cutting red tape in particular. It could also help staff analyze problems and risks and respond quickly.
US city of Boston has been testing ChatGPT since May
In May, Garces introduced guidelines for the use of generative AI in Boston to ensure responsible use of the technology in city government. The guidelines emphasize transparency, fairness, and accountability in the use of AI-generated content to prevent bias and misinformation from influencing decision-making. They also include recommendations for identifying AI-generated content, assessing its credibility, and aligning its use with the city’s values and goals.
One application that is already being tested, according to Garces and Goldsmith, is a simplified interpretation of so-called 311 data. 311 is a non-emergency citizen hotline that complements the 911 emergency number in some cities, allowing residents to report problems or ask questions. OpenAI’s technology enabled time-series analysis per case and comparative analysis per district, they write.
“This meant that city officials spent less time navigating the mechanics of computing an analysis, and had more time to dive into the patterns of discrepancy in service,” the two experts said. “With lower barriers to analyze data, our city officials can formulate more hypotheses and challenge assumptions, resulting in better decisions.” They say the experiment shows that with little training, even those without a STEM background can use generative AI to support their work.
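The kind of analysis Garces and Goldsmith describe, aggregating 311 requests over time and across districts, can be sketched in a few lines of plain Python. The records, categories, and district names below are hypothetical placeholders, not real Boston 311 data:

```python
from collections import Counter
from datetime import date

# Hypothetical 311 service requests: (date opened, district, category).
# Real 311 datasets carry many more fields; these records are made up.
requests = [
    (date(2023, 5, 3), "Dorchester", "pothole"),
    (date(2023, 5, 17), "Back Bay", "streetlight"),
    (date(2023, 6, 2), "Dorchester", "pothole"),
    (date(2023, 6, 9), "Roxbury", "graffiti"),
    (date(2023, 6, 21), "Dorchester", "streetlight"),
]

# Time-series view: request volume per month.
per_month = Counter(d.strftime("%Y-%m") for d, _, _ in requests)

# Comparative view: request volume per district.
per_district = Counter(district for _, district, _ in requests)

print(dict(per_month))     # {'2023-05': 2, '2023-06': 3}
print(dict(per_district))  # {'Dorchester': 3, 'Back Bay': 1, 'Roxbury': 1}
```

The point the authors make is that a generative AI assistant can produce this sort of aggregation on request, so staff without a programming background spend their time interpreting the patterns rather than writing the code.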
Generative AI could produce changes “to bring residents back to the center of local decision-making”
The authors therefore propose that frontline workers be given more authority to solve problems, identify risks, and review data. “This is not inconsistent with accountability; rather, supervisors can utilize these same generative AI tools, to identify patterns or outliers—say, where race is inappropriately playing a part in decision-making, or where program effectiveness drops off (and why).”
They also point to the potential for generative AI tools to enable citizen groups to hold the government to account in new ways.
However, such widespread use of generative AI would need to be accompanied by improved data analysis skills in the public workforce to understand whether a tool is taking the right steps and generating accurate information. They also call for the development of technology partnerships with the private sector to address issues of privacy, security, and algorithmic bias.
Garces and Goldsmith believe that the technology, if implemented widely, correctly, and fairly, can bring about the changes needed “to bring residents back to the center of local decision-making.”