When the Texas Workforce Commission was inundated with unemployment claims in March 2020, it turned to artificial intelligence.
The chatbot, Larry, affectionately named after the agency's former chief, Larry Temple, who had died a year earlier, was designed to help Texans sign up for unemployment benefits.
Like a next-generation FAQ page, Larry answers users' questions about unemployment. Using AI language processing, the bot determines which pre-written answers, drafted by human staff, best fit the unique wording of a user's question. The chatbot fielded more than 21 million questions before being replaced last March by Larry 2.0.
Larry is one example of how Texas state agencies are using artificial intelligence. Technology adoption in state government has accelerated in recent years, and that acceleration is raising concerns about unintended consequences, including bias, loss of privacy, and loss of control over the technology. This year, state lawmakers pledged to take a more active role in monitoring how the state uses AI.
“This is going to completely change the way government works,” said state Rep. Giovanni Capriglione, R-Southlake, who authored a bill aimed at helping the state make better use of AI technology.
In June, Gov. Greg Abbott signed that bill, House Bill 2060, into law, creating an AI advisory council to study and inventory how state agencies currently use AI and to assess whether the state needs an AI code of ethics. The council's role is to monitor what state agencies are doing with AI, not to set final policy.
Artificial intelligence refers to a type of technology that emulates and builds on human reasoning through computer systems. Chatbots use language processing to understand users' questions and match them with predetermined answers. New tools such as ChatGPT are classified as generative AI because the technology generates unique answers based on user prompts. AI can also analyze large datasets and use that information to automate tasks traditionally performed by humans. Automated decision-making is at the heart of HB 2060.
More than one-third of Texas government agencies already use some form of artificial intelligence, according to a 2022 report from the Texas Department of Information Resources. The Texas Workforce Commission also has an AI tool that recommends customized job listings to job seekers. Various agencies use AI to translate languages into English and in call-center tools such as speech-to-text. AI is also being used to bolster cybersecurity and fraud detection.
Automation is also being applied to time-consuming tasks to “improve workload and efficiency,” according to a statement from the Department of Information Resources; tracking budget spending and invoices is one example. In 2020, DIR launched an AI Center for Excellence that aims to help state agencies adopt AI technology. Because participation in the center is voluntary and each agency typically has its own technology team, the full scope of automation and AI adoption across state agencies is not closely tracked.
Currently, Texas agencies must ensure that the technology they use meets safety requirements set by state law, but there are no specific requirements to disclose what kind of technology they use or how they use it. HB 2060 will require agencies to provide that information to the AI advisory council by July 2024.
“We want agencies to be creative,” Capriglione said. Although he hopes to find more use cases for AI, he also acknowledges that there are concerns that poor data quality will prevent systems from working as intended. “We need to set some rules.”
As AI adoption increases, so do concerns about the ethics and functionality of the technology. The AI advisory council is a first step toward overseeing how the technology is deployed. The seven-member council includes state lawmakers from the House and Senate, the executive director of the Department of Information Resources, and four members appointed by the governor with expertise in AI, ethics, law enforcement, and constitutional law.
Samantha Shorey is an assistant professor at the University of Texas at Austin who studies the social implications of artificial intelligence, particularly the kinds designed to expand automation. She worries that as the technology takes on more decisions, it will reproduce and even intensify social inequality. “But is it moving us toward the end goal that we want?” she asked.
Advocates for greater use of AI see automation as a way to make government operations more efficient. Adopting the latest technology could help social services speed up case management, provide instant summaries of lengthy policy analyses, and streamline the hiring and training of new state employees.
However, Shorey is cautious about the potential for artificial intelligence to be brought into decision-making processes, such as determining who is eligible for social welfare benefits and the length of parole. Earlier this year, the U.S. Department of Justice opened an investigation into allegations that a Pennsylvania county's AI model aimed at improving child welfare discriminated against parents with disabilities and removed their children as a result.
Suresh Venkatasubramanian, director of the Center for Technology Responsibility at Brown University, said AI systems “tend to absorb any bias that exists in historical data.” Artificial intelligence trained on data containing all kinds of gender, religious, racial, and other biases is at risk of learning to discriminate.
Beyond the problem of flawed data reproducing social inequality, the technology's reliance on collecting vast amounts of data raises privacy concerns. What AI does with that data over time also fuels fears that humans could lose control of the technology.
“As AI becomes increasingly complex, it is very difficult to understand how these systems are working and why they are making decisions the way they are,” Venkatasubramanian said.
That concern is echoed by Jason Green-Lowe, executive director of the Center for AI Policy, a Washington, D.C., group that advocates for stronger AI safety regulation. With the technology accelerating and regulatory oversight lacking, Green-Lowe said, “we may soon find ourselves in a world where AI is primarily at the helm” and society begins pivoting toward serving AI interests rather than human interests.
But some technology experts believe humans will continue to remain in the driver's seat of expanding AI adoption. Alex Dimakis, a professor of electrical engineering and computer science at the University of Texas at Austin, served on the U.S. Chamber of Commerce's Artificial Intelligence Committee.
In Dimakis' view, AI systems need to be transparent and undergo independent evaluations, known as red teaming. Red teaming is a process in which the underlying data and technology decision-making processes are reviewed by multiple experts to determine whether more robust safeguards are needed.
“You can't hide behind AI,” Dimakis said. Beyond transparency and evaluation, he said, the state should enforce existing laws against the makers of AI whose technology produces results that violate the law, applying current law directly rather than letting the technology muddy responsibility.
The AI advisory council is expected to submit its findings and recommendations to the Legislature by December 2024. Meanwhile, interest in deploying AI is growing at all levels of government. DIR operates an artificial intelligence user group made up of representatives from state agencies, higher education institutions, and local governments interested in implementing AI.
According to a DIR spokesperson, interest in the user group is growing steadily; it now has more than 300 members representing more than 85 organizations.
Disclosure: The University of Texas at Austin and the U.S. Chamber of Commerce have been financial supporters of The Texas Tribune, a nonprofit, nonpartisan news organization funded in part by donations from members, foundations, and corporate sponsors. Financial supporters play no role in the Tribune's journalism. See a complete list of them here.