In today’s rapidly evolving technological landscape, artificial intelligence (AI) tools are leaving an indelible mark on many sectors, including government. Reactions from government agencies and leaders to these emerging AI tools have been mixed, ranging from outright prohibition to cautious exploration. At zCore, our practical experience with federal customers, extensive prototyping, and in-depth research lead us to believe that AI tools present a valuable opportunity for government agencies, and that it is time for agencies to recognize that potential, for several compelling reasons.

The consumer technology world is already integrating AI into its products, enhancing existing software and creating new experiences from the ground up. As a result, public expectations of high-quality experiences and services will increasingly be influenced by the advantages AI brings. Government agencies must be intentional in leveraging AI to improve their customer experience and avoid widening the gap between public expectations and what they can deliver.

AI sets itself apart from other emerging technologies by demonstrating its potential to bring value to government. While technologies like blockchain have received significant hype, they have yet to prove their worth in solving real government customer experience challenges. On the other hand, new AI tools, such as Large Language Models (LLMs), have already shown impressive alignment with common government problems. LLMs excel at refining and organizing large amounts of unstructured data, which is a challenge faced by nearly every government agency.

Recent advancements, particularly in LLM performance, have made AI a practical and exciting field for enhancing user experiences and software capabilities. Unlike emerging technologies such as blockchain, AI can now be integrated into existing systems and applications in a simpler, less intrusive way, thanks to the mature tools and resources now available.

To responsibly and effectively use AI tools, agencies must focus on solving real problems for real people. Whether it’s internal staff grappling with unstructured data or the public seeking plain-language answers about benefits eligibility, agencies should evaluate the potential of AI tools based on their ability to enhance customer experiences. This criterion should guide technology selection, application strategies, and partner collaborations.

While there are still unresolved policy and implementation questions surrounding AI tools in government, a pragmatic approach to applying them to customer experience problems presents a golden opportunity to improve public services. Exploring how AI tools can be thoughtfully applied to government services can yield valuable results even while those questions are being worked out.

Potential Use Cases of AI in Government

Enhanced search experience: LLMs have garnered attention for their ability to provide a more natural and familiar search experience than traditional web search. Semantic search, which interprets the user’s query in context, coupled with LLMs’ summarization and question-answering capabilities, can drastically simplify and shorten the process of finding and understanding important information. Government agencies can leverage AI-powered semantic search to improve the discoverability and accessibility of critical information, setting a high standard for others to follow.
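As a rough sketch of how those pieces could fit together, the example below ranks documents by embedding similarity and asks a model to answer from the top matches. The `embed()` and `ask_llm()` helpers are assumptions standing in for whatever embedding model and LLM client an agency adopts, not any specific product.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: plug in whichever embedding model an agency selects.
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    # Placeholder: plug in whichever LLM client an agency selects (self-hosted or vendor).
    raise NotImplementedError

def semantic_search(query: str, documents: list[str], top_k: int = 3) -> list[str]:
    """Rank documents by cosine similarity between query and document embeddings."""
    q = embed(query)
    scored = []
    for doc in documents:
        d = embed(doc)
        score = float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

def answer_question(query: str, documents: list[str]) -> str:
    """Summarize the most relevant passages into a direct answer."""
    context = "\n\n".join(semantic_search(query, documents))
    prompt = (
        "Answer the question using only the passages below, and say so if they "
        "do not contain the answer.\n\n"
        f"Passages:\n{context}\n\nQuestion: {query}"
    )
    return ask_llm(prompt)
```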

Insights from messy data: LLMs can offer agencies insights into unstructured data, transforming information in ways that are not possible with other tools. Unstructured data poses a challenge in almost every government agency, but LLMs can extract meaning from nearly any piece of text, increasing the value of large data systems. By carefully applying LLM tools to specific data challenges, agencies can gain valuable insights about their customers and system usage, and prioritize the changes that provide the most value to the public.
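As an illustration of that kind of extraction, the sketch below asks a model to pull a few structured fields out of a free-text record and return them as JSON. The field names and the `ask_llm()` helper are hypothetical, and the fallback branch reflects the fact that model output still needs validation.

```python
import json

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for the agency's chosen LLM client

def extract_fields(record_text: str) -> dict:
    """Pull a handful of structured fields out of a free-text record."""
    prompt = (
        "From the record below, return JSON with the keys "
        '"topic", "requested_action", and "deadline" (use null if a field is absent).\n\n'
        f"Record:\n{record_text}"
    )
    raw = ask_llm(prompt)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Models do not always return valid JSON; flag the record for human review.
        return {"error": "unparseable response", "raw": raw}
```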

Plain language translations: Despite advancements in plain language requirements, nuances of benefit eligibility, process documentation, and regulations remain trapped in complex legal language. LLMs are well-suited to translate the style and tone of such text, converting it into understandable, plain language. Combined with their summarization capabilities, LLMs can aid individuals in grasping how federal laws and programs apply to them. The interactive nature of LLMs allows for clarifying questions and updated responses, making government services and benefits more accessible and promoting efficient and inclusive public service delivery.
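A minimal sketch of a plain-language rewrite, again assuming a generic `ask_llm()` helper rather than any particular service:

```python
def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for the agency's chosen LLM client

def rewrite_plainly(source_text: str, reading_level: str = "8th grade") -> str:
    """Ask the model to restate complex language plainly without changing its meaning."""
    prompt = (
        f"Rewrite the following text in plain language at roughly a {reading_level} "
        "reading level. Keep every requirement and condition; do not add or remove details.\n\n"
        f"{source_text}"
    )
    return ask_llm(prompt)
```

As with any generated text, the rewritten version should still be reviewed by program staff before it reaches the public.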

Call center representative support: LLMs can serve as intelligent assistants for call center representatives, providing valuable information based on an agency’s policies and documentation. Representatives can use LLM tools to look up information for callers, tailoring responses to the caller’s specific needs. While ensuring accuracy remains a challenge, LLMs can boost the efficiency of call center representatives and improve the overall customer experience.
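One plausible shape for such an assistant is sketched below; the retrieval step and the `ask_llm()` helper are placeholders for an agency’s own document search and model of choice.

```python
def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder LLM client

def retrieve_policy_excerpts(question: str) -> list[str]:
    raise NotImplementedError  # placeholder search over the agency's policy documents

def draft_reply(caller_question: str, caller_context: str) -> str:
    """Draft a suggested answer for the representative, citing the excerpts used."""
    excerpts = retrieve_policy_excerpts(caller_question)
    numbered = "\n".join(f"[{i + 1}] {text}" for i, text in enumerate(excerpts))
    prompt = (
        "You are assisting a call center representative. Using only the policy excerpts "
        "below, draft an answer and cite excerpt numbers. If the excerpts do not cover "
        "the question, say so.\n\n"
        f"Caller context: {caller_context}\n"
        f"Question: {caller_question}\n\nExcerpts:\n{numbered}"
    )
    return ask_llm(prompt)
```

The representative, not the model, stays responsible for the final answer; asking for excerpt citations makes it easier to spot responses that the documentation does not actually support.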

Expanding beyond text: Just as text-based AI has improved, tools focused on audio and images have made significant strides. Combined with LLMs, audio tools can convert speech to text and vice versa, making interactions more accessible for a wider range of devices. Similarly, AI models focused on images and computer vision can generate text descriptions, aiding in digitizing paper records and advancing modernization goals.
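A rough sketch of how those pieces might chain together, with every helper below a stand-in for whatever speech, vision, and language models an agency adopts:

```python
def transcribe(audio_path: str) -> str:
    raise NotImplementedError  # placeholder speech-to-text model

def describe_image(image_path: str) -> str:
    raise NotImplementedError  # placeholder image-to-text (OCR or captioning) model

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder LLM client

def summarize_voicemail(audio_path: str) -> str:
    """Turn a recorded message into a short written summary for staff triage."""
    transcript = transcribe(audio_path)
    return ask_llm(f"Summarize this caller's request in two sentences:\n\n{transcript}")

def index_scanned_form(image_path: str) -> str:
    """Produce a searchable text description of a scanned paper record."""
    return describe_image(image_path)
```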

What AI is not ready to do

While AI tools have their benefits, it’s essential to understand their limitations. AI is not yet ready to make actual decisions on behalf of people. It is crucial to view AI as a tool that helps individuals, teams, and agencies make decisions, rather than as a set of fully automated agents. Human involvement and oversight should remain integral to the decision-making process.

The risks of AI in government

As with any new tool or technology, applying AI within complex government systems and regulations raises questions and risks. Safety, equity, accuracy, and control are the primary concerns. The risks to safety and equity stem from the distinction between providing information and making decisions: AI tools are not yet suitable for decision-making processes such as eligibility determinations or procurement awards. Addressing accuracy means applying AI tools to the right problems while continuously improving model performance. Agencies should also maintain control over the data they send to AI tools and explore self-hosted options to retain data governance.

The risk of inaction

While risks exist, the danger of government agencies waiting for every implementation detail to be resolved is real. As the landscape of smooth, easy-to-use customer experiences shifts, agencies risk widening the gap between the public’s experience with consumer services and their expectations from government. To bridge this gap, agencies should proactively explore how AI tools can be applied to improve efficiency and enhance customer experiences.

AI and Large Language Models present a tremendous opportunity for government agencies to transform their services. By strategically applying AI tools, agencies can improve search experiences, gain insights from messy data, provide plain language translations, support call center representatives, and expand beyond text to audio and images. However, it’s essential to recognize the limitations of AI and address associated risks. A pragmatic approach, coupled with a focus on real problem-solving for real people, can lead to enhanced public services and bridge the gap between government and consumer technology experiences.
