
The use of artificial intelligence (AI) in the private sector has skyrocketed. In the public sector, however, AI adoption has moved at a much slower pace, despite its power to help governments deliver on their missions and significantly improve the way they provide services to citizens.

What’s the best way forward? A recent episode of the Government Transformation Show podcast aims to answer that question. Host Sam Birchall sits down with Dataminr’s Alex Jaimes, Chief Scientist and SVP of Artificial Intelligence, to discuss how governments can operationalize AI and ways the fast-evolving tech can help transform a key area of the public sector. 

For the full interview, listen to the podcast below or read the transcript that follows, which has been edited for clarity and length. 

What’s the current state of AI in government, including its maturity and how it is implemented?

It’s a tough question because I think different governments have different approaches to AI; it’s apples and oranges. Some governments are a bit more advanced than others in terms of regulation and adoption, and in how they think about AI and how they’re using it. So there’s a pretty wide spectrum. To be frank, things in industry are moving very quickly right now, and even without that fast pace, it’s traditionally been hard for governments to keep up with technology.

It really depends on where you look, which government it is, and what that government is doing. But in general, I’ve seen a lot of progress in how AI is used across the board. There are also many more concerns about AI, in part because now it’s more prevalent. It’s everywhere and people are more aware of it, even though it’s been around for a long time. 

In the last 10 years or so, there’s been a pretty significant increase in the visibility of the work, along with more advanced techniques. This has made some of the applications more feasible, both for the private sector and government. And of course things have really exploded since November of 2022, in terms of hype and potential. But I think there’s a lot of work to do both in the private sector and in government.

How is AI being used to automate routine tasks and processes? What are the opportunities?

AI is based on learning patterns from data. Anywhere there are repetitive patterns, those are areas in which you would apply AI. When you think of it from that perspective, there are tons and tons of applications both in the private sector and government. And arguably, many processes in government are repetitive, just as they are in private industry.

There are essentially two branches of AI, and the techniques are pretty much the same or very similar. One is predictive AI and the other is generative AI. Predictive AI basically consists of techniques that make predictions. They don’t necessarily make predictions about the future; they predict labels or classifications for things.

Generative AI generates content. When we see some of the more recent AI advances, generating fancy images and interesting text and so on and so forth, that’s the generative part. The predictive part does things like classification tasks and labeling.
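
As a rough illustration of that distinction, here is a minimal sketch using the Hugging Face transformers library; the specific models and example inputs are illustrative choices, not anything referenced in the conversation.

```python
# Minimal sketch of predictive vs. generative AI (illustrative models only).
from transformers import pipeline

# Predictive: assign a label to an input (a classification task).
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The new permit portal is easy to use."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]

# Generative: produce new content from a prompt.
generator = pipeline("text-generation", model="gpt2")
print(generator("Public services can be improved by", max_new_tokens=20))
```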


An area where AI can help: collecting data, extracting useful information and obtaining observations useful for policy making.


I think most of the applications of AI in government will probably be on the detection side, in the predictive area. But there will also be quite a few applications on the generative AI side, more in terms of how they can be used in chatbots—for instance, in generating summaries and pointing people to information. 

For labeling, there is a huge amount of data that is public and that governments own. As this data increases—and it will continue to increase, in part because generative AI techniques are becoming more useful and prevalent—more and more data and information will be generated. That means governments face a bigger challenge in detecting what’s important and what’s not, summarizing what’s important and gathering statistics.

As you know, one of the things governments need to do continuously is collect statistics and understand what’s going on so they can make policies around what’s happening. A lot of it is based on data and that’s an area where AI can help: collecting data, extracting useful information and obtaining observations useful for policy making.  

There’s a lot of information within different agencies. Citizens need access to that information, but it’s often hard for them to get it. The same often holds true for government employees. And for them, the challenge is understanding what something means or where the information is. This makes search an important application. AI has been used in search before, but recent advances in chatbots can help government employees get information faster so that they can better answer citizens’ questions.

This is cutting-edge in the sense that some of these techniques require quite a bit of additional work to put the right safeguards and guardrails in place so that a chatbot doesn’t go off the rails and start talking about something that is irrelevant or giving false information. 

But if you’re looking for something around, say, healthcare, something related to regulation, a chatbot could potentially point you to the right document. Pointing to specific documents, instead of just giving an answer, can mitigate the risk of a chatbot providing something that is inaccurate. It definitely can speed up access to information and make it easier for people to engage and for government employees to collect that information.
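
To make that concrete, here is a minimal sketch of the “point to the document” idea: a simple retrieval step that returns the most relevant source rather than a generated answer. The document titles, texts and helper function are hypothetical, and a production system would use far more robust retrieval.

```python
# Illustrative sketch: answer a query by pointing to the most relevant
# document instead of generating free-form text. Titles and texts are
# made up for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = {
    "Healthcare regulation overview": "Rules covering healthcare providers and insurers ...",
    "Passport application guide": "How to apply for or renew a passport ...",
    "Small business tax credits": "Tax relief available to small businesses ...",
}

titles = list(documents.keys())
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents.values())

def point_to_document(query: str) -> str:
    """Return a pointer to the best-matching document, not a generated answer."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    best = scores.argmax()
    return f"See: {titles[best]} (relevance {scores[best]:.2f})"

print(point_to_document("Where do I find the healthcare regulations?"))
```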

What are some of the biggest challenges and pain points for government departments in operationalizing AI?

Education: Understanding AI

There are multiple challenges when it comes to making AI a reality in a government or a government agency. One of them is education, and making sure that stakeholders understand what AI is, what it does, when it succeeds, when it fails, what the risks are—and what it takes to actually build AI, deploy it successfully and monitor it so that it does what it’s supposed to do. 

The need for education can go from the president of the country all the way down to the engineers building it. I was fortunate enough to be part of the expert mission on AI for the Colombian government. I was told that the preceding president, President Duque, forced his entire cabinet to take an AI course. They understood the basic concept. It didn’t mean they needed to code or anything of that sort, but understanding what’s doable and what’s not is important. 

Data to build AI systems

The second challenge is data. Data, as I said before, often sits in silos, so you need a data strategy. You need a cohesive view of how the data may be used, with all the protections to make sure privacy is preserved, et cetera. Getting access to the data is really critical for building AI systems. But oftentimes governments and companies are not what you would call digital natives. They’re not good at making their data available to the scientists or engineers who are building their AI systems.

Skills to build AI

The skills needed to build AI: This is another potential limitation. Sometimes governments lag a little bit behind in hiring the right talent, but this is key. They need to hire people with expertise, those with some experience—and ideally talent that has significant experience in building AI teams and ecosystems. 

Because, unlike traditional software, when you’re building AI, you do things differently. There’s a lot more experimentation. You have to monitor the models that you deploy and test them beforehand. With traditional software, you know exactly what you’re going to do when you build it. You have a specific timeframe, allocate resources and deploy it. Then you’re done. I mean, you’re not really done, but you’re kind of done.

With AI, the cycle is quite different. There’s more experimentation so it requires a different set of skills and a mindset open to experiments. When governments are running these experiments, they have to be as careful or more careful than corporations in making sure that the AI does what it’s supposed to.

Organizational structure

There is the challenge of organizational structure. How are things set up? Who owns the data? As I said, data is often in silos. Maybe one agency has data that’s useful, but doesn’t share it with another agency, even if the data is public. Sometimes organizational changes within government can have a big impact on how that’s done.

Computing infrastructure

One thing that’s happened in the private sector over the last several years is the move to the cloud, where you can efficiently run processes on potentially thousands of computers right from a laptop. You’re essentially renting compute. That makes it very easy to scale when you need a lot of computing power: you use it immediately and pay only for what you use, and when you don’t need it, you scale back down and pay less.

That’s a very powerful paradigm because it allows you to scale up when demand is high and scale down when it’s low. Traditionally, governments have shied away from doing this because of concerns around data privacy. But adopting a cloud-based ecosystem definitely makes things faster and easier, particularly in the AI space. 

We’ve seen quite a bit of progress in that sense, in governments moving towards cloud-based solutions. When they do that, they often end up collaborating more with private industry companies that offer services through APIs (application programming interfaces) or other means and can be accessed through the same cloud providers.

What precautions can be taken to ensure AI applications respect individual privacy and don’t perpetuate biases or discrimination?

Regulation helps make sure that there’s consistency across the board. There are many different use cases for AI and how it is applied, and it really does depend a lot on the application. There’s already regulation around AI. In the United States, for instance, there are regulations regarding the use of AI in financial services and healthcare. That’s because traditionally governments have found that there are some potential risks in deploying certain types of algorithms in those industries. 

In general, I would say that most technologists and most companies really do try hard to ensure that the work they do is deployed in a responsible way. We at Dataminr certainly do. You have to consider several things, like what is the actual application? How is it going to be used? There are many ways of overseeing that. One is the legal framework under which your users can use the technology that you’re providing. Another way is to put limitations on your product to make sure it’s not used inappropriately. 

The risks vary depending on the application. One risk is that applications can reinforce existing biases found in training data. As I mentioned before, these methods are based on training and lots of data. So if the AI applications are fed biased data, the models will make biased decisions. 
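
One simple, concrete check that follows from this is to look at how outcomes in the training data differ across groups before any model is trained on it. This is a minimal sketch; the column names and data are purely hypothetical.

```python
# Pre-training sanity check: compare outcome rates across a hypothetical
# group attribute. A large gap here is likely to be reproduced by any
# model trained on this data.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group; a sharp difference warrants review before training.
print(df.groupby("group")["approved"].mean())
```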


One mistake people often make with AI is they go in one of two extremes. They either think it’s magic and can do anything, or that it’s too complicated and that the organization isn’t yet ready for it.


You have to pay attention to the data that you’re using during training, and the task that the model is going to perform. There’s a big difference between a model making a decision on something versus a model detecting something. It depends on the kind of model and what exactly it’s doing. 

With more recent advances in AI, another risk is hallucinations, which I’m sure many people have seen or heard about. Because these systems are based on very large amounts of training data—and they’re making predictions based on that data—they are like statistical machines that are predicting, in the simplest form, just the next word of a sentence.

That concept can be extended to images, into more complex scenarios where the next word is actually the next sentence in a conversation, and so on and so forth. The way they’re built, they try to complete the sentence as well as possible. Statistically, what that might mean is that they make things up that are not necessarily true. And that is one of the biggest risks for some AI applications.
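
As a toy illustration of “predicting the next word,” here is a deliberately simplistic sketch; real large language models learn vastly richer statistics over enormous corpora, but the basic mechanism of statistical continuation is the same.

```python
# Toy next-word predictor built from word-pair counts in a tiny corpus.
from collections import Counter, defaultdict

training_text = (
    "the agency issues permits . the agency issues licenses . "
    "the agency reviews applications ."
).split()

next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    # Return the statistically most frequent continuation seen in training.
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("agency"))  # -> "issues", the most common continuation
```

This is also where hallucinations come from: a continuation can be statistically plausible without being true.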

Let’s look at the use case of having a chatbot provide customer support for a government agency. A user can chat with the bot and ask for information. There is a risk that the chatbot will make up information and say, “Yeah, you can do this,” or, “You can’t do that,” or, “This is how you apply for this particular government benefit,” and it turns out to be completely wrong. You wouldn’t want that. One way to mitigate that is to make sure that when you deploy such a model, you feed it the right training data.

So for the chatbot example, you may want to make sure that it’s using only or mostly the data that it’s going to need to answer the questions that it should be answering. That means, if you ask the chatbot questions about good recipes, it wouldn’t answer you. It would say, “This is outside of what I do, but I’m happy to help you get access to this particular government benefit.” 
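
In the simplest terms, that kind of scoping guardrail can be sketched like this. It is a hypothetical example only; real systems typically combine system prompts, intent classifiers and retrieval restricted to vetted sources.

```python
# Illustrative guardrail: only answer questions inside the chatbot's
# intended scope; otherwise redirect. Keywords and replies are made up.
IN_SCOPE_KEYWORDS = {"benefit", "benefits", "application", "eligibility", "deadline"}

def answer_from_approved_sources(question: str) -> str:
    # Placeholder: a real system would consult vetted documents only.
    return "Here is what the benefit guidelines say: ..."

def scoped_reply(question: str) -> str:
    words = set(question.lower().replace("?", "").split())
    if words & IN_SCOPE_KEYWORDS:
        return answer_from_approved_sources(question)
    return ("This is outside of what I do, but I'm happy to help you "
            "get access to this particular government benefit.")

print(scoped_reply("What is a good recipe for lasagna?"))          # redirected
print(scoped_reply("What is the application deadline for this benefit?"))  # answered
```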


People will need to get better at understanding the limitations of these AI systems, and to not take everything at face value when they’re interacting with them.


There are technical ways of doing this. For example, you can limit the kinds of answers that the chatbot gives so that instead of it providing a long-form textual answer, it can maybe give a short summary and point you to the part of the document or to the right place. I think over time these guardrails will get better. In some cases, it’s still hard to build them and to do them effectively, but they will get better. I also think people will need to get better at understanding the limitations of these AI systems, and to not take everything at face value when they’re interacting with them.

There’s a lot of work in the technical community on building techniques to mitigate these risks. Ultimately, it is very application-dependent, and people tend to lump it into “AI is dangerous” or “AI is safe.” But it’s a spectrum that depends on exactly where it’s being deployed, what it’s doing, what the training data is and what the models are. When you look at that whole spectrum, there are many places where you can have checkpoints.

One of the challenges with AI is that it’s not always possible to predict with 100% certainty what the output of the models is going to be; they’re non-deterministic. The models take an input and generate an output, whether a label or generated content, and they’re often working with new data not seen during training. Because of that, the models require additional safeguards. One is making sure that they do what they’re supposed to once they’re deployed.
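
One common form that post-deployment safeguard takes, sketched here with a hypothetical threshold and labels, is to act only on confident predictions and route everything else to human review.

```python
# Sketch of a deployment safeguard: accept confident predictions, log and
# defer low-confidence ones to a person. Threshold and labels are illustrative.
import logging

logging.basicConfig(level=logging.INFO)
CONFIDENCE_THRESHOLD = 0.85  # illustrative value, tuned per application

def handle_prediction(label: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    # Low confidence often signals input unlike the training data:
    # record it and defer to a human instead of acting automatically.
    logging.info("Low-confidence prediction (%.2f) routed to review", confidence)
    return "NEEDS_HUMAN_REVIEW"

print(handle_prediction("benefit_claim", 0.93))  # accepted
print(handle_prediction("benefit_claim", 0.41))  # deferred to human review
```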

How can the public sector successfully deploy AI?

It’s not too different from companies that are not AI-first. Typically, when companies move to using AI, they seek input from vendors or integrate services offered by AI companies into their workflows. Some focus on the “low-hanging fruit,” which very often means cost-saving tasks: the ones that are more mundane, easier to automate and low risk.

Depending on the application, there are different levels of risk. Within each application, you can determine where the risks are low and where they’re high. For instance, sometimes the instructions for applying for a passport are a bit confusing. A chatbot could help people figure out what they need; maybe that’s not too high risk. But it has to be done properly so it doesn’t exclude people from certain populations or education levels. In some cases, it might actually help provide better access. 

Part of the trick is looking at the spectrum of possibilities where you have repetitive patterns in which you can make predictions and where those predictions are useful. Then identify within that domain which applications are low risk versus medium and high risk, and then identify where the biggest gains will be made. It’s not so different from how you would make an investment in a technology project, except that with AI the outcomes are often less clear.

As more people and more companies and more governments adopt these technologies, there will be more sharing of information: “Hey, they did it this way and it worked, so we can do it,” or, “They did it this way and it didn’t work, so let’s not do that.” That sharing and knowledge gained is going to be pretty important. This is already happening quite significantly in the technical community, with a lot of open-source work on how to build the models, how to make them safer and better, how they behave, et cetera.

Do you have any examples of best practices when using AI?

I think the ethical issues and the responsible AI principles are ones that should be taken into account from day one, and they should be considered across the board. That includes the people building the models, which is important because when you’re building the models, you’re actually looking at data. You’re making a lot of decisions that can have an impact. 

Secondly, make sure you build processes around access to data and tasks, and that you measure things in terms of what matters. You need to use the right metrics to answer the right questions and apply them in the right places. One mistake people often make with AI is they go in one of two extremes. They either think it’s magic and can do anything, or that it’s too complicated and that the organization isn’t yet ready for it. 

It’s usually not that binary. There’s a spectrum with many things along it. Having a better understanding of where AI can be applied, what can be useful, where the low-hanging fruit is, where the risks are and where it can be beneficial is critical.

Start with smaller projects and pay a lot of attention to the data, the metrics, the performance of the models, the ethics—and how the AI is going to be deployed responsibly, the impact of that deployment, how it will be used and how it’s trained.

Will using AI to communicate with people lead to a loss of empathy?

It could be the opposite. There have been studies that have shown that people tend to trust AI more than they trust humans. There is a study, and I’m forgetting exactly where this was done, where they had a robot help people evacuate from a fire. People trusted the robot, even though the robot in one case was taking them into the fire.

There have been other studies showing that in chatbot interactions, people start trusting these chatbots. Some studies have also been done with toy robots: if users are instructed to switch them off and the robot expresses emotions, like “Please don’t switch me off,” humans are less likely to switch them off, even though they know the robots don’t have any emotions.

That’s actually a risk: people starting to believe chatbots more than they do humans. But if used appropriately, I think they can actually increase empathy, especially in government services where the government is really trying to reach out to people and do things that will benefit them. It’s not clear yet how that’s going to go, but I think there’s potential for a lot of good.

Are there areas where AI’s potential remains untapped in the public sector?

There are so many areas where it remains untapped and not just in the public sector, but in the private sector as well. Ideally, many government decisions would be informed by data and statistics. AI has shown tremendous promise in gathering data that can be used to make decisions, for example, population data. That’s one area that I think is interesting and important. 

Take censuses, for instance. They are expensive to conduct and are done only every 10, 15 or 20 years, and it’s very hard to get accurate data. Techniques using AI have been found to be as effective in collecting some of that information. In healthcare and public health, there are many, many opportunities. And again, it’s based on population-level data, not individual data. There are a lot of opportunities for statistics-based policymaking.

Most governments really want to be effective and efficient in the services they provide, but they frequently struggle, which is why they often subcontract companies to handle services such as visa applications or driver’s licenses. There are a lot of opportunities to make processes more efficient and to make citizen access more efficient, faster and easier to understand.

One of the challenges with government is that the data is often very confusing. When people need any kind of service, there are a lot of sources, and they often don’t know which agency they need to go to—and when they do find it, the information is written in language that’s hard to understand, even for the most educated. So I think chatbot technology, which has advanced pretty significantly in the last couple of years, could be very useful in helping citizens better understand that information and get what they need more quickly. If it’s done faster and better, governments can provide better access.

Learn more

AI for Good

See how Dataminr is helping public sector organizations use AI to create tailored solutions that drive transformational impact for people and the planet.

First Alert

Learn about First Alert, Dataminr’s product for the public sector, and how it helps organizations serve their communities during crises and critical events.

Alex Jaimes is Chief Scientist and SVP of Artificial Intelligence at Dataminr focused on mixing qualitative and quantitative methods to gain insights on user behavior for product innovation. He is a scientist and innovator with 15-plus years of international experience in research leading to product impact at companies including Yahoo, KAIST, Telefónica, IDIAP-EPFL, Fuji Xerox, IBM, Siemens, and AT&T Bell Labs. Prior roles include head of R&D at DigitalOcean, CTO at AiCure, and director of research and video products at Yahoo. Alex has published widely in top-tier conferences (KDD, WWW, RecSys, CVPR, ACM Multimedia, etc.) and is a frequent speaker at international academic and industry events. He holds a PhD from Columbia University.

August 17, 2023
  • Public sector
  • Corporate Risk
  • Podcast
