AI at IACP: what law enforcement leaders can learn from the private sector’s adoption of AI

One week out from IACP Boston, I remain struck by just how many conference vendors were selling AI-powered services built on the cloud platform I spent most of my career selling to Fortune 500 firms. Many of these vendors articulated value in terms of efficiency gains or new insights unlocked – pitches straight from a B2B tech conference. 

Law enforcement leaders evaluating these technologies should be aware that, in the private sector, many of these pitches have not yet panned out: a study released last month by Battery Ventures indicated that only 5.5% of enterprise AI use cases are in production. It’s still very early.

That does not mean AI is worthless. It means buyers evaluating new technology often underestimate the costs, security considerations, dependency mapping and organizational changes required to fully realize the value of a new technology. These miscalculations and misunderstandings delay or derail even worthwhile projects. Really good vendors will help you anticipate and plan around these complexities. Seek them out!

So, with the caveat that not every lesson from the private sector will apply, here’s what I think my friends in law enforcement should be asking of themselves and their vendors before they make big AI decisions:

Are we predicting things or creating things? 

You don’t need to be an expert in every subcategory of AI. But you do need to know whether a vendor’s technology relies on historical data to forecast outcomes (Predictive AI) or creates something entirely new (Generative AI). This distinction has important implications for data dependencies, privacy, compliance and how you handle inevitable inaccuracies.
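
To make the distinction concrete, here’s a deliberately simplified Python sketch. The data is invented, and the generative call is pseudocode for a hypothetical vendor client, not any real API:

```python
# Predictive AI: learns patterns from historical records to forecast an outcome.
# All data below is invented purely for illustration.
from sklearn.linear_model import LogisticRegression

historical_features = [[2, 0], [5, 1], [1, 0], [7, 1]]  # e.g., counts/flags from past cases
historical_outcomes = [0, 1, 0, 1]                      # e.g., whether the outcome occurred

model = LogisticRegression().fit(historical_features, historical_outcomes)
print(model.predict_proba([[4, 1]]))  # a probability, entirely dependent on your past data

# Generative AI: creates new content from a prompt.
# (Hypothetical client; real vendor APIs vary.)
# summary = vendor_client.generate("Summarize this incident report: ...")
```

The first approach is only as good as the historical data behind it; the second raises different questions about prompts, training data and output quality.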

What data sources are involved and what dependencies should I be aware of?

Effective AI requires lots of data. Make sure you’re clear on what data is proprietary to the vendor and what data is third-party; third-party data may come with use restrictions you’ll need to understand. The vendor will probably need your data, too. What assumptions is the vendor making about the cleanliness or accessibility of your data? How will your data move from your systems to theirs, securely? Are there performance, integration or cost implications of this data movement? This gets more complicated, and more costly, if data inputs are real-time.
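
One concrete example of what “securely” can mean: records encrypted on your side before they ever leave your systems. A minimal sketch, assuming the Python cryptography library; the key handling here is illustrative only, and real deployments need proper key management:

```python
# Sketch: client-side encryption before data moves to a vendor.
# Key handling is illustrative; store real keys in a KMS/HSM, never in code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, generated and held in a key management service
cipher = Fernet(key)

record = b'{"case_id": "2024-0001", "narrative": "..."}'  # made-up record
encrypted = cipher.encrypt(record)  # this ciphertext is what travels to the vendor
assert cipher.decrypt(encrypted) == record  # only key holders can read it back
```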

How seriously does this vendor take security?

Your AI vendor probably runs in the cloud. They will inherit some security controls as a result. This inheritance does not extend to how they handle your data; that is their responsibility. Their architecture, configuration, encryption, tooling and operational policy choices are what enable them to meet compliance frameworks like CJIS or SOC 2 and reduce the chance or scope of a data breach. AWS calls this the Shared Responsibility Model. Azure has a similar framework. Any vendor you are exposing your data to should have clear documentation outlining their security posture and third-party attestations to back it up. They should be willing to codify minimum security and compliance obligations in their contract with you.

What about data privacy?

Handling sensitive data is inevitable in law enforcement. Your vendor should be anonymizing or redacting appropriately. If prompts or free-form text are part of your use case, will users be able to enter PII? Do they need to? Once this vendor has your data, especially sensitive data types, what do they do with it and how long do they keep it? Keep in mind that more than half of today’s data loss events involve users attempting to input PII into generative AI sites.
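
To make “redacting” concrete, here’s a deliberately crude sketch using regular expressions. The patterns and data are illustrative only; real vendors should be doing something far more robust:

```python
# Crude, illustrative PII redaction. Real systems need much more than regex:
# named-entity recognition, context awareness and human review.
import re

PII_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace recognized PII with labeled placeholders before text leaves your systems."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach the witness at 555-867-5309 or w@example.com. SSN on file: 123-45-6789."))
# -> Reach the witness at [PHONE] or [EMAIL]. SSN on file: [SSN].
```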

Can this vendor explain and help me mitigate inaccurate or inappropriate outputs?

Predictions will not be perfect. Generated content will not always hit the mark. You mitigate the risks associated with these inaccuracies by embedding this technology in a human-centered workflow. To do that well, you need a vendor who is aware of and transparent about their technology’s limitations. Are they? Are they proactive in helping you think through what a good workflow looks like?

What other costs do I need to plan for?

Cost overruns are common in private-sector AI use cases, especially generative ones. Cloud storage costs money. Moving data between clouds costs money. Integrating software platforms costs money. Vendors are very aware of these costs. Are they including them in a flat pricing model? Is some of the cost on you? If they are subsidizing these costs for now, don’t expect that to last. Make sure you understand the full picture.
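
As a back-of-the-envelope illustration, data movement alone can add up fast. The volumes and rates below are entirely made up; substitute your own and check your provider’s actual pricing:

```python
# Hypothetical numbers for illustration only.
monthly_footage_tb = 20      # TB of video/records sent to the vendor each month
egress_usd_per_gb = 0.09     # a common order of magnitude for cloud egress; verify yours

annual_egress_cost = monthly_footage_tb * 1000 * egress_usd_per_gb * 12
print(f"${annual_egress_cost:,.0f} per year")  # $21,600 per year, before storage,
                                               # processing or integration costs
```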

What does success look like?

I heard a lot about efficiency gains at IACP. Efficiency gains are hard to measure, and the assumptions a vendor is making about those gains may or may not hold for your organization. That doesn’t mean AI has no value. Maybe a more modest efficiency gain is still impactful. Maybe there is another outcome that actually matters more to your organization. Start there. Don’t communicate hard-and-fast metrics to stakeholders without being really clear on how attainable they are and what assumptions underpin them.

AI is here, it is real, and when it’s deployed effectively, it can have a big impact. It can be impactful for you, too. Discount the hype and focus on the outcomes.
