How startups can sell AI to enterprises

Published on Jul 06, 2023

Summary:

  • Banks and other regulated enterprises are already aggressively testing proofs of concept for generative AI.
  • Large language models are riskier to integrate than traditional machine learning models, given their black-box nature.
  • Enterprises want an FDA-style dedicated regulatory body to ensure fairness, explainability, and accountability of AI models.
  • Easier AI applications for enterprises to adopt include customer support and market research, but there is more excitement about the potential of coding automation and predictive analytics tools.


Enterprises want to buy from startups that emphasize practical use cases with clear ROI and adequate compliance features. What’s keeping regulated enterprises from adopting generative AI? Fears around risk management and the auditability of AI models were the top concerns, according to leaders at top financial institutions and the startups that sell to them. SignalFire and Truist Ventures brought together a roundtable of founders and banking executives to explore what enterprises need to see in order to trust AI startups.

Overcoming LLM unpredictability

Each bank at the event had already spun up around a dozen proofs of concept in the past few months to aggressively test where their business could benefit from AI. Eight months into the explosion of large language models, leaders said the new technology is experiencing significantly faster adoption than the internet. But the consensus is that much of the technology, or at least the maturity of the products delivering it, isn’t quite ready for mass deployment.

The big difference between LLMs and traditional machine learning models is that LLMs like OpenAI’s GPTs aren’t transparent or reproducible. They can provide different answers each time they’re queried, making it challenging to ensure robustness and test coverage.
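To make the testing problem concrete, here is a minimal sketch (not from the roundtable) of how a risk team might quantify that unpredictability: call a model repeatedly with the same prompt and measure how often the most common answer comes back. The `model_fn` stubs below are hypothetical stand-ins for a real LLM call.

```python
import random
from collections import Counter

def consistency_rate(model_fn, prompt, runs=20):
    """Call a model function repeatedly with the same prompt and
    return how often the most common answer appears (1.0 = fully
    reproducible, lower = sampling variance a risk team must handle)."""
    answers = [model_fn(prompt) for _ in range(runs)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / runs

# Hypothetical stand-in for a deterministic model: scores a perfect 1.0
def deterministic_model(prompt):
    return "approved"

# Hypothetical stand-in for a sampled model that can flip between answers
def sampled_model(prompt):
    return random.choice(["approved", "approved", "denied"])

assert consistency_rate(deterministic_model, "Can I open an account?") == 1.0
```

A harness like this is one way a bank’s internal audit team could set a reproducibility floor for a use case before approving it for deployment.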

Banking executives and AI startup founders gather at SignalFire’s Building Trust in AI event.

That unpredictability introduces significant issues around risk management and governance. Banking executives expressed the need for industry consensus and regulatory adjustments to address the unique challenges posed by LLMs. While some LLMs are highly advanced and user-friendly—with features like content customization, cognitive search, and the ability to integrate into existing systems—banks are wary that the rapid progress of LLMs may outpace governance and regulatory frameworks, creating challenges for both regulators and internal risk management teams.

Startups selling to the enterprise should emphasize how they’ve reined in rogue outputs with proper safeguards, human-in-the-loop feedback, and testing. Instead of pitching the unlimited potential of AI, founders should frame how their products control and harness its power.
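As an illustration of what “proper safeguards” and human-in-the-loop feedback can mean in practice, here is a hypothetical sketch (the rule patterns and function names are assumptions, not anything discussed at the event): screen every raw model output against blocking rules, and divert anything that trips a rule into a human review queue instead of sending it automatically.

```python
import re

# Hypothetical blocking rules a regulated enterprise might enforce
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),             # SSN-like strings
    re.compile(r"guaranteed return", re.IGNORECASE),   # prohibited financial claims
]

def guard_output(raw_output, human_review_queue):
    """Screen a model's raw output before it reaches a customer.
    Output that trips a rule is withheld and queued for human review."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(raw_output):
            human_review_queue.append(raw_output)
            return None  # withheld pending review
    return raw_output

queue = []
assert guard_output("Your balance is $120.", queue) == "Your balance is $120."
assert guard_output("This fund has a guaranteed return.", queue) is None
assert len(queue) == 1
```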

Receptive to regulation

Executives believe regulators and auditors will need to develop new approaches to ensure the responsible and safe deployment of LLMs. Overall, the inclination of the roundtable was pro-regulation: there is a strong need for external oversight to ensure fairness, explainability, and accountability in LLM usage. The federal government is thorough in its scrutiny (especially of banks) and will emphasize the importance of explainability for financial institutions. Attendees discussed the role of regulators in guiding and facilitating the adoption of LLMs while balancing the need for innovation against potential risks.

SignalFire principals Bradford Jones and Lisa Liu from our New York City team

One approach would be an FDA-like approval process for commercializing LLMs, where experienced individuals would assess the safety and potential risks of these models. One banking exec drew a parallel to the automotive industry: “Before there was a [National Highway Traffic Safety Administration] every car company had their own safety standards. They just made it up. That's where we are now [with AI], right? Pretty much every bank has their own audit process. And I think it would be hugely beneficial, honestly, to have one certification authority.”

Developing such a regulatory body would certainly be complex. Still, the consensus is that it is necessary to ensure responsible and safe deployment of LLMs so that enterprises and the public can benefit from their capabilities at scale. Appearing resistant to regulation could scare off enterprise customers. Getting involved in public policy to help shape and cooperate with regulations can enhance trust.

The low-hanging AI fruit for enterprises

There are some use cases for AI that are less fraught for regulated enterprises. Executives see the lowest-hanging fruit as chatbots, digital front-end applications, customer support, and research tools. Internal solutions are also less likely to trigger regulatory scrutiny than public-facing tools. Executives pointed out that AI can greatly accelerate progress in these areas, though they have yet to see truly transformative business use cases for generative AI. AI products they would be excited to see include coding automation tools that pay down technical debt, and predictive analytics within specific sectors.

Banking executives agree that startups may be better positioned than enterprises themselves to solve some of the deeper issues of model governance. “We don’t expect to build this type of capability and are looking to rely on external innovation,” one attendee said.

AI and enterprise leaders discuss opportunities at SignalFire's Building Trust in AI event.

In order to reach industry-wide adoption and build confidence in the technology, there is a strong need for startup partners to provide tools for spot-checking, middleware layers, and governance frameworks. Top needs include solutions for data leakage, model inversion, and model explainability. But as Truist’s head of AI Bjorn Austraat put it, “the last thing we want is a black box explaining another black box.”
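One minimal shape such spot-checking middleware could take (a hypothetical sketch, not anything Truist or the attendees described) is an audit layer that records each model call with a content digest, so reviewers can later verify that logged decisions haven’t been altered and sample them for explainability checks.

```python
import hashlib
import json
import time

def audit_record(prompt, output, checks):
    """Build a tamper-evident audit entry for a single model call.
    The SHA-256 digest covers the full entry, so any later edit to the
    logged prompt, output, or check results is detectable on review."""
    entry = {
        "timestamp": time.time(),
        "prompt": prompt,
        "output": output,
        "checks_passed": checks,
    }
    payload = json.dumps(entry, sort_keys=True)
    entry["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return entry

record = audit_record(
    "What is my card's APR?",
    "Your card's APR is 21.9%.",
    {"pii_scan": True, "claim_check": True},
)
assert len(record["digest"]) == 64  # hex-encoded SHA-256
```

Crucially, a layer like this stays interpretable on its own: it logs and hashes, rather than asking a second model to judge the first, which avoids the “black box explaining another black box” problem.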

Early-stage founders should internalize that enterprises care less about the underlying technology or dreams of what it could do in the future. Founders who frame their products around commercial impact, feasibility, and security will have better success converting enterprises into lucrative customers.

*Portfolio company founders listed above have not received any compensation for this feedback and did not invest in a SignalFire fund. Please refer to our disclosures page for additional disclosures.
