4 AI engineers, 3 products, and 2 key insights
Mar 11, 2025

What 1 local software engineering manager can teach us about AI in Production.
According to the 2025 AI & Data Leadership Executive Benchmark Survey, Fortune 1000 companies are continuing to make strides towards getting AI out of R&D and into Production. Whereas the 2024 survey found that only 5% of responding companies had AI in Production, this year’s release found 24%. Similarly, the “early production” stage jumped from 25% to 47%.
For one Boise-based engineering manager, getting AI into Production encompasses nearly his entire job description. To keep the content of this blog as transparent as possible while also protecting the privacy of the company, we’ll call our engineering manager Chris from here forward. As the leader of his company’s first dedicated AI engineering team, Chris manages eight engineers total. Four are dedicated to AI, and four maintain the company’s legacy customer portal.
The first AI product
As a large IT solutions provider, Chris’s company has been quick to embrace AI for both internal efficiencies and customer-facing product development. “We went for sales efficiency with the first product,” he explained. The company had years of meeting notes - “trip reports” - from each interaction the sales staff had with clients. “We had years and years of these reports that historically were only used as a management tool.” But with Large Language Models (LLMs), suddenly the trip reports became a hidden treasure chest of quantifiable data.
Chris’s team deployed a RAG application to empower the sales staff and managers to interact with and query the trip reports.
At a high level, the application:
Filters to relevant trip reports upon user login - narrowing the search field, thereby reducing varied or surprising results
Features a chat-like interface for the staff member to interact with - increasing the approachability of the tool for non-technical staff
Transforms some prompts into SQL syntax and queries - also improving accuracy by reliably returning queryable information
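The three behaviors above can be sketched in miniature. This is a hypothetical, heavily simplified illustration - the report data, the keyword-based router, and all function names are assumptions, and a real system would use an LLM for the text-to-SQL and summarization steps rather than the stand-ins shown here:

```python
from dataclasses import dataclass

@dataclass
class TripReport:
    account: str
    text: str

# Hypothetical in-memory store standing in for the company's trip-report database.
REPORTS = [
    TripReport("acme", "Discussed renewal; client asked about pricing tiers."),
    TripReport("acme", "Demoed the new dashboard; positive feedback."),
    TripReport("globex", "Escalated a support issue; promised follow-up."),
]

def filter_reports(user_accounts):
    """Step 1: on login, narrow the search field to the user's own accounts."""
    return [r for r in REPORTS if r.account in user_accounts]

def route_prompt(prompt):
    """Step 3 (simplified): decide whether a prompt should become a SQL
    query (counts, aggregates) or a semantic search over report text.
    A production system would use an LLM text-to-SQL step here."""
    aggregate_words = ("how many", "count", "average", "total")
    if any(w in prompt.lower() for w in aggregate_words):
        return "sql"
    return "semantic"

def answer(prompt, user_accounts):
    """Step 2: the chat interface calls this on each user message."""
    reports = filter_reports(user_accounts)
    if route_prompt(prompt) == "sql":
        # Stand-in for generated SQL such as:
        #   SELECT COUNT(*) FROM trip_reports WHERE account IN (...)
        return f"{len(reports)} matching reports"
    # Stand-in for retrieval plus LLM summarization over the filtered set:
    # here, a naive keyword match against the report text.
    tokens = prompt.lower().split()
    return [r.text for r in reports if any(t in r.text.lower() for t in tokens)]
```

Filtering before retrieval is the design choice worth noting: scoping the corpus at login shrinks the search space, which is what "reducing varied or surprising results" amounts to in practice.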
By incorporating this tool into meeting prep, the company’s sales team members save time and improve relationships with customers.
AI Product #2
Continuing in the pursuit of efficiency, Chris’s company experimented with incorporating LLM-based workflows and actions into the internal service desk. “The general idea,” says Chris, “was to improve employee workflows by automatically taking care of certain service desk tickets.” Ultimately, though, adoption has remained low among the service desk team, as they’ve found it difficult to integrate AI into their workflow in a way that meets their quality expectations. “To fulfill a ticket,” the team “juggles complex business rules, multiple queues, and strict response time agreements.”
Thus, the vision of the AI-enabled Service Desk has yet to be fulfilled. This aligns [nearly exactly] with results seen in the AI Leadership survey for 2025. Overall, data leaders declined in their reports of AI’s primary value being in “worker liberation from mundane tasks.” The leading response - “exponential productivity gains; efficiency” - leaves some room for interpretation, as it is not too dissimilar from the preceding options. However, one read of this could be that employees are not necessarily experiencing tasks being fully offloaded to AI. Rather, they’re working through tasks more quickly with AI assistance.

For the service desk team at Chris’s company, barriers to AI adoption remain.
AI Product #3
With the backbone of successful internal-tool development, Chris’s company started exploring customer-facing AI products. “We are basically taking our internal chat interface, and enabling customers to do the same [kinds of tasks],” he explained. The tool - which is currently industry-agnostic and in the early stages - is the “first step in helping our customers integrate AI into their infrastructure.”
This “vertical integration of AI,” he explains, is becoming more and more desired as data is collectively garnering recognition as the secret sauce to highly successful AI implementations.
Accordingly, this was reflected in the #1 principal finding from the 2025 Data & Leadership survey, noting respondents’ “... increased focus on the importance of data, spurred by the recognition that great AI relies on great data.”
The minimum viable product - “MVP” - at Chris’s company has been pre-sold to a few existing customers. Having passed the initial market validation test, the application is being migrated to more robust front- and back-end infrastructure.
Two key insights
From listening to Chris explain the company, team, and products, two lessons stand out:
Executive sponsorship
Widespread user adoption (like in the case of a large sales staff) is born of both great products and substantial executive support. “We have strong leadership buy-in,” said Chris, referencing how the trip reports product got to a [stunning] 50% adoption rate across the sales teams. “We had organic uptake once people got into it, but at first the pressure came from above to learn the new tools and adapt.”
This point is often glossed over in mainstream AI commentary: the fact that people are creatures of habit. For employees in all kinds of roles, adapting to changing workflows - especially those that involve LLMs - can be annoying at the least and, in the worst cases, threatening.
This resistance could be identified in the 2025 survey, with the #3 principal finding: “Most organizations continue to struggle with adoption and transformation, with cultural challenges noted as the greatest obstacle to progress.”
3 Types of AI solutions
Chris’s three products reflect three distinct types of AI solutions. Oversimplifying to make the point memorable, the three types boil down to:
Make us smarter … by unlocking historically difficult-to-access insights
Make us faster … by automating routine or mundane tasks
Make us money … by creating revenue-generating products
For companies with significant qualitative data (e.g. sales calls), LLMs have [largely but imperfectly] unlocked what used to take armies of user researchers and NLP experts to synthesize. And because of the ubiquity of this kind of data, its novelty, and the difficulty in falsifying it (i.e. it can be somewhat difficult to gauge if an LLM summarized 25,000 phone calls incorrectly without listening to all 25,000 calls), adoption of this flavor of AI is more palatable. Chris’s company saw this with Product #1.
But when the action is easily measurable - like replacing a key job duty - we expect performance to at least meet, if not exceed, human output. And being in the middle ground - the hybrid world where humans are trying to collaborate with AI solutions - can be very challenging. Like when a new sports team assembles, or a new band forms, everyone has to learn their new role. The same could be true for incorporating AI into workflows, which is the struggle highlighted in the service desk AI implementation.
And finally, when it comes to making money, it all seems to be a little bit up in the air. It’s clear that the providers of foundation models - OpenAI, Anthropic, Google - all make lots of money. But the question remains open, not just for Chris’s new product, but for thousands of other companies making internal and external AI products:
Can second- and third-degree companies make a good enough return to justify the investment when building on the backs of giants?