Newsletter Summary: Enterprise AI is pretty damn hard!
Sep 28, 2025

In under 15 words
Context is key for successful enterprise AI, but doesn't matter much in personal use.
In a few more words
Recently released papers from OpenAI and Anthropic detail chatbot usage on consumer ChatGPT plans versus enterprise AI deployments through the Claude API.
On consumer ChatGPT plans, the likelihood of a message being work-related has dropped to less than 30%. Examining the message-type mix, we see that in the personal context, users predominantly request Practical Guidance (28.8%) or Seeking Information (24.4%), with Writing requests a close third at 23.9%. Contrast this with messages categorized as work-related, where Writing requests capture 40% and Seeking Information (a seeming replacement for web search) drops to 13.5%.
Personal use:

In our personal lives, web search doesn’t really have a right or a wrong answer. Rather, there are just gradients of quality. The “rightness” of a recipe or product review is entirely subjective.
Enterprise use:

In contrast, the enterprise thrives on systems, standards, and the reproduction of products and services with factory-like precision. Large enterprises often explicitly strive to define right and wrong. Looking at the plots, enterprise employees can’t simply Seek Information in the work context, because even the largest of Large Language Models cannot be expected to provide the right information for a particular company.
So, what do companies with enterprise AI deployments use it for?
In Anthropic’s paper, the authors compare and contrast usage of Claude Chat versus the Claude API. Overwhelmingly, the data suggests that enterprise deployments via the API are for automation tasks: 77% of API transcripts show automation patterns (especially full task delegation) versus just 12% for augmentation (e.g., collaborative refinement and learning).
This, combined with the overwhelming majority of enterprise Claude API usage being programming/coding-based, tells us that early success is being found in the handful of tasks where context is easiest to supply to LLMs. Namely, in coding, where the repository structure lends itself to communicating fully encapsulated context.
Contrast this with a complex sales process for an enterprise customer, which "…might require Claude having access not only to information contained within a Customer Relationship Management system, but also to tacit knowledge located in the minds of account executives, marketers, and external contacts. All else equal, lacking access to such contextual information will make Claude less capable."
The key takeaway
The importance of context is so pronounced that the authors cite it as a clear capability that will separate firms that succeed with AI from those that will continue to struggle:
Access to appropriate contextual information is needed for sophisticated deployment … Correcting for this bottleneck may require firms to restructure their organization, invest in new data infrastructure, and centralize information for effective model deployment.
Two practical applications
In the field: Daily voice logs — It has never been easier to turn a voice memo into structured data. Use this strategy operationally, or combine it with other data for analyses.
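As a minimal sketch of the voice-log idea: once a memo has been transcribed (by whatever speech-to-text tool you use), the remaining work is asking an LLM for a fixed-schema JSON object and parsing its reply defensively. The field names and prompt wording below are assumptions for illustration, not from either paper; swap in your own schema and model call.

```python
import json
import re

# Hypothetical field list for a daily field log; adjust to your operation.
FIELDS = ["date", "site", "tasks_completed", "issues", "follow_ups"]

def build_extraction_prompt(transcript: str, fields: list[str]) -> str:
    """Build a prompt asking an LLM to turn a raw voice-memo transcript
    into a JSON object with exactly the given keys."""
    keys = ", ".join(f'"{f}"' for f in fields)
    return (
        f"Extract a JSON object with exactly these keys: {keys}. "
        "Use null for anything not mentioned. Reply with JSON only.\n\n"
        f"Transcript:\n{transcript}"
    )

def parse_log_json(reply: str) -> dict:
    """Pull the first JSON object out of the model's reply, tolerating
    surrounding prose or code fences."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model reply")
    return json.loads(match.group(0))
```

Pair these two helpers with any transcription service and any chat-completion API: build the prompt, send it, then parse the reply into a row you can append to a spreadsheet or database for later analysis.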
In the office: Home-brewed RAG with thoughtful chunking — Strategic chunking balances targeted information retrieval against just-enough broader context to craft highly relevant, yet contextually aware responses. Use it when uploading documents to ChatGPT isn't delivering the quality you need.
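One simple version of "thoughtful chunking" is to pack whole paragraphs into size-bounded chunks and carry the last paragraph of each chunk into the next, so that whatever the retriever surfaces arrives with a sliver of surrounding context. The sizes and overlap below are illustrative defaults, not recommendations from the papers.

```python
def chunk_by_paragraph(text: str, max_chars: int = 800, overlap_paras: int = 1) -> list[str]:
    """Pack paragraphs into chunks of at most ~max_chars characters,
    repeating the last overlap_paras paragraphs at the start of the next
    chunk so retrieved chunks keep some neighboring context.
    A single paragraph longer than max_chars becomes its own chunk."""
    paras = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for p in paras:
        if current and size + len(p) > max_chars:
            chunks.append("\n\n".join(current))
            # Carry trailing paragraphs forward as overlap.
            current = current[-overlap_paras:] if overlap_paras else []
            size = sum(len(x) for x in current)
        current.append(p)
        size += len(p)
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Each chunk would then be embedded and indexed as usual; at query time you retrieve the top-scoring chunks and the overlap ensures a match near a paragraph boundary still reads coherently.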