AI 101: Cutting through the jargon

Feb 23, 2025

In under 15 words

ChatGPT brought “artificial intelligence” to the vernacular, but what do the words actually mean?

In a few more words

Last week, my long-time idea sparring partner (a real human) was explaining that he had been in an argument with another friend about how “artificial intelligence” differs from “machine learning.” His debate opponent felt AI was wholly magical, and completely different from simple, probability-based machine learning.

Separately, a very endearing colleague of mine said, “and now I am hearing about ‘machine learning’?!” The frustration and I-can’t-keep-up-ness were evident in his intonation.

In reality, the field of artificial intelligence is quite old. The famous Turing Test was published in 1950. But it wasn’t until this past year, with the release of OpenAI’s GPT-4, that some experiments indicated we had created a model that could pass the test. Through great strides in both technical and product innovation, “artificial intelligence” has officially leapt from the realms of academia to executive boardrooms to small-town dinner tables.

The “overnight success” - and pervasive nature - of AI has forced multiple generations to suddenly become fluent not only in probability, but also in applied statistics - i.e. “What does this mean, in the real world, for me and my business?”

With a visual

Let’s dissect the field of artificial intelligence, starting from the outermost ring. Each ring will be explained through a sales-focused example.

💡 Key takeaway: With each step inward, sophistication increases and certainty decreases.

(1) Artificial intelligence is, in the simplest sense, an umbrella term encompassing efforts to get machines to complete tasks that typically require human intelligence. In life, we often develop if-this-then-that heuristics in order to accomplish tasks with little mental energy. When someone says, “As a rule of thumb…” they are referencing a heuristic.

In business, we hard code these heuristics into our systems to make better decisions.

“If the lead source is ‘Referral,’ then they’re an A-grade lead.” 

“If someone’s title is ‘chief marketing officer,’ then they’re automatically in our ideal customer profile.”

“If the prospect doesn’t open ten emails in a row, they fall out of the sales pipeline.”

All of these examples are rules-based, and when we program them into our CRMs, we are transferring our human intelligence to the machine. This form of logic-transfer could be considered the ancient ancestor of today’s artificial intelligence.
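To make the rules-based idea concrete, here is a minimal sketch of the three heuristics above hard-coded as a function. The field names (`lead_source`, `title`, `consecutive_unopened`) and the grade labels are hypothetical, not from any real CRM schema.

```python
def grade_lead(lead: dict) -> str:
    """Apply hard-coded if-this-then-that heuristics to a lead record."""
    if lead.get("consecutive_unopened", 0) >= 10:
        return "out-of-pipeline"   # prospect ignored ten emails in a row
    if lead.get("lead_source") == "Referral":
        return "A"                 # referrals are automatically A-grade
    if lead.get("title", "").lower() == "chief marketing officer":
        return "A"                 # CMOs fit the ideal customer profile
    return "B"                     # everything else awaits more signals

print(grade_lead({"lead_source": "Referral"}))  # → A
```

Notice that every decision path was authored by a human; the machine executes the logic but learns nothing from outcomes.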

(2) Machine learning is the practice of analyzing historical data and leveraging the detected patterns to predict a future outcome. Compared to the first ring, machine learning is better at uncovering and exploiting relationships between variables, and at associating them with a desired outcome (e.g. “bought” versus “didn’t buy”). Machine learning is also characterized by more precise predictions (a 0-100 score, for example), and by the ability to adapt those predictions based on new data (this is what we are referring to when we say ‘learning’).

Example: The aforementioned "features" (referral lead source, job title, email open activity) combine to form a Lead Score. As more deals close or fall through, the system learns which combination of these features best predicts success, automatically adjusting scores to become more accurate over time.
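The jump from hard-coded rules to learned weights can be sketched with a tiny logistic-regression lead scorer, trained by gradient descent on made-up historical deals. The features and outcomes here are entirely hypothetical toy data, and real systems would use a proper ML library rather than hand-rolled math.

```python
import math

# Hypothetical history: (is_referral, is_cmo, email_open_rate) per deal,
# with label 1 if the deal closed. Purely illustrative numbers.
X = [(1, 0, 0.9), (1, 1, 0.7), (0, 1, 0.6), (0, 0, 0.1), (0, 0, 0.3), (1, 0, 0.8)]
y = [1, 1, 1, 0, 0, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Gradient descent over the history: this loop is the "learning."
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(2000):
    for features, label in zip(X, y):
        pred = sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b)
        err = pred - label
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, features)]
        b -= 0.1 * err

def lead_score(features):
    """A 0-100 score instead of a hard yes/no rule."""
    return round(100 * sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b))

print(lead_score((1, 0, 0.9)))  # a referral with high engagement scores high
```

Re-running the training loop after new deals close (or fall through) is what makes the scores adjust over time, with no human rewriting the rules.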

(3) Deep learning is a type of machine learning that uses neural networks - sophisticated mathematical processes with multiple layers of analysis, similar to how the human brain processes information in layers. While traditional machine learning often requires humans to specify which features to analyze, deep learning networks can automatically discover important and varied features.

Example: Some sales- and customer-success-focused call-recording software uses deep learning to analyze call recordings for sentiment. Among other capabilities, a deep learning model could detect the difference between a sarcastic “Yeah right, Jim” and a question, “Yeah, right, Jim?” This level of nuanced understanding would be difficult to achieve with simpler machine learning approaches.
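The “layers of analysis” idea can be shown mechanically with a toy forward pass through a two-hidden-layer network. The weights below are arbitrary illustrative numbers, not a trained model, and the “acoustic features” input is hypothetical.

```python
def relu(v):
    """A common activation: keep positive signals, zero out the rest."""
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    """One dense layer: a weighted sum of every input, plus a bias, per unit."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

# Each layer re-represents the previous layer's output - the stacked
# transformations are what "deep" refers to.
x = [0.5, -1.2, 0.3]                   # hypothetical acoustic features of a call snippet
h1 = relu(layer(x, [[0.2, -0.5, 0.1], [0.4, 0.3, -0.2]], [0.0, 0.1]))
h2 = relu(layer(h1, [[0.7, -0.3], [0.5, 0.6]], [0.05, -0.05]))
out = layer(h2, [[1.0, -1.0]], [0.0])  # one final sentiment-style score
print(out)
```

In a real network, training (not a human) sets those weights, which is how the intermediate layers come to represent features no one explicitly specified.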

💡 Key takeaway: Note how the rings up until now have been backward looking. The techniques focus on historical data. Broadly, they take things that have happened in the past, and project them forward. This changes a bit once we get to generative AI…  

(4) Generative AI is a field focused on creating new content - words, images, or code. Unlike the preceding models, which often try to find one correct answer (e.g. "Will this lead convert?"), generative AI deals with tasks where several different outputs could be equally valid. When there are infinitely many ways to end the sentence "The day started out sunny, and then…", the evaluation of what makes a "good" response becomes much more subjective (Cassie Kozyrkov explains this concept excellently here).

Example: Generative AI could analyze customer information in your CRM and create multiple versions of advertising copy, each potentially effective but taking different approaches - one might focus on pain points, another on aspirational messaging, and another on specific features. The data provides the “intelligence,” but the automatically-created ad copy is what qualifies it as “generative.”

(5) Large Language Models (LLMs) are generative AI systems trained on vast amounts of text data to understand and generate human-like language. They work by predicting the next most likely word (or "token") in a sequence, and extend well beyond “chatbots” to include sophisticated reasoning and analysis. “Large” language models get their name from the billions to trillions of parameters that make up their neural networks. Small language models also exist.

Example: LLMs are commonplace in CRMs for things like automatically summarizing calls, or drafting emails to prospects. They’re particularly impressive at understanding context in communications.
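The next-word prediction described above can be sketched with a toy bigram model: count which word follows which in a tiny corpus, then predict the most frequent continuation. Real LLMs use neural networks over billions of parameters rather than raw counts, but the "predict the next token" framing is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus; real models train on vast amounts of text.
corpus = ("the day started out sunny and then the rain came and then "
          "the sun returned").split()

# Tally which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("and"))  # → then
```

Scaling this idea up - richer context than one word, learned representations instead of counts - is, loosely, the path from this toy to an LLM.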

(6) ChatGPT is the user interface placed atop OpenAI’s LLMs - usually the latest release. Over time, the product has been augmented to act as an agent, equipped with file search, image generation, and web search capabilities. Fundamentally, however, the reasoning engine behind ChatGPT is OpenAI’s suite of LLMs.

Why does this matter?

To tell you the truth, I don’t think it matters much whether the average business person can recite the nuanced differences between each ring. However, absorbing the implications of applying these technologies to our lives and businesses does matter. When we begin understanding the applications of AI, we are forced to answer so many questions:

“What does it mean to be ‘right’ in this context?”

“How ‘right’ does the output even need to be?”

“When is AI necessary, and when could I get by with a simpler system?”

“How much control am I willing to let go of in x-y-z area?”

For businesses (and individuals), the journey from the outermost to the innermost rings provides baby-step reps in the art of navigating variability and uncertainty. By stepping into AI through this lens, companies can develop the culture, education programs, and measurement practices they need. In bringing everyone along through shared language and understanding, they can create truly ROI-driving AI strategies - not just flashy ones.