Beginner · 35 min · Module 1 of 6

Agentic AI Fundamentals

Understand what makes an AI system agentic, the concept of the augmented LLM, and how to decide between workflows and agents.

What you'll learn in this module

  • The difference between a single LLM call and an agentic system
  • What the "augmented LLM" is and why it matters
  • When to use a hardcoded workflow versus a free-form agent
  • The spectrum of agentic architectures from simple to autonomous

From Single Calls to Agents

Most AI applications start the same way: send a prompt, get a response.

User prompt → LLM → Response

This works for translation, summarization, and simple Q&A. But real-world tasks are rarely one-shot. Consider:

  • "Research this company and draft a competitive analysis" — requires multiple searches, reading, synthesizing
  • "Process incoming support emails and route them" — requires classification, entity extraction, conditional actions
  • "Monitor our API logs and alert on anomalies" — requires continuous observation, pattern recognition, decision-making

These tasks need multiple LLM calls, external tools, and decision logic. That's where agentic systems come in.
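The jump from one-shot calls to multi-step logic can be sketched in a few lines. Here `llm` is a hypothetical stand-in for a real model API call, not any particular provider's SDK:

```python
def llm(prompt: str) -> str:
    """Placeholder model: returns a canned answer based on the prompt."""
    if "classify" in prompt:
        return "billing"
    return "summary of: " + prompt

# One-shot: a single prompt, a single response.
answer = llm("Summarize this ticket: invoice charged twice")

# Multi-step: classify first, then branch on the result —
# this is the decision logic a single call cannot express.
category = llm("classify: invoice charged twice")
if category == "billing":
    result = "route to billing queue"
else:
    result = llm("Summarize this ticket: invoice charged twice")
```

The second snippet already needs two model calls plus developer-written control flow, which is exactly the territory the rest of this module maps out.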


The Augmented LLM

Before building multi-step systems, you need the right building block. A raw LLM is powerful but limited — it can't search the web, call APIs, or remember past interactions. The augmented LLM adds three capabilities:

Language Model + Retrieval + Tools + Memory → Output
Capability | What it adds                                      | Example
Retrieval  | Access to external knowledge beyond training data | RAG over documentation, web search
Tools      | Ability to take actions in the real world         | API calls, database queries, file operations
Memory     | Persistence across interactions                   | Conversation history, learned preferences

Every agentic architecture — from a two-step chain to a fully autonomous agent — is built from augmented LLMs connected together.
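One way to picture the building block in code is a small wrapper that holds the model alongside its three augmentations. This is an illustrative sketch, not a real framework API; every name here (`AugmentedLLM`, `retrieve`, the stub lambdas) is invented for the example:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AugmentedLLM:
    model: Callable[[str], str]               # the raw LLM call
    retrieve: Callable[[str], list[str]]      # Retrieval: external knowledge
    tools: dict[str, Callable[[str], str]]    # Tools: actions it can take
    memory: list[str] = field(default_factory=list)  # Memory: persistence

    def respond(self, prompt: str) -> str:
        context = self.retrieve(prompt)       # pull in external knowledge
        self.memory.append(prompt)            # persist across interactions
        full_prompt = "\n".join([*self.memory, *context])
        return self.model(full_prompt)        # tools are invoked by the
                                              # surrounding workflow or loop

# Wiring it up with stubs:
block = AugmentedLLM(
    model=lambda p: f"answer({len(p)} chars of context)",
    retrieve=lambda q: [f"doc about {q}"],
    tools={"search": lambda q: f"results for {q}"},
)
```

The point of the sketch is the shape, not the internals: every architecture later in this module composes units like this one.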


Workflows vs. Agents

There's a spectrum from fully hardcoded to fully autonomous. Understanding where your task falls determines the right architecture.

Workflows

LLM calls are orchestrated through predefined code paths. The developer decides the sequence, branching, and tool usage at design time.

Input → LLM Call 1 → Condition → LLM Call 2a (Path A) or LLM Call 2b (Path B) → Output

Characteristics:

  • Predictable execution — same input follows the same path
  • Easy to debug — you can see exactly which step failed
  • Lower cost — only the steps you define run
  • Limited flexibility — can't handle unanticipated inputs
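The branching workflow above can be written as ordinary code: the developer fixes the sequence and the condition at design time. `llm` is again a stub standing in for real model calls:

```python
def llm(prompt: str) -> str:
    """Placeholder model with canned behavior for the two prompt types."""
    if prompt.startswith("Classify"):
        return "refund" if "refund" in prompt else "other"
    return f"drafted: {prompt}"

def support_workflow(email: str) -> str:
    # LLM Call 1: fixed first step
    category = llm(f"Classify this email: {email}")
    # Condition: developer-written branch picks the path
    if category == "refund":
        return llm(f"Draft a refund response: {email}")    # Path A
    return llm(f"Draft a general response: {email}")       # Path B
```

Because the paths are explicit, a failure is trivially localized: you know whether it was the classification call or one of the drafting calls.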

Agents

The LLM dynamically decides which tools to use, in what order, and when to stop. The developer provides tools and constraints, not a fixed sequence.

Task → LLM decides next action → Execute action → Task complete? → No: loop back / Yes: Return result

Characteristics:

  • Flexible — can handle novel inputs and tasks
  • Self-directing — adjusts strategy based on intermediate results
  • Higher cost — may make many LLM calls per task
  • Harder to debug — execution path isn't predetermined

When to use which

Factor             | Use a Workflow                  | Use an Agent
Task structure     | Well-defined steps              | Open-ended, exploratory
Predictability need | High (compliance, billing)     | Lower (research, creative)
Error tolerance    | Low (failures must be caught)   | Higher (can retry or adapt)
Cost sensitivity   | High                            | Lower
Development speed  | Faster to build and test        | Faster to prototype, slower to harden

The Agentic Spectrum

In reality, most systems aren't purely one or the other. Here's the full spectrum:

Single LLM Call → Prompt Chain → Routing Workflow → Parallel Workflow → Orchestrator-Workers → Autonomous Agent
Level                | Description                                           | LLM control
Single call          | One prompt, one response                              | None
Prompt chain         | Sequential LLM calls, fixed order                     | None (developer controls flow)
Routing              | LLM classifies input, developer routes to fixed paths | Classification only
Parallel             | Multiple LLM calls run simultaneously                 | None (developer controls fan-out)
Orchestrator-workers | One LLM plans, others execute                         | Planning LLM directs workers
Autonomous agent     | LLM decides all actions dynamically                   | Full (LLM controls the loop)

The right position on this spectrum depends on your task's complexity, your need for reliability, and your tolerance for cost and latency.
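To make one point on the spectrum concrete, here is routing: the LLM's only job is classification, and the developer owns every downstream path. `classify` is a stub for an LLM call, and the handler names are invented for the example:

```python
def classify(text: str) -> str:
    """Stub LLM classifier: one word out, nothing else."""
    return "billing" if "invoice" in text else "technical"

# Fixed paths, chosen by the developer at design time.
HANDLERS = {
    "billing":   lambda t: f"billing team handles: {t}",
    "technical": lambda t: f"tech team handles: {t}",
}

def route(ticket: str) -> str:
    label = classify(ticket)        # LLM control: classification only
    return HANDLERS[label](ticket)  # developer control: everything after
```

Sliding right on the spectrum means moving lines from the developer's side of this split to the LLM's side.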


A Mental Model for Architecture Decisions

When designing an agentic system, ask these three questions:

1. Can I define all the steps in advance?
   → Yes: use a workflow (chain, route, or parallelize)
   → No: consider an agent

2. How many tools does the task need?
   → 1–3 tools with clear usage rules: workflow with tool calls
   → Many tools with ambiguous usage: agent with tool selection

3. What happens when it fails?
   → Must recover gracefully with clear error messages: workflow
   → Can retry and adapt: agent

These questions will serve as your compass throughout this learning path.
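The three questions can even be encoded as a tiny decision helper. This is purely illustrative (the thresholds and the two-of-three vote are assumptions, not a published rule), but it captures the heuristic:

```python
def choose_architecture(steps_known: bool, tool_count: int,
                        must_recover_gracefully: bool) -> str:
    """Vote on the three questions; majority wins, ties go to the workflow."""
    workflow_votes = sum([
        steps_known,               # Q1: steps definable in advance
        tool_count <= 3,           # Q2: few tools with clear usage rules
        must_recover_gracefully,   # Q3: failures must be caught cleanly
    ])
    return "workflow" if workflow_votes >= 2 else "agent"
```

Tie-breaking toward "workflow" reflects the module's design heuristic: start with the simplest architecture that solves the task.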


What you now understand

Concept          | Key takeaway
Augmented LLM    | The building block: LLM + retrieval + tools + memory
Workflows        | Predefined code paths with LLM calls — predictable, debuggable, cost-effective
Agents           | Dynamic LLM-directed loops — flexible, adaptive, higher cost
The spectrum     | Single call → chain → route → parallel → orchestrator → autonomous
Design heuristic | Start with the simplest architecture that solves the task

Up next: Workflow Patterns — learn prompt chaining, routing, and parallelization in depth.
