Agentic AI Fundamentals
Understand what makes an AI system agentic, the concept of the augmented LLM, and how to decide between workflows and agents.
What you'll learn in this module
- The difference between a single LLM call and an agentic system
- What the "augmented LLM" is and why it matters
- When to use a hardcoded workflow versus a free-form agent
- The spectrum of agentic architectures from simple to autonomous
From Single Calls to Agents
Most AI applications start the same way: send a prompt, get a response.
User prompt → LLM → Response

This works for translation, summarization, and simple Q&A. But real-world tasks are rarely one-shot. Consider:
- "Research this company and draft a competitive analysis" — requires multiple searches, reading, synthesizing
- "Process incoming support emails and route them" — requires classification, entity extraction, conditional actions
- "Monitor our API logs and alert on anomalies" — requires continuous observation, pattern recognition, decision-making
These tasks need multiple LLM calls, external tools, and decision logic. That's where agentic systems come in.
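The contrast can be sketched in code. Here `call_llm` and `web_search` are hypothetical stand-ins for a real model API and a real tool; the single-call version returns immediately, while the agentic version chains calls and a tool with logic in between.

```python
# Hypothetical stand-in for any model API call (not a real library).
def call_llm(prompt: str) -> str:
    return f"response to: {prompt}"

# Single call: one prompt, one response. Fine for translation or summarization.
answer = call_llm("Summarize this paragraph: ...")

# Hypothetical tool for the agentic version.
def web_search(query: str) -> str:
    return f"results for: {query}"

# Agentic flow: multiple LLM calls plus a tool, composed by code.
def research(company: str) -> str:
    queries = call_llm(f"List search queries to research {company}")
    findings = web_search(queries)
    return call_llm(f"Draft a competitive analysis from: {findings}")
```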
The Augmented LLM
Before building multi-step systems, you need the right building block. A raw LLM is powerful but limited — it can't search the web, call APIs, or remember past interactions. The augmented LLM adds three capabilities:
| Capability | What it adds | Example |
|---|---|---|
| Retrieval | Access to external knowledge beyond training data | RAG over documentation, web search |
| Tools | Ability to take actions in the real world | API calls, database queries, file operations |
| Memory | Persistence across interactions | Conversation history, learned preferences |
Every agentic architecture — from a two-step chain to a fully autonomous agent — is built from augmented LLMs connected together.
In the Skytells ecosystem, the augmented LLM maps directly to an action node in Orchestrator: an AI model call (via AI Gateway) combined with integrations (tools) and template variables (memory/context passing).
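As a rough sketch, the three capabilities can be wrapped around a model call like this. All names here (`AugmentedLLM`, the stubbed `model`, `retriever`, and `tools`) are illustrative, not a real API.

```python
# A minimal sketch of an augmented LLM; every name is illustrative.
class AugmentedLLM:
    def __init__(self, model, retriever, tools):
        self.model = model          # callable: prompt -> completion
        self.retriever = retriever  # callable: query -> relevant documents (Retrieval)
        self.tools = tools          # dict: tool name -> callable (Tools)
        self.memory = []            # persists across interactions (Memory)

    def run(self, user_input: str) -> str:
        context = self.retriever(user_input)
        history = "\n".join(self.memory)
        prompt = f"{history}\nContext: {context}\nUser: {user_input}"
        reply = self.model(prompt)
        self.memory.append(f"User: {user_input}\nAssistant: {reply}")
        return reply

# Stubs so the sketch runs without a real provider.
llm = AugmentedLLM(
    model=lambda p: f"answer({len(p)} chars of prompt)",
    retriever=lambda q: "doc snippet",
    tools={"search": lambda q: "results"},
)
```

The key point is that retrieval, tools, and memory sit outside the model itself: the same base LLM becomes "augmented" purely through the scaffolding around it.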
Workflows vs. Agents
There's a spectrum from fully hardcoded to fully autonomous. Understanding where your task falls determines the right architecture.
Workflows
LLM calls are orchestrated through predefined code paths. The developer decides the sequence, branching, and tool usage at design time.
Characteristics:
- Predictable execution — same input follows the same path
- Easy to debug — you can see exactly which step failed
- Lower cost — only the steps you define run
- Limited flexibility — can't handle unanticipated inputs
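A workflow in this sense might look like the following sketch: the step order and branching are fixed at design time, and the model only fills in each step. The function names and the stubbed `call_llm` are illustrative.

```python
def call_llm(prompt: str) -> str:
    # Stub for a real model API call.
    return f"[{prompt[:20]}...]"

def triage_email(email: str) -> dict:
    # Step order is fixed at design time: classify, then extract, then route.
    category = call_llm(f"Classify this email: {email}")
    entities = call_llm(f"Extract sender and subject from: {email}")
    # Branching is decided by the developer, not the model.
    route = "billing" if "refund" in email.lower() else "general"
    return {"category": category, "entities": entities, "route": route}
```

Because the path is hardcoded, a failure is always attributable to a specific step, and cost is bounded by the number of steps you wrote.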
Agents
The LLM dynamically decides which tools to use, in what order, and when to stop. The developer provides tools and constraints, not a fixed sequence.
Characteristics:
- Flexible — can handle novel inputs and tasks
- Self-directing — adjusts strategy based on intermediate results
- Higher cost — may make many LLM calls per task
- Harder to debug — execution path isn't predetermined
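The agent pattern is, at its core, a loop: the model picks the next action, the code executes it, and the result is fed back until the model decides to stop. The sketch below simulates that loop with a stubbed model that returns JSON decisions; all names are illustrative.

```python
import json

def call_llm(prompt: str) -> str:
    # Stub: a real model would choose the next action itself. Here we
    # simulate two turns: one tool call, then a finish decision.
    if "results for" in prompt:
        return json.dumps({"action": "finish", "answer": "done"})
    return json.dumps({"action": "search", "input": "competitors"})

# Hypothetical tool registry the developer provides.
TOOLS = {"search": lambda q: f"results for {q}"}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = task
    for _ in range(max_steps):            # cap iterations to bound cost
        decision = json.loads(call_llm(transcript))
        if decision["action"] == "finish":
            return decision["answer"]
        observation = TOOLS[decision["action"]](decision["input"])
        transcript += f"\n{observation}"  # feed the result back into the loop
    return "step limit reached"
```

Note the `max_steps` cap: because the model controls the loop, a constraint like this is what keeps cost and latency bounded.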
When to use which
| Factor | Use a Workflow | Use an Agent |
|---|---|---|
| Task structure | Well-defined steps | Open-ended, exploratory |
| Predictability need | High (compliance, billing) | Lower (research, creative) |
| Error tolerance | Low — failures must be caught | Higher — can retry or adapt |
| Cost sensitivity | High | Lower |
| Development speed | Faster to build and test | Faster to prototype, slower to harden |
Anthropic's research shows that the most successful production AI systems start with workflows and only graduate to agents when the task genuinely requires dynamic decision-making. Don't reach for agents when a well-designed workflow will do.
The Agentic Spectrum
In reality, most systems aren't purely one or the other. Here's the full spectrum:
| Level | Description | LLM control |
|---|---|---|
| Single call | One prompt, one response | None |
| Prompt chain | Sequential LLM calls, fixed order | None — developer controls flow |
| Routing | LLM classifies input, developer routes to fixed paths | Classification only |
| Parallel | Multiple LLM calls run simultaneously | None — developer controls fan-out |
| Orchestrator-workers | One LLM plans, others execute | Planning LLM directs workers |
| Autonomous agent | LLM decides all actions dynamically | Full — LLM controls loop |
The right position on this spectrum depends on your task's complexity, your need for reliability, and your tolerance for cost and latency.
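To make one level of the spectrum concrete, the routing level can be sketched as follows: the LLM contributes only a classification label, and the developer maps that label to fixed paths. The `classify` stub and handler names are illustrative.

```python
def classify(text: str) -> str:
    # Stub for an LLM classification call; its only job is to pick a label.
    return "billing" if "invoice" in text.lower() else "support"

# Developer-defined fixed paths; the model never chooses what they do.
HANDLERS = {
    "billing": lambda t: f"billing team handles: {t}",
    "support": lambda t: f"support team handles: {t}",
}

def route(ticket: str) -> str:
    label = classify(ticket)        # LLM control is limited to this label
    return HANDLERS[label](ticket)  # everything downstream is hardcoded
```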
A Mental Model for Architecture Decisions
When designing an agentic system, ask these three questions:
1. Can I define all the steps in advance?
   - Yes: Use a workflow (chain, route, or parallelize)
   - No: Consider an agent
2. How many tools does the task need?
   - 1–3 tools with clear usage rules: Workflow with tool calls
   - Many tools with ambiguous usage: Agent with tool selection
3. What happens when it fails?
   - Must recover gracefully with clear error messages: Workflow
   - Can retry and adapt: Agent
These questions will serve as your compass throughout this learning path.
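As a purely illustrative condensation, the three questions can be encoded as a rough decision helper; the thresholds (such as three tools) come from the heuristic above, not from any formal rule.

```python
def choose_architecture(steps_known: bool, tool_count: int,
                        can_retry_on_failure: bool) -> str:
    # Rough encoding of the three questions; purely illustrative.
    if not steps_known:
        return "agent"            # steps can't be defined in advance
    if tool_count > 3 and can_retry_on_failure:
        return "agent"            # many ambiguous tools, adaptive recovery ok
    return "workflow"             # default to the simpler architecture
```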
What you now understand
| Concept | Key takeaway |
|---|---|
| Augmented LLM | The building block: LLM + retrieval + tools + memory |
| Workflows | Predefined code paths with LLM calls — predictable, debuggable, cost-effective |
| Agents | Dynamic LLM-directed loops — flexible, adaptive, higher cost |
| The spectrum | Single call → chain → route → parallel → orchestrator → autonomous |
| Design heuristic | Start with the simplest architecture that solves the task |
Up next: Workflow Patterns — learn prompt chaining, routing, and parallelization in depth.