# Runs

Track review and execute runs in Cloud Agents, understand statuses, and inspect timelines, findings, artifacts, and approvals.

In Cloud Agents, a run is the full execution record for a review or execute task. If you need evidence, history, or accountability, you go to Runs.
## Who Uses the Runs Page
| Role | What they usually look for |
|---|---|
| Developers and code owners | What the run found, changed, or paused on |
| Engineering leads | Whether work is moving, blocked, or failing across repositories |
| Platform teams | Whether policy, validation, or automation defaults are causing friction |
| Approvers | Whether the run produced enough evidence to approve or reject the next step |
## Runs List
The runs list is the operational queue for the workspace. It groups work under filters such as:
- All
- Running
- Queued
- Awaiting
- Completed
- Failed
Each row typically shows:
- the repository
- the run type such as Execute or Code Review
- age and duration
- current status
This view answers the immediate question: what is happening to our work right now?
## Run Statuses
| Status | Meaning | Operator action |
|---|---|---|
| Queued | The run exists but has not begun active execution yet. | Wait or inspect whether capacity is constrained. |
| Running | The agent is actively processing the run. | Monitor if needed. |
| Awaiting | The run is paused for approval or another explicit checkpoint. | Open the run and decide. |
| Completed | The run finished successfully. | Inspect findings or artifacts and move forward. |
| Failed | The run ended with an execution or validation failure. | Open timeline and artifacts to diagnose. |
| Cancelled | The run was intentionally stopped. | Confirm whether a replacement run is needed. |
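The status-to-action mapping in the table above can be sketched as a small triage helper. This is purely illustrative: the status names come from the table, but the function, set names, and return strings are hypothetical, not part of the product.

```python
# Hypothetical triage helper mirroring the status table above.
# Status names follow the docs; everything else is invented for illustration.

TERMINAL = {"Completed", "Failed", "Cancelled"}  # no further execution will occur

def operator_action(status: str) -> str:
    """Return the recommended first action for a run status."""
    actions = {
        "Queued": "wait, or check whether capacity is constrained",
        "Running": "monitor if needed",
        "Awaiting": "open the run and approve or reject",
        "Completed": "inspect findings or artifacts and move forward",
        "Failed": "open the timeline and artifacts to diagnose",
        "Cancelled": "confirm whether a replacement run is needed",
    }
    return actions.get(status, "unknown status: inspect manually")
```

A helper like this is useful in dashboards or chat bots that summarize run queues, because it keeps the operator guidance in one place instead of scattering it across UI strings.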
## Review Runs vs Execute Runs
| Run type | Primary question | What to inspect first |
|---|---|---|
| Review | Is the existing change safe, complete, or risky? | Findings, then timeline |
| Execute | Did the agent prepare the intended change correctly and safely? | Artifacts, then timeline |
This distinction matters because the same status can mean different things. A completed review run is an analysis result. A completed execute run is a prepared change outcome that may still require validation or approval.
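The inspection order from the table above can be captured in a few lines. This is a sketch, not a product API; the function name and the fallback behavior for unknown run types are assumptions.

```python
# Hypothetical lookup encoding the "what to inspect first" column above.

def first_inspection(run_type: str) -> list[str]:
    """Return the tabs to inspect, in order, for a given run type."""
    order = {
        "Review": ["Findings", "Timeline"],    # is the change safe and complete?
        "Execute": ["Artifacts", "Timeline"],  # was the change prepared correctly?
    }
    # Unknown run types fall back to the timeline, the universal audit trail.
    return order.get(run_type, ["Timeline"])
```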
## Run Detail Page
Opening a run reveals the detailed execution record. The current UI organizes this into tabs such as:
| Tab | What it contains |
|---|---|
| Overview | High-level summary of the run and its current state |
| Timeline | Ordered event log of what happened during execution |
| Findings | Review issues, risks, and identified problems |
| Artifacts | Files, patches, plans, or outputs produced by the run |
### Timeline
The Timeline tab is the audit trail. It records major steps such as:
- base branch ready
- repository context prepared
- AI analyzed repository
- code changes prepared
- retry scheduled
- approval requested
This is the fastest way to understand sequence and failure locality.
#### Why the timeline matters

**Sequence clarity.** You can tell whether the run failed during context prep, analysis, change preparation, or validation.

**Visible waiting.** A paused run does not disappear into ambiguity. The timeline shows where and why it stopped.

**Post-run review.** Teams can audit what happened after the fact instead of trusting a summary sentence.
### Findings
The Findings tab is most important for review runs. It centralizes the issues the agent wants a person to evaluate.
Use findings for:
- bug risk review
- regression risk review
- missing test identification
- security or path-sensitive concerns
If a run reports 0 findings, that does not automatically mean the code is ready to merge. It means the automated review did not identify findings in its configured scope.
### Artifacts
The Artifacts tab holds the tangible outputs of a run. Depending on run type and rollout, artifacts can include:
- generated plan blocks
- diff or patch outputs
- structured reports
- execution logs or related attachments
Artifacts matter because they turn a run from commentary into inspectable work product.
## Pending Approvals Inside a Run
When a run requires approval, the detail page surfaces a clear approval panel with:
- the approval reason, such as merge
- action buttons such as Approve and Reject
- expiration information
This keeps governance inside the same operational record as the work itself.
## Reading a Failed Run

When a run fails, read it in this order:

1. Status at the list level
2. Timeline to locate the failure stage
3. Artifacts for the actual output or failing context
4. Repository settings if the run appears to be blocked by policy rather than execution
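The reading order above can be expressed as a small triage routine. The run's data shape here is entirely hypothetical (the product's real schema is not documented in this guide); the point is the order of checks, not the field names.

```python
# Hypothetical failed-run triage, following the reading order above.
# The `run` dictionary shape is invented for illustration.

def diagnose_failed_run(run: dict) -> str:
    # 1. Status at the list level: only failed runs need this path.
    if run.get("status") != "Failed":
        return "not a failed run"
    # 2. Timeline: locate the stage where execution stopped.
    timeline = run.get("timeline", [])
    failed_stage = next((e["step"] for e in timeline if e.get("failed")), None)
    if failed_stage is None:
        return "no failing step recorded; check artifacts and logs"
    # 3. Artifacts: look for output tied to the failing stage.
    related = [a for a in run.get("artifacts", []) if a.get("step") == failed_stage]
    if not related:
        # 4. Nothing produced at that stage suggests a policy block
        #    rather than an execution error.
        return f"failed at '{failed_stage}'; no artifact, check repository policy"
    return f"failed at '{failed_stage}'; inspect artifact '{related[0]['name']}'"
```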
## Questions the Runs Page Should Help You Answer
| Question | Best place to look |
|---|---|
| Why is this run waiting? | Timeline and approval state |
| What exactly did the agent produce? | Artifacts |
| What risks were identified? | Findings |
| Is the problem policy or execution? | Timeline first, then repository settings |
| Is the workspace healthy overall? | Status filters here, then Usage & Add-ons |
## Recommended Review Flow for Operators

1. Filter Runs to the relevant status.
2. Open the specific run.
3. Read Timeline before interpreting summary text.
4. Review Findings or Artifacts depending on run type.
5. Move to Approvals if a decision is required.
Do not approve a run from the status badge alone. Use the timeline and artifacts to understand what the run actually did or intends to do.
## Signals That the Workflow Needs Tuning
- many runs stop in Awaiting for work that should be low risk
- the same repository repeatedly produces cancellations or retries
- execute runs finish but validation frequently disagrees with the intended outcome
- review runs often produce either zero-value noise or miss obvious risk areas
When these patterns appear, the issue is often prompt scope, repository policy, or agent selection rather than the existence of the run system itself.
## Related

- Approvals: Learn how approval gates interact with the run lifecycle.
- Chat & Dispatch: Start the right kind of run with the right scope.
- Repositories: Adjust repo policy if runs are too noisy, too broad, or too permissive.
- Usage & Add-ons: Monitor capacity when queued or blocked work starts to accumulate.