Error Monitoring Automation
Evaluation & Quality
What It Does
Automates Sentry error triage: listing unresolved issues, generating structured prompts for AI-powered fixes, creating tickets with full stacktrace context, and prioritizing by user impact.
The Pattern
Error monitoring has a natural pipeline: detect, investigate, act. This skill automates each step.
Detection queries the Sentry API for unresolved issues, sorted by occurrence count and affected users. The key fields per issue:
- Short ID: Human-readable identifier (like APP-2D)
- Title: The error message
- Count: How many times it's occurred
- User Count: How many users are affected
- Priority: High/medium/low
- Culprit: Source file and function
- Last Seen: Most recent occurrence
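As a sketch, these fields can be pulled from the JSON returned by Sentry's project issues endpoint. The field names (`shortId`, `count`, `userCount`, etc.) follow Sentry's public REST API, but treat the exact payload shape as an assumption and verify it against your Sentry version:

```python
import json
import urllib.request

SENTRY_TOKEN = "YOUR_AUTH_TOKEN"  # assumption: an auth token with project:read scope

def fetch_unresolved(org: str, project: str) -> list[dict]:
    """Query Sentry for unresolved issues, sorted by occurrence count."""
    url = (f"https://sentry.io/api/0/projects/{org}/{project}/issues/"
           "?query=is:unresolved&sort=freq")
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {SENTRY_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarize_issue(issue: dict) -> dict:
    """Reduce one issue payload to the triage fields listed above."""
    return {
        "short_id": issue.get("shortId"),
        "title": issue.get("title"),
        "count": int(issue.get("count", 0)),   # Sentry returns count as a string
        "user_count": issue.get("userCount", 0),
        "priority": issue.get("priority"),
        "culprit": issue.get("culprit"),
        "last_seen": issue.get("lastSeen"),
    }
```

`summarize_issue` is the piece worth keeping pure: it turns each raw payload into exactly the seven fields the triage rules below operate on.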
Investigation pulls the full event details including the latest stacktrace, browser/device context, and user breadcrumbs (the sequence of actions before the error). Breadcrumbs are often more useful than the stacktrace for understanding root cause.
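A minimal sketch of pulling that breadcrumb trail out of an event payload. It assumes breadcrumbs live in the event's `entries` list under type `"breadcrumbs"`, which matches Sentry's event API shape, but confirm against a real response before relying on it:

```python
def extract_breadcrumbs(event: dict) -> list[str]:
    """Flatten a Sentry event's breadcrumb trail into readable lines.

    Assumes breadcrumbs sit in event["entries"] under type "breadcrumbs"
    (an assumption to verify against your API responses).
    """
    lines = []
    for entry in event.get("entries", []):
        if entry.get("type") != "breadcrumbs":
            continue
        for crumb in entry.get("data", {}).get("values", []):
            category = crumb.get("category", "unknown")
            message = crumb.get("message") or ""
            lines.append(f"[{category}] {message}".strip())
    return lines
```

The output reads as a timeline ("[ui.click] submit-button", "[fetch] POST /api/login", "[error] ...") which is exactly the story a stacktrace alone doesn't tell.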
Three action paths:
- Create a ticket. For issues that need team attention. The ticket includes error details, a formatted stacktrace, browser/device context, reproduction hints from breadcrumbs, and a suggested fix based on error analysis.
- Generate an AI-fixable prompt. For issues an AI agent can fix directly. The prompt is structured with the error, relevant source code (if source maps are available), user context, and investigation areas. This can be piped directly into a coding agent.
- Auto-fix pipeline. For mature setups: list errors, analyze each, generate a fix, test it in a sandbox, open a draft PR if tests pass. The most autonomous path.
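The second path can be sketched as a prompt builder. The helper below is hypothetical; `issue` uses the triage summary field names from the detection step, and the structure (error, location, impact, stacktrace, breadcrumbs, instructions) mirrors what the skill describes:

```python
def build_fix_prompt(issue: dict, stacktrace: str, breadcrumbs: list[str]) -> str:
    """Assemble a structured, AI-fixable prompt for a coding agent.

    Hypothetical helper: `issue` carries short_id/title/culprit/user_count,
    and relevant source is only present when source maps resolved it.
    """
    crumb_text = "\n".join(f"  {i + 1}. {c}" for i, c in enumerate(breadcrumbs))
    return (
        f"Fix the following production error.\n\n"
        f"Error: {issue['title']} ({issue['short_id']})\n"
        f"Location: {issue['culprit']}\n"
        f"Affected users: {issue['user_count']}\n\n"
        f"Stacktrace:\n{stacktrace}\n\n"
        f"User actions before the crash:\n{crumb_text}\n\n"
        "Investigate the location above, explain the root cause, "
        "and propose a minimal fix with a test."
    )
```

Printing this to stdout keeps it pipeable: the same script works interactively, in a cron job, or feeding a coding agent.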
Triage priority rules:
| Occurrence Count | Action |
|---|---|
| 50+ | Create ticket immediately |
| 10-49 | Batch into weekly triage |
| Under 10 | Monitor, may be edge cases |
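The table reduces to a small pure function, which is handy when the triage loop runs unattended (action names here are illustrative):

```python
def triage_action(count: int) -> str:
    """Map an issue's occurrence count to the triage action in the table above."""
    if count >= 50:
        return "create_ticket"   # high-volume: file immediately
    if count >= 10:
        return "weekly_batch"    # moderate: fold into weekly triage
    return "monitor"             # low-volume: may be edge cases
```
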
Key Decisions
Include breadcrumbs in AI prompts. The sequence of user actions before an error is often more diagnostic than the stacktrace. A button click followed by a network request followed by a crash tells a story that a stacktrace alone doesn't.
CLI scripts over MCP servers. Error monitoring tools need to work in cron jobs, terminal sessions, and CI pipelines. CLI scripts are portable across all of these. MCP servers tie you to a specific runtime.
Source maps matter. Without source maps, stacktraces show minified code. Upload source maps during your build process. The difference between "error at line 1, column 47823" and "error at auth.ts:42 in validateToken()" is the difference between a useful and useless bug report.
API over CLI for deep analysis. Sentry's CLI tool lacks stacktrace and breadcrumb details. Always use the REST API for investigation. The CLI is fine for quick status checks.
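A sketch of the deep-analysis fetch. The endpoint path follows Sentry's public REST API for an issue's most recent event, which carries the full stacktrace and breadcrumbs; confirm the path against your Sentry version:

```python
import json
import urllib.request

def latest_event_request(issue_id: str, token: str) -> urllib.request.Request:
    """Build the request for an issue's latest event.

    Path assumes Sentry's public REST API; verify against your instance.
    """
    url = f"https://sentry.io/api/0/issues/{issue_id}/events/latest/"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

def fetch_latest_event(issue_id: str, token: str) -> dict:
    """Fetch the full event payload for investigation (stacktrace, breadcrumbs)."""
    with urllib.request.urlopen(latest_event_request(issue_id, token)) as resp:
        return json.load(resp)
```

Keeping request construction separate from the network call makes the script easy to test and to drop into a cron job.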
When to Use It
- Daily error triage (check for new issues each morning)
- After deployments (compare error rates before and after)
- Building automated error-to-fix pipelines
- Creating structured bug reports that contain enough context for someone (or something) to fix without additional investigation