AI Analysis
How FixSense uses AI to analyze E2E test failures and identify root causes.
Overview
Every time a Playwright or Cypress test fails in your CI, FixSense uses AI to perform intelligent root cause analysis. The AI examines the failure logs, test context, and error patterns to deliver actionable insights.
What Gets Analyzed
For each failed test, the AI receives:
- Error message and stack trace
- Test file name and test title
- Expected vs actual behavior
- Test framework log output (actions, assertions, network)
- Screenshot and trace references (if available)
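Taken together, the items above can be pictured as a single payload object. The sketch below is purely illustrative — the field names (`testFile`, `frameworkLogs`, `artifacts`, and so on) are assumptions, not FixSense's actual schema:

```javascript
// Hypothetical shape of the data sent for analysis. Field names are
// illustrative assumptions, not FixSense's real schema.
const failurePayload = {
  testFile: 'login.spec.ts',
  testTitle: 'user can log in',
  errorMessage: "Timed out waiting for selector '#submit'",
  stackTrace: 'at login.spec.ts:42:18',
  expected: "element '#submit' to be visible",
  actual: 'element not found',
  frameworkLogs: ['fill #username', 'fill #password', 'click #submit'],
  artifacts: { screenshot: 'screenshots/login-failure.png', trace: null }, // optional
};

// The error and test identity are always present; artifacts may be missing.
function hasRequiredFields(payload) {
  return ['testFile', 'testTitle', 'errorMessage'].every(
    (key) => typeof payload[key] === 'string' && payload[key].length > 0
  );
}
```

The optional `artifacts` fields mirror the "if available" note above: an analysis still runs when no screenshot or trace was captured.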
Analysis Output
Each analysis produces a structured result:
Root Cause
A clear explanation of why the test failed. Examples:
- "The login button selector `#submit` was changed to `.btn-primary` in the latest commit"
- "A race condition: the API response arrives after the assertion timeout of 5000ms"
- "The test relies on a specific data seed that was modified in the database migration"
Failure Category
Each failure is categorized as:
| Category | Description |
|---|---|
| Regression | A code change broke existing functionality |
| Flaky | Intermittent failure due to timing, network, or environment |
| Test Maintenance | Test code needs updating to match new UI/behavior |
| Environment | CI environment issue (dependency, service, config) |
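To make the four buckets concrete, here is a toy keyword heuristic mapping error text to a category. This is only an illustration of the distinctions in the table — FixSense's actual classification is model-driven, not keyword-based:

```javascript
// Toy heuristic for the four failure categories -- purely illustrative.
// FixSense's real categorization is done by the AI model, not keywords.
function roughCategory(errorMessage) {
  const msg = errorMessage.toLowerCase();
  if (/econnrefused|module not found|command not found/.test(msg)) return 'Environment';
  if (/timeout|timed out|intermittent/.test(msg)) return 'Flaky';
  if (/selector|locator/.test(msg)) return 'Test Maintenance';
  return 'Regression';
}
```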
Fix Suggestion
Specific code changes to resolve the issue:
```js
// Before (failing)
await page.click('#submit');

// After (suggested fix)
await page.click('.btn-primary');
```
Confidence Score
A 0-100% score indicating how certain the AI is about its analysis. Scores above 80% are typically very accurate.
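Putting the pieces together, one analysis result can be pictured as a single object. The field names below are assumptions for illustration, not FixSense's actual response format:

```javascript
// Hypothetical shape of one completed analysis (field names are assumptions).
const analysis = {
  rootCause: "The login button selector '#submit' was changed to '.btn-primary'",
  category: 'Test Maintenance',
  fixSuggestion: "await page.click('.btn-primary');",
  confidence: 0.92, // 92%
};

// Per the docs, scores above 80% are typically very accurate.
const isHighConfidence = analysis.confidence > 0.8;
```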
Analysis Actions
Each analysis card in the dashboard has actions you can take:
Reanalyze
Triggers a new AI API call on the same failure data (test name, error message, and diff context). The AI may produce different root cause explanations, flakiness scores, and fix suggestions on each run. Useful when:
- The original analysis had low confidence
- An AI error occurred during the first attempt
- You want a fresh perspective after understanding the failure better
The card shows a spinner while re-analyzing and updates in-place with the new results — root cause, flakiness score, suggested fix, and confidence are all replaced.
Each reanalyze counts as one analysis toward your monthly quota, since it makes a real AI API call. If you've reached your plan limit, the reanalyze action fails with an error instead of running.
Copy
Copies the full analysis to your clipboard in a clean text format — test name, root cause, error message, and suggested fix. Useful for pasting into Slack, Jira, or team discussions.
Delete
Permanently removes the analysis from your dashboard and database. Includes an inline confirmation step to prevent accidental deletion.
Pattern Learning
FixSense learns from your feedback to get smarter over time:
- When you mark an analysis as helpful, FixSense remembers the failure pattern. If the same test fails with the same error again, you get an instant cached result — no waiting, no extra analysis usage.
- When you mark an analysis as unhelpful, FixSense ensures it won't reuse that result. The next time the same failure occurs, a fresh analysis runs with improved context.
This means the more you use FixSense, the faster and more accurate it becomes for your specific codebase.
Efficiency
FixSense only analyzes failed tests. Passing tests are completely ignored, keeping your usage efficient and your monthly analysis count low.
A typical team with 500 tests uses only a fraction of their monthly analysis quota, even on the Free plan.
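As a back-of-the-envelope illustration of why only analyzing failures keeps usage low — the failure rate and run count below are assumed numbers, not FixSense plan figures:

```javascript
// Rough monthly-usage estimate. All numbers are illustrative assumptions.
function estimateMonthlyAnalyses(totalTests, failureRate, ciRunsPerMonth) {
  // Only failing tests are analyzed; passing tests cost nothing.
  return Math.ceil(totalTests * failureRate) * ciRunsPerMonth;
}

// e.g. 500 tests, 1% failing per run, 20 CI runs a month:
// Math.ceil(500 * 0.01) * 20 = 5 * 20 = 100 analyses
```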