# False Positives & Severity Tuning

Learn how to handle false positives and tune severity levels to keep SyntaxValid accurate and trustworthy.
Accuracy is critical for trust.
SyntaxValid is designed to minimize false positives while preserving sensitivity to real risk.
This page explains how false positives are handled and how severity tuning keeps results actionable.
---
## What is a false positive?
A false positive occurs when a rule flags an issue that does not represent real risk in context.
False positives:
- Reduce trust in tools
- Create alert fatigue
- Lead teams to ignore important findings
SyntaxValid treats false positives as a signal quality problem, not a user error.
---
## How SyntaxValid minimizes false positives
SyntaxValid reduces false positives through:
- Context-aware analysis
- Policy-based enforcement instead of hardcoded blocking
- Separation of detection and decision-making
- Deterministic, explainable rules
- Gradual enforcement via severity thresholds
Detection remains broad.
Enforcement remains selective.
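To make the detection/enforcement split concrete, here is a minimal Python sketch. SyntaxValid's internal data model is not shown in this documentation, so the `Finding` shape, rule IDs, and severity names below are hypothetical illustrations of the idea, not the product's actual API:

```python
from dataclasses import dataclass

# Hypothetical severity scale, ordered from least to most serious.
SEVERITY_ORDER = ["info", "low", "medium", "high", "critical"]

@dataclass
class Finding:
    rule_id: str
    severity: str
    message: str

def enforce(findings, blocking_threshold="high"):
    """Split findings into blocking and visible-only sets.

    Detection stays broad: every finding is kept and reported.
    Enforcement stays selective: only findings at or above the
    policy's blocking threshold fail the check.
    """
    cutoff = SEVERITY_ORDER.index(blocking_threshold)
    blocking = [f for f in findings if SEVERITY_ORDER.index(f.severity) >= cutoff]
    visible = [f for f in findings if SEVERITY_ORDER.index(f.severity) < cutoff]
    return blocking, visible

findings = [
    Finding("SV101", "low", "Unused variable"),
    Finding("SV210", "critical", "Unsanitized input in query"),
]
blocking, visible = enforce(findings, blocking_threshold="high")
print([f.rule_id for f in blocking])  # the critical finding blocks
print([f.rule_id for f in visible])   # the low finding stays visible
```

The key design point: lowering the threshold never deletes findings; it only moves them between the blocking and visible-only sets.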
---
## Severity tuning vs disabling rules
Disabling rules hides information.
Severity tuning preserves visibility.
Severity tuning allows teams to:
- Lower enforcement without losing insight
- Keep findings visible but non-blocking
- Adjust impact on TrustScore
This maintains long-term accuracy.
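The difference between tuning and disabling can be sketched as a severity override map. The rule IDs, severity names, and override shape here are hypothetical, assumed for illustration only:

```python
# Hypothetical default severities shipped with rules.
default_severity = {"SV101": "medium", "SV210": "critical"}

# Policy override: tune a rule's severity down instead of disabling it.
# The rule still runs and its findings stay visible.
policy_overrides = {"SV101": "info"}

def effective_severity(rule_id):
    return policy_overrides.get(rule_id, default_severity[rule_id])

def is_blocking(rule_id, threshold="high"):
    order = ["info", "low", "medium", "high", "critical"]
    return order.index(effective_severity(rule_id)) >= order.index(threshold)

print(effective_severity("SV101"))  # "info": tuned down, not removed
print(is_blocking("SV101"))         # False: visible but non-blocking
print(is_blocking("SV210"))         # True: still blocks
```

Disabling the rule would delete the `SV101` entry entirely; tuning keeps it in the report with reduced impact.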
---
## When a finding feels incorrect
If a finding appears incorrect or irrelevant:
1. Review the explanation and context
2. Check the active policy
3. Confirm severity and blocking status
4. Adjust policy thresholds if needed
5. Re-run analysis to verify changes
Do not suppress findings blindly.
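Steps 3–5 of the checklist above can be sketched as follows. The policy threshold, finding shape, and rule ID are hypothetical, used only to show how a threshold change alters blocking status on re-run:

```python
# Hypothetical severity scale and finding shape for illustration.
order = ["info", "low", "medium", "high", "critical"]

def blocking_ids(findings, threshold):
    """Return the rule IDs that would block under the given threshold."""
    cut = order.index(threshold)
    return [rid for rid, sev in findings if order.index(sev) >= cut]

findings = [("SV310", "medium")]

# 3. Confirm severity and blocking status under the current policy.
print(blocking_ids(findings, "medium"))  # SV310 blocks

# 4–5. Raise the blocking threshold and re-run to verify the change.
print(blocking_ids(findings, "high"))    # SV310 no longer blocks, but remains reported
```

Note that the finding itself never disappears from either run; only its blocking status changes.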
---
## Common causes of false positives
False positives often arise from:
- Unusual but safe architectural patterns
- Legacy code with accepted trade-offs
- Domain-specific constraints
- Generated code with controlled behavior
Policies allow teams to reflect these realities.
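One way policies can reflect these realities is with context-scoped overrides, for example by path. The path patterns, rule IDs, and override structure below are hypothetical; SyntaxValid's actual policy format is not shown in this documentation:

```python
import fnmatch

# Hypothetical per-context overrides: reflect accepted trade-offs
# without disabling detection globally.
context_overrides = [
    # (path glob, rule_id, tuned severity)
    ("legacy/**", "SV410", "low"),      # legacy code, accepted trade-off
    ("generated/**", "SV101", "info"),  # generated code, controlled behavior
]

def tuned_severity(path, rule_id, default):
    """Return the severity for a finding, honoring context overrides."""
    for pattern, rid, sev in context_overrides:
        if rid == rule_id and fnmatch.fnmatch(path, pattern):
            return sev
    return default

print(tuned_severity("legacy/db.py", "SV410", "high"))  # tuned down in legacy code
print(tuned_severity("src/app.py", "SV410", "high"))    # full severity elsewhere
```

Scoping overrides to a path or context keeps the rule at full strength everywhere the trade-off has not been accepted.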
---
## Severity tuning best practices
- Start with default severities
- Tune severity before changing blocking rules
- Avoid marking everything as low severity
- Re-evaluate tuning periodically
- Align tuning with real incidents and risk
Balanced tuning improves signal quality.
---
## False positives and TrustScore
TrustScore reflects policy-weighted risk, not a raw count of findings.
Reducing severity or blocking impact:
- Lowers TrustScore sensitivity
- Does not hide issues
- Preserves auditability
TrustScore remains meaningful when tuning is intentional.
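A policy-weighted score of this kind can be sketched as a baseline minus severity-weighted penalties. The weights, baseline, and formula below are hypothetical assumptions for illustration; they are not SyntaxValid's actual TrustScore calculation:

```python
# Hypothetical severity weights for a policy-weighted score.
WEIGHTS = {"info": 0, "low": 1, "medium": 3, "high": 7, "critical": 15}

def trust_score(findings, overrides=None, base=100):
    """Subtract policy-weighted risk from a baseline score.

    Lowering a rule's severity via an override reduces its weight,
    but the finding itself is still counted and reported.
    """
    overrides = overrides or {}
    penalty = sum(WEIGHTS[overrides.get(rule_id, sev)] for rule_id, sev in findings)
    return max(0, base - penalty)

findings = [("SV101", "medium"), ("SV210", "high")]
print(trust_score(findings))                     # 100 - (3 + 7) = 90
print(trust_score(findings, {"SV101": "info"}))  # 100 - (0 + 7) = 93
```

Tuning `SV101` down raises the score slightly without removing the finding from the report, which is what keeps the score auditable.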
---
## Organizational learning
False positives are feedback.
They help:
- Improve policies
- Refine enforcement
- Align tools with real-world usage
Teams that tune thoughtfully gain better long-term results.
---
## What not to do
- Do not disable detection globally
- Do not ignore repeated false positives
- Do not over-tune to silence important signals
Accuracy requires balance.
---
## Next steps
- Trust and security guarantees
- Advanced integrations
- Reference documentation