# Severity levels
Every finding produced by a ticketyboo scan carries a severity level. Severity levels let you calibrate which findings your contract treats as blocking and which are informational. This page explains what each level means and how the scanner assigns them.
## The five levels

Findings are assigned one of five severity levels, listed here in descending order of urgency:
| Level | Meaning | Typical use |
|---|---|---|
| critical | An issue that poses immediate risk to security, data integrity, or compliance. Requires remediation before merge. | Hardcoded AWS secrets, SQL injection, unencrypted public S3 bucket |
| high | A serious issue with known exploitability or significant compliance exposure. Strong remediation recommendation. | Known CVE in a direct dependency, command injection, IAM wildcard policy |
| medium | A noteworthy concern that increases risk but does not represent immediate exploitation. Plan to address. | Unpinned dependency versions, weak cryptography usage, missing README |
| low | A minor issue or deviation from best practice. Low exploitation risk. Address during normal maintenance. | Missing type hints, moderate function complexity, numerous TODO comments |
| info | An observation with no inherent risk. Informational only. | Dependency inventory, license classification, file structure notes |
## How the scanner assigns severity
Each scan layer assigns severity according to its own rules, documented below. The learning loop can adjust severity up or down based on accumulated human feedback, but the default rules are deterministic.
### Governance layer
| Finding | Default severity |
|---|---|
| Missing README | medium |
| No CI workflow detected | medium |
| No test files found | medium |
### Dependency layer (deep)
| Finding | Default severity |
|---|---|
| Critical CVE (CVSS >= 9.0) | critical |
| High CVE (CVSS 7.0-8.9) | high |
| Moderate CVE (CVSS 4.0-6.9) | medium |
| Low CVE (CVSS < 4.0) | low |
| Unpinned dependency version | medium |
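The CVSS bands in the table above can be sketched as a simple threshold function. This is an illustrative reconstruction of the documented defaults, not the scanner's actual code; the function name is hypothetical.

```python
# Sketch of the default CVSS-to-severity mapping described above.
# Thresholds come from the table: >= 9.0 critical, 7.0-8.9 high,
# 4.0-6.9 medium, below 4.0 low. Name is illustrative.
def cvss_to_severity(score: float) -> str:
    """Map a CVSS base score to a default ticketyboo severity level."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

print(cvss_to_severity(9.8))  # critical
print(cvss_to_severity(5.3))  # medium
```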
### Secret detection layer (deep)
| Finding | Default severity |
|---|---|
| AWS access key or secret key pattern | critical |
| Private key (RSA/EC/SSH) | critical |
| Database connection URL with credentials | critical |
| High-entropy token (Shannon > 4.5, length >= 16) | high |
| Generic API key or webhook secret pattern | high |
| JWT token hardcoded | medium |
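The high-entropy rule above (Shannon entropy above 4.5 bits per character, length at least 16) can be sketched directly. The helper names here are hypothetical; the scanner's internals may differ.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of a string in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def is_high_entropy_token(token: str) -> bool:
    # Thresholds from the table: entropy > 4.5 and length >= 16.
    return len(token) >= 16 and shannon_entropy(token) > 4.5

# 31 distinct characters: entropy ~ log2(31) ~ 4.95, so this trips the rule.
print(is_high_entropy_token("aB3x9Kq2Lm8Zw4Tn7Rv1Yc6Jf0Hd5Gs"))  # True
# A repeated character has zero entropy, so this does not.
print(is_high_entropy_token("aaaaaaaaaaaaaaaa"))  # False
```

Note that entropy is bounded by log2 of the number of distinct characters, so very short strings cannot exceed 4.5 bits per character even if every character is unique; the length floor and the entropy threshold work together.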
### SAST layer (deep)
| Finding | Default severity |
|---|---|
| Command injection (Python AST) | critical |
| SQL injection (Python AST) | critical |
| Insecure deserialization (Python AST) | high |
| Cross-site scripting (Python AST) | high |
| Path traversal (Python AST) | high |
| Weak cryptography usage | medium |
| Regex-based patterns (JS, Go, Ruby) | medium |
### IaC layer (deep)
| Finding | Default severity |
|---|---|
| S3 bucket with no encryption | high |
| S3 bucket with public ACL | critical |
| Security group open to 0.0.0.0/0 (unrestricted ingress) | high |
| IAM policy with wildcard action (`*`) | high |
| RDS instance with encryption disabled | high |
### License layer (deep)
| Finding | Default severity |
|---|---|
| GPL/AGPL copyleft dependency in commercial project | high |
| LGPL dependency (weaker copyleft) | medium |
| License field mismatch (package.json vs LICENSE file) | medium |
| No license file detected | medium |
| License classification (MIT, Apache-2.0, BSD, ISC) | info |
### Quality layer (deep)
| Finding | Default severity |
|---|---|
| Cyclomatic complexity > 15 | medium |
| Function longer than 100 lines | low |
| File larger than 500 lines | low |
| Type hint coverage below 50% | low |
| More than 5 TODO/FIXME comments | low |
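The quality thresholds above are all simple comparisons, which can be sketched as one check function. The metric field names and return shape are assumptions for illustration only.

```python
# Hypothetical sketch of the quality-layer thresholds in the table above.
# Metric names ("cyclomatic_complexity", etc.) are illustrative, not the
# scanner's real interface.
def quality_findings(metrics: dict) -> list[tuple[str, str]]:
    """Return (finding, severity) pairs for metrics that exceed a threshold."""
    findings = []
    if metrics["cyclomatic_complexity"] > 15:
        findings.append(("cyclomatic complexity > 15", "medium"))
    if metrics["function_lines"] > 100:
        findings.append(("function longer than 100 lines", "low"))
    if metrics["file_lines"] > 500:
        findings.append(("file larger than 500 lines", "low"))
    if metrics["type_hint_coverage"] < 0.5:
        findings.append(("type hint coverage below 50%", "low"))
    if metrics["todo_count"] > 5:
        findings.append(("more than 5 TODO/FIXME comments", "low"))
    return findings

# A file with high complexity, too many lines, and sparse type hints
# produces three findings; function length and TODO count pass.
results = quality_findings({"cyclomatic_complexity": 18, "function_lines": 40,
                            "file_lines": 620, "type_hint_coverage": 0.3,
                            "todo_count": 2})
print(len(results))  # 3
```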
## Severity in contracts
A contract gate specifies a minimum severity level. The gate is evaluated against all findings in its category that meet or exceed that severity.
Example: a gate with `"category": "security"` and `"severity": "high"` counts findings at severity `high` and `critical`. Findings at `medium`, `low`, or `info` are ignored by that gate.
The severity field in a gate acts as a lower bound. Use it to focus blocking gates on the issues that matter most to your team.
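The lower-bound behaviour can be sketched with an ordered severity list and a simple filter. The gate and finding shapes here mirror the example above but are otherwise assumptions, not the scanner's actual data model.

```python
# Severity levels in ascending order of urgency, so a higher index
# means a more urgent finding.
SEVERITY_ORDER = ["info", "low", "medium", "high", "critical"]

def meets_gate(finding: dict, gate: dict) -> bool:
    """True if the finding matches the gate's category and its severity
    meets or exceeds the gate's minimum severity (a lower bound)."""
    return (
        finding["category"] == gate["category"]
        and SEVERITY_ORDER.index(finding["severity"])
        >= SEVERITY_ORDER.index(gate["severity"])
    )

gate = {"category": "security", "severity": "high"}
findings = [
    {"category": "security", "severity": "critical"},  # counted
    {"category": "security", "severity": "medium"},    # below the bound
    {"category": "quality", "severity": "high"},       # wrong category
]
blocking = [f for f in findings if meets_gate(f, gate)]
print(len(blocking))  # 1
```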
## The learning loop and severity adjustment
When you submit feedback on a finding (thumbs up / thumbs down via the scan results UI), that signal is recorded in the `scanner-feedback` DynamoDB table. The learning loop aggregates feedback over 30 days and writes a `lessons-learned.md` document to S3. On subsequent deep scans, the scanner loads this document and adjusts finding confidence scores accordingly.
Confidence adjustments affect the reported severity indirectly. A finding with repeatedly negative feedback will have its confidence reduced. Findings with very low confidence are still reported but labelled as uncertain. Severity levels themselves are not changed by the learning loop; confidence is a separate field on the finding.
Confidence adjustment is an additive overlay. The underlying severity from the scan layer is preserved; only the confidence field changes.
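The overlay described above can be sketched as a function that shifts confidence while leaving severity untouched. The field names and the adjustment magnitude are assumptions for illustration.

```python
# Illustrative sketch of an additive confidence overlay. The severity
# assigned by the scan layer is preserved; only the confidence field
# changes, clamped to [0, 1]. Field names are hypothetical.
def apply_feedback_overlay(finding: dict, adjustment: float) -> dict:
    adjusted = dict(finding)
    adjusted["confidence"] = max(0.0, min(1.0, finding["confidence"] + adjustment))
    return adjusted

finding = {"rule": "jwt-hardcoded", "severity": "medium", "confidence": 0.9}
# Repeated negative feedback lowers confidence but never the severity.
after = apply_feedback_overlay(finding, -0.4)
print(after["severity"], round(after["confidence"], 2))  # medium 0.5
```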