Severity levels

Every finding produced by a ticketyboo scan carries a severity level. Severity levels let you calibrate which findings your contract treats as blocking and which are informational. This page explains what each level means and how the scanner assigns them.

The five levels

Findings are assigned one of five severity levels in descending order of urgency:

| Level | Meaning | Typical use |
| --- | --- | --- |
| critical | An issue that poses immediate risk to security, data integrity, or compliance. Requires remediation before merge. | Hardcoded AWS secrets, SQL injection, unencrypted public S3 bucket |
| high | A serious issue with known exploitability or significant compliance exposure. Strong remediation recommendation. | Known CVE in a direct dependency, command injection, IAM wildcard policy |
| medium | A noteworthy concern that increases risk but is not immediately exploitable. Plan to address. | Unpinned dependency versions, weak cryptography usage, missing README |
| low | A minor issue or deviation from best practice. Low exploitation risk. Address during normal maintenance. | Missing type hints, moderate function complexity, numerous TODO comments |
| info | An observation with no inherent risk. Informational only. | Dependency inventory, license classification, file structure notes |

How the scanner assigns severity

Each scan layer assigns severity according to its own rules, documented below. The learning loop can adjust severity up or down based on accumulated human feedback, but the default rules are deterministic.

Governance layer

| Finding | Default severity |
| --- | --- |
| Missing README | medium |
| No CI workflow detected | medium |
| No test files found | medium |

Dependency layer (deep)

| Finding | Default severity |
| --- | --- |
| Critical CVE (CVSS >= 9.0) | critical |
| High CVE (CVSS 7.0-8.9) | high |
| Moderate CVE (CVSS 4.0-6.9) | medium |
| Low CVE (CVSS < 4.0) | low |
| Unpinned dependency version | medium |
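The CVSS bands above translate directly into a threshold mapping. A minimal sketch (the function name is illustrative, not part of the scanner's API):

```python
def cvss_to_severity(score: float) -> str:
    """Map a CVSS base score to the default severity bands documented above."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"
```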

Secret detection layer (deep)

| Finding | Default severity |
| --- | --- |
| AWS access key or secret key pattern | critical |
| Private key (RSA/EC/SSH) | critical |
| Database connection URL with credentials | critical |
| High-entropy token (Shannon > 4.5, length >= 16) | high |
| Generic API key or webhook secret pattern | high |
| Hardcoded JWT | medium |
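The high-entropy rule combines a Shannon entropy threshold with a minimum length. A sketch of that check, using the documented thresholds (the function names are illustrative):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def is_high_entropy_token(token: str) -> bool:
    """Apply the documented thresholds: entropy > 4.5, length >= 16."""
    return len(token) >= 16 and shannon_entropy(token) > 4.5
```

A repeated-character string scores near zero entropy and is never flagged, while a token drawing evenly from a large character set exceeds the threshold.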

SAST layer (deep)

| Finding | Default severity |
| --- | --- |
| Command injection (Python AST) | critical |
| SQL injection (Python AST) | critical |
| Insecure deserialization (Python AST) | high |
| Cross-site scripting (Python AST) | high |
| Path traversal (Python AST) | high |
| Weak cryptography usage | medium |
| Regex-based patterns (JS, Go, Ruby) | medium |

IaC layer (deep)

| Finding | Default severity |
| --- | --- |
| S3 bucket with no encryption | high |
| S3 bucket with public ACL | critical |
| Security group open to 0.0.0.0/0 (unrestricted ingress) | high |
| IAM policy with wildcard action (*) | high |
| RDS instance with encryption disabled | high |

License layer (deep)

| Finding | Default severity |
| --- | --- |
| GPL/AGPL copyleft dependency in commercial project | high |
| LGPL dependency (weaker copyleft) | medium |
| License field mismatch (package.json vs LICENSE file) | medium |
| No license file detected | medium |
| License classification (MIT, Apache-2.0, BSD, ISC) | info |

Quality layer (deep)

| Finding | Default severity |
| --- | --- |
| Cyclomatic complexity > 15 | medium |
| Function longer than 100 lines | low |
| File larger than 500 lines | low |
| Type hint coverage below 50% | low |
| More than 5 TODO/FIXME comments | low |

Severity in contracts

A contract gate specifies a minimum severity level. The gate is evaluated against all findings in its category that meet or exceed that severity.

Example: a gate with `"category": "security"` and `"severity": "high"` counts findings at severity `high` and `critical`. Findings at `medium`, `low`, or `info` are ignored by that gate.

The severity field in a gate acts as a lower bound. Use it to focus blocking gates on the issues that matter most to your team.
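The lower-bound behaviour can be sketched as a filter over findings. The field names here are illustrative, not the scanner's actual schema:

```python
# Five documented levels, least to most urgent.
SEVERITY_ORDER = ["info", "low", "medium", "high", "critical"]

def gate_findings(findings: list[dict], category: str, min_severity: str) -> list[dict]:
    """Return findings in `category` at or above `min_severity` (lower bound)."""
    threshold = SEVERITY_ORDER.index(min_severity)
    return [
        f for f in findings
        if f["category"] == category
        and SEVERITY_ORDER.index(f["severity"]) >= threshold
    ]
```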

The learning loop and severity adjustment

When you submit feedback on a finding (thumbs up / thumbs down via the scan results UI), that signal is recorded in the scanner-feedback DynamoDB table. The learning loop aggregates feedback over 30 days and writes a lessons-learned.md document to S3. On subsequent deep scans, the scanner loads this document and adjusts finding confidence scores accordingly.

Confidence adjustments affect the reported severity indirectly. A finding with repeatedly negative feedback will have its confidence reduced. Findings with very low confidence are still reported but labelled as uncertain. Severity levels themselves are not changed by the learning loop; confidence is a separate field on the finding.

Confidence adjustment is an additive overlay. The underlying severity from the scan layer is preserved; only the confidence field changes.
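One way to picture the overlay: the feedback signal moves only the confidence field, never the severity. In this sketch the field names and the 0.3 "uncertain" cutoff are assumptions for illustration, not documented values:

```python
def apply_confidence_overlay(finding: dict, adjustment: float) -> dict:
    """Apply a feedback-derived confidence adjustment; severity is untouched.
    Field names and the 0.3 uncertainty cutoff are assumed for illustration."""
    adjusted = dict(finding)
    adjusted["confidence"] = max(0.0, min(1.0, finding["confidence"] + adjustment))
    adjusted["uncertain"] = adjusted["confidence"] < 0.3  # assumed threshold
    return adjusted
```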

Next steps