Decision Intelligence Layer
Analysis that cites its evidence. No hallucinated CVEs. No unsupported recommendations.
AI-assisted security analysis has a well-understood failure mode: confident-sounding assertions that aren't supported by available evidence. In security, this failure mode has direct costs. An analyst acting on a hallucinated attack path spends time on a non-existent problem. An escalation based on a hallucinated severity assessment misallocates remediation resources. A compliance report citing hallucinated control evidence fails an audit.
SPNT's Decision Intelligence Layer is built around one architectural constraint: the platform's reasoning engine cannot produce analysis that isn't supported by data in the substrate. Every claim cites the substrate records it's based on. An analyst reviewing the output can trace any assertion back to the record it came from.
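The constraint can be pictured as a validation step at the edge of the reasoning engine: no claim leaves without at least one supporting record. The sketch below is illustrative only; the `Claim` shape and `validate_output` name are assumptions, not SPNT's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One assertion in a reasoning output, tied to its evidence."""
    text: str
    substrate_ids: list = field(default_factory=list)  # records the claim cites

def validate_output(claims):
    """Refuse any output containing a claim with no supporting substrate records."""
    unsupported = [c for c in claims if not c.substrate_ids]
    if unsupported:
        raise ValueError(f"{len(unsupported)} claim(s) lack substrate citations")
    return claims

claims = [Claim("Host db-01 exposes CVE-2024-0001", ["rec-5521", "rec-5530"])]
validate_output(claims)  # passes: every claim cites at least one record
```

A claim constructed without citations would raise before it ever reached an analyst.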
The five structured reasoning outputs
Operational Digest. Produced daily (or at a customer-configured time). A written summary of the most significant changes overnight: new critical findings, enrichment events that changed priority, control-health failures, and research results. Replaces the morning dashboard review.
Prioritization Output. A ranked list of findings with written rationales. Explains why each finding is at its current priority given the available evidence — which OSINT signals contributed, which telemetry events are relevant, what the verification status is.
Consequence Analysis. An assessment of what could happen if a specific vulnerability or pattern of findings were exploited. Estimates blast radius, identifies assets and identities within scope of the potential impact, and assesses which compensating controls are active.
Remediation Sequence. An ordered set of remediation steps that accounts for dependencies. If fixing finding A requires a service restart that would temporarily expose finding B, the sequence captures that relationship.
Confidence Assessment. An evaluation of how much trust to place in a specific finding given the quality and quantity of evidence. Used when a finding has conflicting signals or uncertain evidence.
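As a rough illustration of the second output, a single Prioritization Output entry might carry fields like these. The schema below is hypothetical, not SPNT's actual data model:

```python
from dataclasses import dataclass

@dataclass
class PrioritizedFinding:
    """One entry in a hypothetical Prioritization Output."""
    finding_id: str
    rank: int
    rationale: str            # written explanation of the ranking
    osint_signals: list       # contributing OSINT signal identifiers
    telemetry_events: list    # relevant telemetry record identifiers
    verification_status: str  # e.g. "verified" or "unverified"

entry = PrioritizedFinding(
    finding_id="f-1042",
    rank=1,
    rationale="Active exploitation reported; asset is internet-facing.",
    osint_signals=["osint-88"],
    telemetry_events=["tel-3301"],
    verification_status="verified",
)
```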
The autonomous research engine
For Enterprise and Sovereign tiers, an autonomous research engine runs continuously, investigating substrate patterns that warrant deeper analysis. It is given a tool set — read access to substrate records, lookups against OSINT signals, calls to the structured reasoning outputs — and a research objective. It iterates until it reaches a conclusion or a resource cap.
Five automatic trigger conditions launch investigations without human intervention:
- A cluster of new critical findings arriving in a short window.
- Multiple enrichment signals arriving for the same asset.
- A pattern of control-health failures suggesting a systemic problem.
- Correlated signals across detection, telemetry, and OSINT for the same asset.
- A shift in confidence scores across a finding cluster.
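The first trigger can be sketched as a sliding-window check over critical findings. The field names, window, and threshold below are illustrative assumptions, not platform defaults:

```python
from datetime import datetime, timedelta

def critical_cluster_triggered(findings, window_minutes=30, threshold=3):
    """Return True if `threshold` or more critical findings
    arrived within any `window_minutes`-long window."""
    criticals = sorted(f["seen_at"] for f in findings if f["severity"] == "critical")
    for i in range(len(criticals) - threshold + 1):
        if criticals[i + threshold - 1] - criticals[i] <= timedelta(minutes=window_minutes):
            return True
    return False

base = datetime(2025, 1, 6, 9, 0)
burst = [{"severity": "critical", "seen_at": base + timedelta(minutes=m)}
         for m in (0, 4, 9)]
critical_cluster_triggered(burst)  # True: three criticals inside 30 minutes
```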
How "grounded" actually works
The engine operates with a constrained tool set. It cannot browse the web, query arbitrary external APIs, or access information that hasn't been written to the substrate by one of the platform's normalised ingestion processes. This means:
- Every factual claim about your environment must derive from a substrate record.
- Every substrate record has an origin — a scan run, a telemetry event, an OSINT signal, or a previous reasoning output — and every record carries a timestamp and a confidence score.
- Each output includes the list of substrate identifiers that support each claim.
An analyst can open any claim, see the records it cites, and verify the records say what the engine says they say. If a claim is wrong, it is traceable — either the underlying data was wrong, or the engine misinterpreted it. Both are inspectable.
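That verification workflow amounts to resolving a claim's cited identifiers against the substrate. A minimal sketch, with the substrate stood in for by a dictionary (the shapes here are assumptions for illustration):

```python
def trace_claim(claim, substrate):
    """Resolve a claim's cited record IDs against the substrate and return
    the records, so an analyst can check they support the claim. Identifiers
    that resolve to nothing are reported rather than silently dropped."""
    records, missing = [], []
    for rid in claim["substrate_ids"]:
        record = substrate.get(rid)
        if record is not None:
            records.append(record)
        else:
            missing.append(rid)
    return {"records": records, "missing": missing}

substrate = {"rec-5521": {"id": "rec-5521", "origin": "scan-run-42"}}
claim = {"text": "Host db-01 is unpatched", "substrate_ids": ["rec-5521", "rec-9999"]}
trace_claim(claim, substrate)  # one resolved record, one missing identifier
```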
Resource caps and discipline
Research runs operate within explicit caps: maximum tool invocations per run, maximum cost per run, maximum substrate reads per run, maximum recursion depth for nested calls.
A run that cannot reach a conclusion within its resource budget stops and records a partial output flagged as such. It does not extend indefinitely into speculative territory. Analysts know when to treat a result as preliminary.
The caps are not primarily about cost control — they are a discipline against runaway speculation.
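A capped run of this kind can be sketched as a loop that stops with an explicitly partial result when the budget runs out. The planner and tool interfaces are hypothetical, and only the invocation cap is modelled here:

```python
def run_research(plan_next_step, tools, max_invocations=50):
    """Iterate until the planner concludes (returns None) or the
    invocation cap is hit; a capped-out run is flagged as partial."""
    evidence = []
    for _ in range(max_invocations):
        step = plan_next_step(evidence)
        if step is None:  # the planner reached a conclusion
            return {"status": "complete", "evidence": evidence}
        tool_name, args = step
        evidence.append(tools[tool_name](*args))
    return {"status": "partial", "evidence": evidence,
            "note": "resource budget exhausted; treat as preliminary"}

# A planner that never concludes exercises the cap:
stuck = run_research(lambda ev: ("read_substrate", ("rec-1",)),
                     {"read_substrate": lambda rid: {"id": rid}},
                     max_invocations=5)
# stuck["status"] is "partial": five reads were made, then the cap stopped it
```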
Which LLM providers
SPNT uses a multi-provider architecture. The standard configuration routes inference through leading commercial providers with a fallback path for availability. The Sovereign tier enforces EU-hosted inference only, and includes a self-hosted large-language-model option that runs entirely on customer-controlled infrastructure for organisations with strict data-residency obligations.
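Fallback routing with an EU-only constraint can be sketched in a few lines. The provider shape and error convention below are assumptions for illustration, not SPNT's routing implementation:

```python
def route_inference(prompt, providers, tier="commercial"):
    """Try providers in order, honouring the Sovereign tier's EU-hosted-only
    rule; a RuntimeError means 'unavailable', so fall through to the next."""
    for provider in providers:
        if tier == "sovereign" and not provider["eu_hosted"]:
            continue  # Sovereign tier enforces EU-hosted inference only
        try:
            return provider["call"](prompt)
        except RuntimeError:
            continue  # provider unavailable; take the fallback path
    raise RuntimeError("no eligible inference provider available")

providers = [
    {"eu_hosted": False, "call": lambda p: "us:" + p},
    {"eu_hosted": True,  "call": lambda p: "eu:" + p},
]
route_inference("summarise", providers)                    # first provider wins
route_inference("summarise", providers, tier="sovereign")  # skips non-EU provider
```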
Tier availability
| Capability | Free | Commercial | Enterprise | Sovereign |
|---|---|---|---|---|
| All five structured reasoning outputs | — | Full | Full | Full |
| Autonomous research engine | — | — | Full | Full |
| Preferred-provider override | — | — | Full | Full |
| EU-hosted inference enforcement | — | — | — | Full |
| Self-hosted LLM option | — | — | — | Full |
See reasoning in action
A demonstration of a Prioritization Output or Consequence Analysis — with citations to the substrate records that support each claim.