Layer Details
How the four-layer pipeline operates: agents, source management, scoring, staleness detection, and the distributed compute model.
Layer 1: Stochastic Discovery (AI — Always Researching)
The AI layer is a continuous research engine. It never stops scanning the outside world, the project's code, and the evolving standards landscape. It produces proposals — never artifacts.
What the agents do:
- intelligence-sync: Sweeps 134+ curated sources weekly, classifies findings as NEW / VARIANT / DUPLICATE
- 4 focused auditors (authorization, arithmetic, temporal, state): Review code against domain-specific checklists, cross-reference external research
- standards-sync: Monitors OWASP, NIST, FIPS updates and Canton/Splice/DAML SDK releases for security-relevant changes
What they cannot do: Create vector entries, write tests, modify semgrep rules, or merge anything. They propose.
intelligence-sync Agent
The backbone of Layer 1. This agent has two jobs, not one:
- Sweep known sources — check every source in the catalog for new findings
- Discover new sources — actively search for credible sources not yet in the catalog
Most security tools only do job 1. Job 2 is what makes the distributed model work — every project running this agent is doing original research, expanding the catalog over time.
Job 1: Sweep known sources
| Source type | Action | Example |
|---|---|---|
| `github_advisories` | WebFetch URL, scan all entries | Canton security advisories |
| `github_releases` | WebFetch URL, read 3 most recent | DAML SDK release notes |
| `github_issues` | WebSearch site-scoped queries | `site:github.com/digital-asset/daml-finance security` |
| `page` | WebFetch URL, extract findings | Canton security hardening guide |
| `search` | WebSearch each query in `search_terms` | "Canton authorization issue" |
Schedule: Weekly for high-score sources. Monthly for medium. Quarterly for low.
Job 2: Discover new sources
After sweeping known sources, the agent runs open-ended discovery searches to find sources that aren't in the catalog yet.
Discovery is not a static list of queries. The agent constructs targeted search strategies based on three inputs:
- Gaps in current coverage — what domains have few sources? What tiers are thin?
- Recent findings from Job 1 — a new exploit pattern may lead to a new class of source
- User-provided research prompts — users can direct the agent toward specific areas
Search strategy construction:
| Input signal | Query strategy | Example |
|---|---|---|
| Domain gap: temporal has 8 sources, arithmetic has 22 | Target temporal domain | "smart contract" deadline bypass OR time manipulation 2026 |
| Tier gap: Tier 6 (regulatory) has 5 sources | Target regulatory bodies | digital asset regulation security guidance 2026 -SEC -FINRA |
| Recent finding: new Canton topology attack | Follow the thread | Canton topology delegation security OR vulnerability |
| Sweep result: audit firm X cited in 3 findings | Investigate the firm | "Firm X" audit reports blockchain smart contract |
| User prompt: "look into ZK proof security" | User-directed research | zero knowledge proof vulnerability DAML OR Canton OR blockchain |
| No DAML-specific results | Broaden to analogous patterns | smart contract state machine vulnerability Solidity OR Move |
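As a sketch, the three input signals might be combined into query strings along these lines. The function name, the thin-domain threshold, and the query templates are illustrative assumptions, not the agent's actual implementation:

```python
# Hypothetical sketch: turn coverage-gap signals into discovery queries.
# Thresholds and query templates are illustrative, not part of the framework.

def discovery_queries(source_counts, recent_terms, user_prompts, thin_threshold=10):
    """Build open-ended search queries from the three input signals."""
    queries = []
    # 1. Domain gaps: target any domain with few sources, thinnest first
    for domain, count in sorted(source_counts.items(), key=lambda kv: kv[1]):
        if count < thin_threshold:
            queries.append(f'"smart contract" {domain} vulnerability')
    # 2. Recent findings from Job 1: follow the thread
    queries.extend(f"Canton {term} security OR vulnerability" for term in recent_terms)
    # 3. User-directed prompts pass through as-is
    queries.extend(user_prompts)
    return queries

qs = discovery_queries(
    source_counts={"temporal": 8, "arithmetic": 22},
    recent_terms=["topology delegation"],
    user_prompts=["zero knowledge proof vulnerability DAML"],
)
# qs contains one gap-targeted, one thread-following, and one user-directed query
```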
User-directed discovery via bastion.yaml:
```yaml
intel:
  discovery_prompts:
    - "Research Canton Network validator security and slashing conditions"
    - "Find audit firms that have reviewed tokenized securities platforms"
    - "Look for academic papers on distributed ledger time synchronization attacks"
    - "Search for regulatory guidance on digital asset custody security in EU and US"
```
Source evaluation criteria:
| Criterion | Question | Disqualifying answer |
|---|---|---|
| Relevance | Does this cover DAML, Canton, or transferable patterns? | No connection to smart contracts or DLT |
| Credibility | Who publishes this? Institutional, firm, or individual? | Anonymous, no track record |
| Freshness | When was the last publication? | Nothing in 12+ months |
| Depth | Technical analysis or surface-level news? | Only headlines, no technical content |
| Uniqueness | Does this cover something not already in the catalog? | Fully redundant with existing source |
| Consistency | Does this publish regularly or was it a one-off? | Single blog post, no ongoing coverage |
If the source passes, the agent adds it to sources-local.yaml with an initial score, flags it with propose_upstream: true if broadly useful, and records the discovery trail.
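A minimal sketch of the gate these criteria imply, assuming boolean-ish fields on a candidate record. The field names are hypothetical, not the framework's schema:

```python
# Illustrative six-criterion gate; all field names are assumptions.
from datetime import date, timedelta

def passes_evaluation(src, today=date(2026, 3, 21)):
    """Return True only if no disqualifying answer applies."""
    checks = [
        src["relevant_to_dlt"],                                  # Relevance
        src["publisher"] != "anonymous",                         # Credibility
        today - src["last_publication"] < timedelta(days=365),   # Freshness
        src["has_technical_content"],                            # Depth
        not src["redundant_with_catalog"],                       # Uniqueness
        src["publishes_regularly"],                              # Consistency
    ]
    return all(checks)

candidate = {
    "relevant_to_dlt": True,
    "publisher": "Firm X",
    "last_publication": date(2026, 2, 20),
    "has_technical_content": True,
    "redundant_with_catalog": False,
    "publishes_regularly": True,
}
# candidate passes all six checks
```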
Source tiers:
| Tier | Category | Examples | Sweep frequency |
|---|---|---|---|
| 1 | DAML/Canton official | SDK advisories, Canton docs, release notes | Weekly |
| 2 | Audit firms | Quantstamp, Trail of Bits, OpenZeppelin, Halborn | Monthly |
| 3 | Audit platforms | Solodit, Code4rena, Sherlock, Immunefi | Monthly |
| 4 | Exploit trackers | Rekt.news, DeFiHackLabs, SlowMist | Monthly |
| 5 | Vulnerability registries | NVD, GitHub Advisory, CWE, OWASP | Monthly |
| 6 | Regulatory | SEC, FINRA, BIS, FSB, OCC | Quarterly |
| 7 | Academic | arXiv, Ethereum security docs | Quarterly |
New tiers can be proposed by any project (see PROPOSALS.md).
How it self-improves:
- Each sweep reads `security/results/latest.json` to know what's already covered
- Content-stale and unreachable sources are auto-degraded (see Source Staleness)
- Discovery queries target gaps — if the vector database is thin in a domain, searches focus there
- The `sync-status.yaml` log prevents redundant sweeps
- Variant detection improves as the vector database grows
Source Scoring Rubric
Each source has a multi-dimensional quality score.
Score dimensions (each 1-5):
| Dimension | What it measures | 5 (highest) | 1 (lowest) |
|---|---|---|---|
| Recency | How recently the source published relevant content | Published this week | No updates in 12+ months |
| Community | Size and activity of the contributing community | Large active community (100+ contributors) | Single author, no community |
| Depth | Technical depth and specificity to DAML/Canton | DAML/Canton-specific findings with PoC | Generic security advice |
| Acceptance | How widely cited/trusted in the security community | Industry standard (OWASP, NVD) | Unknown, unverified |
| Authority | Official status or institutional backing | Official vendor (Digital Asset) | Blog post, no affiliation |
Composite score:
composite = (recency x 0.25) + (community x 0.15) + (depth x 0.25) + (acceptance x 0.15) + (authority x 0.20)
| Composite score | Sweep frequency | Action |
|---|---|---|
| 4.0 - 5.0 | Weekly | Always sweep first |
| 3.0 - 3.9 | Monthly | Sweep on regular cadence |
| 2.0 - 2.9 | Quarterly | Sweep during deep reviews |
| 1.0 - 1.9 | On hold | Flag for review |
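The formula and cadence table can be expressed directly. This is a sketch, not the framework's actual scoring code:

```python
# Composite scoring formula and cadence mapping from the rubric above.
WEIGHTS = {"recency": 0.25, "community": 0.15, "depth": 0.25,
           "acceptance": 0.15, "authority": 0.20}

def composite(scores):
    """Weighted sum of the five dimensions, rounded to 2 places."""
    return round(sum(scores[dim] * w for dim, w in WEIGHTS.items()), 2)

def sweep_frequency(c):
    """Map a composite score to its sweep cadence."""
    if c >= 4.0:
        return "weekly"
    if c >= 3.0:
        return "monthly"
    if c >= 2.0:
        return "quarterly"
    return "on hold"

c = composite({"recency": 4, "community": 3, "depth": 5,
               "acceptance": 4, "authority": 4})
# c == 4.1, so this source is swept weekly
```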
Source schema extension:
```yaml
- id: quantstamp-audits
  name: "Quantstamp Audit Reports"
  url: "https://quantstamp.com/audits"
  type: search
  domains: [all]
  refresh: monthly
  why: "Professional audit firm with Canton/DAML experience."
  search_terms: ["Canton audit", "DAML smart contract security"]
  score:
    recency: 4
    community: 3
    depth: 5
    acceptance: 4
    authority: 4
    composite: 4.10
    last_scored: "2026-03-21"
  lifecycle:
    status: active
    added: "2026-01-15"
    last_successful_fetch: "2026-03-14"
    last_new_content: "2026-03-14"
    consecutive_failures: 0
    consecutive_empty_sweeps: 0
    findings_contributed: 3
    last_finding: "2026-02-20"
```
Staleness Detection
A source can be stale in two ways:
- Unreachable — 404, timeout, or error
- Content-stale — URL works but no new content for longer than expected
"Expected" depends on the source's refresh cadence:
| Source refresh | Content-stale after |
|---|---|
weekly | 6 weeks with no new content |
monthly | 4 months with no new content |
quarterly | 9 months with no new content |
annual | 18 months with no new content |
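The thresholds above translate into a small check. This is a sketch; approximating a month as 30 days is an assumption:

```python
# Content-staleness check mirroring the threshold table above.
# Months are approximated as 30 days, which is an assumption.
from datetime import date, timedelta

STALE_AFTER = {
    "weekly": timedelta(weeks=6),
    "monthly": timedelta(days=30 * 4),
    "quarterly": timedelta(days=30 * 9),
    "annual": timedelta(days=30 * 18),
}

def is_content_stale(refresh, last_new_content, today):
    """True when no new content has appeared for longer than expected."""
    return today - last_new_content > STALE_AFTER[refresh]

# A monthly source that published a week ago is not stale
print(is_content_stale("monthly", date(2026, 3, 14), date(2026, 3, 21)))  # False
```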
Source lifecycle states: active → degraded → stale.
Degradation triggers (automatic):
| Trigger | From | To |
|---|---|---|
| 2 consecutive fetch failures | active | degraded (unreachable) |
| No new content past refresh threshold | active | degraded (content-stale) |
| 3 more failures after degradation | degraded | stale |
| 2x threshold with no content | degraded | stale |
Recovery triggers (automatic):
| Trigger | From | To |
|---|---|---|
| Successful fetch with NEW content | degraded | active |
| Score re-evaluated with improved metrics | degraded | active |
Removal requires Layer 2 human approval — the agent proposes, the human decides.
Agent sweep protocol:
- Attempt fetch — record success/failure
- If failure: increment `consecutive_failures`, check degradation threshold
- If success: reset `consecutive_failures`, compare content to last sweep
- If new content: reset `consecutive_empty_sweeps`, update `last_new_content`, process findings
- If no new content: increment `consecutive_empty_sweeps`, check staleness threshold
- Auto-downgrade `recency` score when stale (decrement by 1 per missed threshold, floor at 1)
- Auto-upgrade `recency` score when content resumes (4 on first new content, 5 after 2 consecutive)
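The protocol's bookkeeping can be sketched as one update function over the lifecycle fields shown in the schema example. The thresholds come from the degradation and recovery tables; the function itself is illustrative and covers only the unreachable path:

```python
# Hedged sketch of the sweep protocol's lifecycle bookkeeping (unreachable
# path only; content-staleness degradation is omitted for brevity).
# Thresholds: 2 consecutive failures -> degraded, 3 more (5 total) -> stale.

def record_sweep(lc, fetched_ok, new_content, today):
    """Update a lifecycle dict in place after one sweep attempt."""
    if not fetched_ok:
        lc["consecutive_failures"] += 1
        if lc["status"] == "degraded" and lc["consecutive_failures"] >= 5:
            lc["status"] = "stale"
        elif lc["status"] == "active" and lc["consecutive_failures"] >= 2:
            lc["status"] = "degraded"
        return lc
    lc["consecutive_failures"] = 0
    lc["last_successful_fetch"] = today
    if new_content:
        lc["consecutive_empty_sweeps"] = 0
        lc["last_new_content"] = today
        if lc["status"] == "degraded":
            lc["status"] = "active"  # recovery on NEW content
    else:
        lc["consecutive_empty_sweeps"] += 1
    return lc

lc = {"status": "active", "consecutive_failures": 0, "consecutive_empty_sweeps": 0}
record_sweep(lc, fetched_ok=False, new_content=False, today="2026-03-21")
record_sweep(lc, fetched_ok=False, new_content=False, today="2026-03-28")
# after two consecutive failures, lc["status"] is "degraded"
```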
4 Focused Auditor Agents
Each auditor owns one security domain exclusively.
How they self-improve:
- Each reads its own domain vector file before reviewing — checklist grows as vectors accumulate
- Results from Layer 4 tell the auditor which vectors are COVERED vs MISSING — it focuses on gaps
- External research is domain-filtered
- Cross-domain findings are routed to the correct auditor (root cause determines domain)
Domain ownership:
| Auditor | Owns | Checklist focus |
|---|---|---|
| authorization-auditor | security/vectors/authorization.yaml | Controllers, signatories, observers, multi-party auth, Canton topology |
| arithmetic-auditor | security/vectors/arithmetic.yaml | Division guards, integer vs decimal, overflow, precision, rounding |
| temporal-auditor | security/vectors/temporal.yaml | Deadlines, expiry, staleness, sequencer time drift |
| state-auditor | security/vectors/state.yaml | Lifecycle transitions, archive, stale references, state machines |
New Auditor Domain Discovery
The 4 domains are not fixed. When 3+ vectors don't fit existing domains, the intelligence-sync agent proposes a new auditor.
Detection signals:
| Signal | Example |
|---|---|
| Vectors accumulating with `domain: other` | 5+ vectors about privacy/divulgence |
| Agents flagging "out of scope" repeatedly | Multiple agents note crypto signature issues |
| New standards category emerges | EU DORA regulation |
| User prompts targeting uncovered area | Multiple projects ask about "Canton privacy" |
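A sketch of the 3+ orphan-vector threshold. It assumes each orphan vector carries a suggested-domain tag for grouping; that field, like the record shape, is hypothetical:

```python
# Illustrative new-domain detection: propose an auditor when 3+ orphan
# vectors cluster around the same suggested domain. The "domain" field on
# each record is a hypothetical grouping tag, not the framework's schema.

KNOWN_DOMAINS = {"authorization", "arithmetic", "temporal", "state"}

def should_propose_domain(vectors, threshold=3):
    """Group orphan vectors by suggested domain; return groups of 3+."""
    orphans = {}
    for v in vectors:
        if v["domain"] not in KNOWN_DOMAINS:
            orphans.setdefault(v["domain"], []).append(v["id"])
    return {d: ids for d, ids in orphans.items() if len(ids) >= threshold}

proposals = should_propose_domain([
    {"id": "AV-052", "domain": "privacy"},
    {"id": "AV-058", "domain": "privacy"},
    {"id": "AV-061", "domain": "privacy"},
    {"id": "AV-070", "domain": "arithmetic"},
])
# proposals == {"privacy": ["AV-052", "AV-058", "AV-061"]}
```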
New auditor proposal includes:
```yaml
proposed_domain:
  name: privacy
  description: "Data visibility, divulgence, GDPR compliance, observer leakage"
  evidence:
    orphan_vectors: ["AV-052", "AV-058", "AV-061"]
    out_of_scope_flags: 4
    user_prompts_related: 2
  proposed_checklist:
    - "Observer set is minimal"
    - "Divulgence paths are documented and intentional"
    - "PII is never stored in contract payloads"
    - "Archive does not leak historical data"
```
Potential future domains:
| Domain | Would cover | Trigger condition |
|---|---|---|
| Privacy | Divulgence, observer leakage, GDPR | EU regulation + Canton privacy features |
| Compliance | KYC, AML, sanctions | Regulated financial instrument projects |
| Integration | API boundaries, oracle trust | Canton interop and Splice patterns |
| Cryptographic | Signatures, hashes, key management | Post-quantum crypto migration |
standards-sync Agent
Monitors regulatory and platform updates.
What it watches:
| Category | Sources |
|---|---|
| Security standards | OWASP Top 10, NIST 800-53, FIPS 140-3 |
| Platform releases | Canton releases + advisories, Splice releases |
| SDK changes | DAML SDK releases, deprecations |
| Regulatory | SEC, FINRA digital asset guidance |
How it self-improves:
- Reads `standards/sync-status.yaml` to focus on deltas
- Diffs each Canton/Splice release against the previous
- OWASP/NIST mapping coverage percentages reveal gaps
- Deprecation tracking flags security-impacting tool changes
- Breaking changes flagged at higher priority for Layer 2 review
Distributed Compute Model
The framework repository runs nothing. All compute happens at the project level.
| Aspect | How it scales |
|---|---|
| Token cost | Distributed across all projects |
| Coverage breadth | N projects = N x chance of finding new sources/vectors |
| Domain diversity | Custody projects find custody sources, exchange projects find trading sources |
| Validation depth | A vector independently confirmed by multiple projects is higher confidence |
| Catalog growth | 10 projects x 1 new source/quarter = 40 new sources/year |
Proactive Upstream Nudging
Community contribution cannot depend on users proactively remembering to share findings upstream. In practice, teams are focused on their own deliverables and may not recognize when a local discovery has broader value. The system must be self-aware about novelty and actively surface contribution opportunities at the right moment.
| What the agent checks | Novelty signal | Nudge |
|---|---|---|
| New vector not in core | Novel pattern | "Submit to the core framework?" |
| New source scored above 3.5 | High-quality source | "Other DAML projects would benefit" |
| New semgrep rule not in core | Novel detection pattern | "Share it?" |
| 3+ vectors tagged `domain: other` | Potential new domain | "Propose a new audit category?" |
| Standards mapping gap filled locally | Control the framework lacks | "Share your mapping?" |
Pending proposals queue (security/proposals/pending/queue.yaml) persists nudges across sessions so discoveries don't get lost.
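A sketch of the novelty check that would populate that queue. All names and the queue entry shape are assumptions:

```python
# Illustrative novelty check behind upstream nudging; field names and the
# queue entry shape are assumptions, not the framework's actual schema.

def pending_nudges(local_vectors, core_vectors, local_sources, core_source_ids):
    """Collect contribution opportunities to persist across sessions."""
    nudges = []
    for vid in local_vectors:
        if vid not in core_vectors:
            nudges.append({"kind": "vector", "id": vid,
                           "prompt": "Submit to the core framework?"})
    for src in local_sources:
        if src["id"] not in core_source_ids and src["score"] > 3.5:
            nudges.append({"kind": "source", "id": src["id"],
                           "prompt": "Other DAML projects would benefit"})
    return nudges

queue = pending_nudges(
    local_vectors=["AV-101"], core_vectors=[],
    local_sources=[{"id": "quantstamp-audits", "score": 4.1}],
    core_source_ids=set(),
)
# queue holds one vector nudge and one source nudge
```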
Gamification and Rewards (Future)
| Mechanism | Incentive |
|---|---|
| Contributor badge | Recognition in the ecosystem |
| Leaderboard | Competitive motivation |
| Early access | Beta channel for top contributors |
| Governance weight | Sustained contributors nominated to security board |
| Severity multiplier | CRITICAL = 5 points, LOW = 1 point |
| Discovery bonus | Source that produces 3+ accepted vectors |
Layer 2: Human Validation (Trust Layer)
Every proposal stops here. A human reviews and makes an explicit accept/reject/revise decision.
At local project scope: The developer sees a structured proposal and decides whether it's valid for their codebase.
At core framework scope: The governance committee applies a higher bar — does this apply to any DAML/Canton project? See GOVERNANCE.md for the full review process.
Layer 3: Automated Artifact Generation (Post-Human AI)
Once a human accepts, /integrate-vector generates everything needed — but only because a human said "yes."
| Artifact | Location | Purpose |
|---|---|---|
| Vector YAML entry | security/vectors/<domain>.yaml | Permanent record of the threat |
| Semgrep rule | semgrep/daml-security.yaml | Static pattern detection |
| DAML test skeleton | security/tests/Test<Domain>.daml | Executable validation |
| Index update | security/vectors/_index.yaml | Tracking and coverage |
| Verification run | security/results/latest.json | Confirm artifacts work |
Layer 4: Deterministic Execution (Runs Forever)
The artifacts from Layer 3 run automatically on every commit, CI build, and verification pass.
What runs automatically:
| Hook | When | What it checks |
|---|---|---|
| Pre-commit | Every git commit | Semgrep scan against all accumulated rules |
| CI pipeline | Every push/PR | dpm test runs all DAML tests |
| `make bastion-verify` | On demand or CI | Full check: semgrep + tests + coverage + compliance |
| Scheduled | Weekly/monthly | Coverage report, staleness detection |
The feedback loop: Layer 4 results feed back to Layer 1. Agents on their next pass see which vectors are COVERED vs MISSING, which tests passed vs failed, and where the gaps are.
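The feedback read might look like this sketch. The JSON shape shown is an assumption based on the COVERED/MISSING terminology above, not the actual `latest.json` schema:

```python
# Sketch of the Layer 4 -> Layer 1 feedback read. The results shape here
# is an assumption; only the COVERED/MISSING terminology comes from the docs.
import json

def coverage_gaps(results_json):
    """Return the IDs of vectors that Layer 4 reported as MISSING."""
    results = json.loads(results_json)
    return sorted(v["id"] for v in results["vectors"] if v["status"] == "MISSING")

sample = json.dumps({"vectors": [
    {"id": "AV-001", "status": "COVERED"},
    {"id": "AV-017", "status": "MISSING"},
]})
print(coverage_gaps(sample))  # ['AV-017']
```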