PeerAI Trust Center

Transparent by design. Secure by default. Continuously verifiable.

This portal provides customers and partners with a transparent view of how PeerAI products are built, secured, and operated — releases, SBOMs, vulnerability remediation, compliance progress, and architecture posture.

Last verified May 1, 2026 · commit a390ee4
  • Open critical CVEs: 0
  • Open high CVEs: 0
  • Vulnerability scans: daily
  • SBOM components tracked: 2,672

Risk profile

  • Data access level: Customer-controlled (PeerAI never accesses customer data)
  • Impact level: Local (single-user install, no shared data plane)
  • Recovery time objective: N/A (installable client; no PeerAI-side production data plane)
  • Deployment model: Customer endpoint (Tauri desktop app + Python sidecar)

Compliance status

  • SOC 2 Type I (service org controls): In progress
  • ISO 27001 (information security management): Planned
  • GDPR (EU data protection alignment): Planned
  • CycloneDX SBOM (per-release software bill of materials): Attested

How we earn trust

Transparent by design

Every release publishes a CycloneDX SBOM, a vulnerability scan rollup, and signed checksums. Every claim links to its source.
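
As a concrete way to use those artifacts, a published checksum file can be verified locally. Below is a minimal sketch in Python; the "SHA256SUMS" file name and its "<hex digest>  <filename>" line format are assumptions, not the actual PeerAI artifact layout.

    # verify_checksums.py - minimal sketch; "SHA256SUMS" and its line
    # format ("<hex digest>  <filename>") are assumptions, not the
    # actual PeerAI artifact layout.
    import hashlib
    import sys

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(sums_path: str) -> int:
        failures = 0
        with open(sums_path) as f:
            for line in f:
                if not line.strip():
                    continue  # skip blank lines
                digest, name = line.split(maxsplit=1)
                name = name.strip()
                ok = sha256_of(name) == digest.lower()
                print(f"{'OK' if ok else 'FAILED'}  {name}")
                failures += 0 if ok else 1
        return failures  # nonzero exit if any file fails

    if __name__ == "__main__":
        sys.exit(verify(sys.argv[1] if len(sys.argv) > 1 else "SHA256SUMS"))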

Secure by default

Local-first execution, customer-controlled LLM and database, daily automated dependency scanning, pre-commit checks, and a documented release security gate.

Continuously verifiable

Scans run daily and on every release. Compliance status carries evidence links and last-verified dates. Trust shouldn't be an annual claim.

Operating model

Input is yours. Output is yours.
PeerAI is just the accelerator.

PeerAI is scaffolding for AI-native delivery — not your permanent stack. Once your modernization is built, the scaffolding comes down. The code, architecture, and systems are yours to keep, modify, and operate without us.

AI Security

AI agents work in your boundary, not ours.

BYO LLM provider, BYO database, local agent execution, sandboxed code paths, prompt-injection mitigations, structured tool guardrails.
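
To make "structured tool guardrails" concrete, here is a minimal sketch of the pattern in Python; the tool names, schemas, and registry are hypothetical illustrations, not PeerAI's actual implementation.

    # Minimal sketch of a structured tool guardrail: every tool call an
    # agent proposes is validated against an explicit allowlist and an
    # argument schema before anything executes. Names are hypothetical.
    from typing import Any, Callable

    ALLOWED_TOOLS: dict[str, dict[str, type]] = {
        "read_file": {"path": str},
        "run_tests": {"target": str},
    }

    def guarded_call(registry: dict[str, Callable[..., Any]],
                     name: str, args: dict[str, Any]) -> Any:
        """Validate an agent-proposed tool call before executing it."""
        if name not in ALLOWED_TOOLS:
            raise PermissionError(f"tool not allowlisted: {name}")
        schema = ALLOWED_TOOLS[name]
        if set(args) != set(schema):
            raise ValueError(f"unexpected arguments for {name}: {sorted(args)}")
        for key, expected in schema.items():
            if not isinstance(args[key], expected):
                raise TypeError(f"{name}.{key} must be {expected.__name__}")
        return registry[name](**args)

The point is the shape: agent-proposed calls pass through an allowlist and an argument check before anything runs inside the customer's boundary.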

Responsible AI

Customer data trains nothing. Outputs belong to you.

Customer-selected provider, no training on your prompts, human-in-the-loop controls, evaluation + observability, output ownership.

Recent releases

v0.75.0-alpha.4 (alpha) · 2026-04-25

Added

  • Code Bench v1 — Multi-model consensus evaluator for code quality, sibling to Doc Bench under Testing & QA → Evaluators
    • Project types: Spring Boot, FastAPI, React, Generic
    • Languages: Python, TypeScript, JavaScript, Java, Other
    • Per-file, classification-aware rubrics keyed by (language, project_type, classification)
    • 5-step wizard: Scan → Select → Classify → Configure → Results
    • Tri-state file tree, per-file dimension + model breakdown
    • Mongo-backed rubric authoring with in-app wizard editor
    • Markdown / PDF / CSV export
    • SSE streaming evaluation with cancel support (general shape sketched after this list)
  • Markview DMN/DRL viewers — Native rendering for DMN decision tables and Drools DRL files
  • Code Insights user scoping — Runs now scoped by user email and machine ID for multi-user installations
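
For the streaming item above, SSE evaluation streaming with cancellation generally takes the shape below. This is a minimal sketch written against FastAPI (one of the stacks the product targets), not the actual Code Bench implementation; the endpoint paths, payloads, and in-memory cancel registry are hypothetical.

    # Minimal sketch of SSE streaming with cancellation using FastAPI;
    # paths, payloads, and the cancel registry are hypothetical.
    import asyncio
    from fastapi import FastAPI, Request
    from fastapi.responses import StreamingResponse

    app = FastAPI()
    cancel_events: dict[str, asyncio.Event] = {}  # in-memory cancel registry

    @app.post("/evaluations/{run_id}/cancel")
    async def cancel_evaluation(run_id: str) -> dict[str, str]:
        cancel_events.setdefault(run_id, asyncio.Event()).set()
        return {"status": "cancelling"}

    @app.get("/evaluations/{run_id}/stream")
    async def stream_evaluation(run_id: str, request: Request) -> StreamingResponse:
        cancel = cancel_events.setdefault(run_id, asyncio.Event())

        async def events():
            for pct in range(0, 101, 5):  # stand-in for real evaluation steps
                if cancel.is_set() or await request.is_disconnected():
                    yield "event: cancelled\ndata: {}\n\n"
                    return
                yield f'data: {{"progress": {pct}}}\n\n'
                await asyncio.sleep(0.5)
            yield "event: done\ndata: {}\n\n"

        return StreamingResponse(events(), media_type="text/event-stream")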

Changed

  • Evaluators grouping — Doc Bench and Code Bench unified under a single Evaluators section
  • Code Bench results UX — Refined per-file results presentation, distinct navigation icon
  • LLM-only recommendations — Code Bench recommendations now strictly LLM-provided (no rule-based fallback)
  • Model registry — Tightened to latest model options across providers
  • Progressive markdown rendering — Improved rendering performance for large markdown documents

Fixed

  • License activation reliability with v2 safeguards
  • Code Bench evaluation progress now visible after page refresh (server-side fallback when SSE hook is cold)
  • Code Bench evaluation hook: cancel correctly resolves stuck evaluating state
  • Code Bench list page: runs sorted by recency, Space-key activation works for keyboard users
  • Code Insights spec tree fixes

Security

  • Vulnerable dependency upgrades across the stack
  • Tightened release vulnerability gates in CI
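
A release vulnerability gate of the kind described can be as small as a script that fails the build when blocking findings exist. A minimal sketch, assuming a JSON scan report with a "vulnerabilities" list whose entries carry "id" and "severity" fields; this is not the actual CI implementation.

    # Minimal sketch of a release vulnerability gate; the report format
    # ("vulnerabilities" list with "id"/"severity") is an assumption.
    import json
    import sys

    BLOCKING = {"critical", "high"}

    def gate(report_path: str) -> int:
        with open(report_path) as f:
            findings = json.load(f).get("vulnerabilities", [])
        blocking = [v for v in findings
                    if str(v.get("severity", "")).lower() in BLOCKING]
        for v in blocking:
            print(f"BLOCKING: {v.get('id', '<unknown>')} ({v['severity']})")
        print(f"{len(blocking)} blocking finding(s) out of {len(findings)} total")
        return 1 if blocking else 0  # nonzero exit fails the CI job

    if __name__ == "__main__":
        sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))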

v0.75.0-alpha.3 (alpha) · 2026-04-23

Security

  • ⬆️ Dependency CVE patches — upgraded packages across all lockfiles to resolve critical/high vulnerabilities reported by Aikido:
    • Python: cryptography 46.0.7, aiohttp 3.13.5, pillow 12.2.0, lxml 6.1.0, nltk 3.9.4, langchain-core ≥ 1.3.0 (in src-python, src-datamigration, peerai-sdk)
    • Rust: openssl / openssl-sys / aws-lc-rs (GHSA-hppc-g8h3-xhp3 and related)
    • JS: undici ≥ 8.1.0, flatted, basic-ftp, picomatch ≥ 2.3.2, next ≥ 16.2.3, vite ≥ 8.0.5, rollup ≥ 4.59.0, minimatch 10.2.4, dompurify ≥ 3.3.2 across root, portal, and marketplace lockfiles
  • 🔥 Replace GPL-3 html2text with MIT markdownify — removed GPL-3 html2text from src-python to eliminate proprietary-distribution license risk; product_backlog_import_service.py now uses markdownify
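
For reference, markdownify is a near drop-in for HTML-to-Markdown conversion. A minimal usage sketch; the input HTML is only an example, not product data.

    # Minimal usage sketch; the input HTML is an example, not product data.
    from markdownify import markdownify as md, ATX

    html = "<h2>Backlog item</h2><p>Import <strong>user stories</strong> from HTML.</p>"
    print(md(html, heading_style=ATX))
    # prints Markdown along the lines of:
    # ## Backlog item
    #
    # Import **user stories** from HTML.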

Fixed

  • 🐛 SQL injection via unsafe identifier concatenation — added _qi() helper in postgresql.py to safely double-quote PostgreSQL identifiers; fixed SQL export in StepResults.tsx to quote table and column names (Aikido SAST: AIK_python_B608, AIK_ts_sql_injection_template_literal); the quoting pattern is sketched after this list
  • 🐛 Vite 8 build failure — converted manualChunks from object to function form required by rolldown (Vite 8's bundler); chunk groupings are unchanged
  • 🐛 Doc-bench CI evaluator — moved get_tracked_llm() inside the rubric-evaluation phase guard so it is not called when rubric eval is skipped; guarded both llm.get_usage_summary() call sites against an unbound llm
  • ✅ Doc-bench test fixes — updated TestCompositeScore to unpack the 3-tuple from _compute_composite_score; updated ALL_DIMENSIONS count assertion to 10; patched get_tracked_llm / resolve_api_keys_for_agent in TestPhaseSkipIntegration so tests pass in CI without API keys configured
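
The identifier-quoting fix referenced above follows a standard pattern: double-quote the identifier and escape any embedded quotes, while values still flow through bind parameters. A minimal sketch; this is not the exact _qi() from postgresql.py.

    # Minimal sketch of safe PostgreSQL identifier quoting; not the
    # exact _qi() from postgresql.py.
    def _qi(identifier: str) -> str:
        """Double-quote a PostgreSQL identifier, escaping embedded quotes."""
        if "\x00" in identifier:
            raise ValueError("NUL byte in identifier")
        return '"' + identifier.replace('"', '""') + '"'

    # Identifiers are quoted; values still use bind parameters.
    table, column = _qi("step_results"), _qi("created_at")
    query = f"SELECT {column} FROM {table} WHERE id = %s"  # id bound by driver

Database drivers ship equivalents (for example, psycopg's sql.Identifier) when queries are composed driver-side.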

v0.75.0-alpha.2 (alpha) · 2026-04-21

Added

  • ✨ Azure Blob Storage support for Coder published file browsing (#490) — the Coder editor's Published source toggle now reads from Azure Blob publications in addition to GitHub, matching the universal publisher's multi-backend shape
  • ✨ Specs activity + activity-aware navigation (#488) — new Specs activity in the project lifecycle, a custom project picker, and contextual navigation that follows the current activity
  • ✨ Proactive OpenCode install guidance — Coder launch now shows install instructions up front when the binary isn't detected, instead of failing late

Fixed

  • 🐛 Backend test expectation — aligned non-GitHub published-source error handling with what the runtime actually returns
  • 🐛 OpenCode install command — runtime contract parity test + alignment so the install path matches what the harness invokes

Changed

  • 📝 CHANGELOG — restored the v0.75.0-alpha.1 entry that was accidentally removed by a main-branch sync; reference links now point at v0.75.0-alpha.1 → v0.75.0-alpha.2

Need confidential evidence?

Pen test reports, completed CAIQ / SIG questionnaires, and detailed CVE remediation logs are available to qualified customers and prospects under NDA. A gated document portal is in development; until then, the security team responds to requests within one business day.