AI Security
AI agents run inside your trust boundary, not ours.
PeerAI Studio is scaffolding for AI-native delivery — the AI runtime, the data, and the model provider are all selected and controlled by the customer. PeerAI never operates a data plane that holds your code, prompts, or model outputs. Input is yours. Output is yours.
Commitments
- Customer-controlled LLM
BYO API key for OpenAI / Anthropic / Google / Azure / AWS Bedrock / Cohere / OpenRouter / HuggingFace. Studio calls the provider directly from your endpoint; PeerAI never proxies or caches requests.
- Customer-controlled database
Studio connects to your DBs using credentials you provide. No PeerAI-side replica.
- Local agent execution
Agents run inside the Tauri sidecar on the customer endpoint. No PeerAI-hosted agent runtime.
- Structured tool guardrails
Tools are registered with strict input/output schemas. Outputs are validated before being returned to the agent loop.
- Sandboxed code execution
Code-execution paths run in subprocess isolation with timeouts and resource limits.
- Prompt-injection mitigations
System-prompt isolation, tool-use allowlists per agent, output validation, retrieval-source attribution.
- Secrets stay local
API keys are stored in the macOS Keychain (default) or an encrypted file at ~/.peerai/credentials.json. They are never included in PeerAI telemetry.
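The structured tool guardrails above can be sketched in a few lines. This is an illustrative pattern, not Studio's actual API: the `Tool`/`register`/`invoke` names, the schema shape, and the allowlisted-root example are all hypothetical.

```python
import pathlib
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    input_schema: dict   # field name -> expected Python type
    output_schema: dict
    fn: Callable[..., dict]

def _check(payload: dict, schema: dict, label: str) -> None:
    # Reject missing fields, wrong types, and unexpected fields.
    for key, typ in schema.items():
        if key not in payload:
            raise ValueError(f"{label}: missing field {key!r}")
        if not isinstance(payload[key], typ):
            raise TypeError(f"{label}: field {key!r} must be {typ.__name__}")
    extra = set(payload) - set(schema)
    if extra:
        raise ValueError(f"{label}: unexpected fields {sorted(extra)}")

REGISTRY: dict[str, Tool] = {}

def register(tool: Tool) -> None:
    REGISTRY[tool.name] = tool

def invoke(name: str, payload: dict) -> dict:
    tool = REGISTRY[name]                          # unknown tool -> KeyError
    _check(payload, tool.input_schema, "input")    # validate before the call
    result = tool.fn(**payload)
    _check(result, tool.output_schema, "output")   # validate before returning to the agent loop
    return result

# Example tool: file reads scoped to an allowlisted root (hypothetical path).
ROOT = pathlib.Path("/tmp/workspace")

def read_file(path: str) -> dict:
    resolved = (ROOT / path).resolve()
    if ROOT.resolve() not in resolved.parents and resolved != ROOT.resolve():
        raise PermissionError("path escapes the allowlisted root")
    return {"content": resolved.read_text()}

register(Tool("read_file", {"path": str}, {"content": str}, read_file))
```

Validating the output schema, not just the input, is what keeps a misbehaving tool from feeding arbitrary structures back into the agent loop.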
Architecture trust boundary
The dashed line marks the trust boundary. Code, prompts, model outputs, and database contents do not cross it unless the customer explicitly enables an optional outbound integration (Sentry crash reports or Arize evaluation traces).
Threat model — what we worry about
- Prompt injection from retrieved content
Mitigation: system-prompt isolation, tool-use allowlists per agent, retrieved content is tagged as untrusted and not concatenated with system instructions.
- Tool misuse by the model
Mitigation: every tool call is schema-validated, scoped (e.g., file system access restricted to user-selected paths), and the agent loop logs each invocation locally.
- Code-execution escape
Mitigation: subprocess isolation, time limits, no privileged execution; user remains the OS principal.
- Credential exfiltration via agent
Mitigation: secrets are read by the sidecar from Keychain/credentials file at request time and never placed in agent context windows.
- Supply-chain compromise of dependencies
Mitigation: daily grype + pip-audit + cargo-audit + bun pm audit; release security gate; CycloneDX SBOM per release.
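The subprocess-isolation mitigation above can be sketched as follows. This is a minimal POSIX-only illustration, not Studio's implementation; the function name, timeout, and memory limit are assumptions.

```python
import resource
import subprocess
import sys

def run_sandboxed(code: str, timeout_s: float = 5.0,
                  mem_bytes: int = 256 * 1024 * 1024) -> str:
    """Run untrusted Python in a child process with a wall-clock timeout
    and CPU/memory rlimits. The child runs as the same OS user, so the
    user remains the OS principal (no privileged execution)."""
    def limits() -> None:
        # Applied in the child only, before exec (POSIX).
        resource.setrlimit(resource.RLIMIT_CPU, (int(timeout_s), int(timeout_s)))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env/site
        capture_output=True, text=True,
        timeout=timeout_s,                   # hard wall-clock limit
        preexec_fn=limits,
    )
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.strip())
    return proc.stdout
```

A runaway script hits either the CPU rlimit (killed by the kernel) or the wall-clock `timeout` (killed by the parent), whichever fires first.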
Evidence
- Daily vulnerability scans
grype + pip-audit + cargo-audit + bun pm audit, run daily at 09:00 UTC, plus continuous Aikido scans on every lockfile push.
View rollup
- Per-release CycloneDX SBOM
2,672 components tracked across npm, PyPI, and cargo, with NTIA enrichment and license compliance checks.
View SBOMs
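For context, a CycloneDX SBOM records each component roughly like this. A minimal illustrative fragment, not taken from an actual PeerAI SBOM; the component shown is hypothetical:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "serde",
      "version": "1.0.210",
      "purl": "pkg:cargo/serde@1.0.210",
      "licenses": [{"license": {"id": "MIT"}}]
    }
  ]
}
```

The `purl` field is what lets scanners like grype match each component against vulnerability databases, and the `licenses` entries drive the license compliance checks.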