Data Privacy
Input is yours. Output is yours. PeerAI is just the accelerator.
PeerAI's privacy posture is shaped by the operating model: Studio is scaffolding, not your permanent stack. It runs on the customer endpoint; the data plane is the customer's. PeerAI's role in the data flow is operational (release hosting, optional crash reporting) — not custodial. The artifacts produced — code, ADRs, architecture, migration scripts — are yours from the moment they're generated.
What PeerAI does and doesn't process
- Does not process customer code, prompts, or model outputs
Studio runs locally. Prompts go directly to the customer-selected LLM provider; outputs return to the customer endpoint. PeerAI does not see them.
- Does not process customer database content
Studio connects to customer databases using customer-supplied credentials. Queries and results stay between Studio and the database.
- Does process license / activation data
License keys, activation events, and entitlement state are processed by PeerAI's licensing service to enforce product licensing. The only PII involved is an organisation email address.
- Optionally processes crash telemetry
If Sentry is enabled (it is off by default in privacy-conscious deployments), error stacks and metadata are processed by Sentry under their DPA. Customers can toggle this in Settings.
- Optionally processes LLM evaluation traces
If Arize is enabled, OpenTelemetry traces of LLM calls go to a customer-controlled Phoenix endpoint. PeerAI does not receive these traces.
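Both optional integrations above are opt-in toggles. A minimal sketch of how such toggles might be resolved is below; the `STUDIO_CRASH_REPORTING` variable name is hypothetical (Studio's actual setting key may differ), while `OTEL_EXPORTER_OTLP_ENDPOINT` is the standard OpenTelemetry variable a customer would point at their own Phoenix deployment.

```python
import os

def telemetry_settings() -> dict:
    """Resolve the optional telemetry toggles from the environment.

    Names are illustrative, not Studio's actual configuration keys.
    Both integrations are opt-in: nothing is sent unless enabled.
    """
    return {
        # Crash reporting (Sentry): off unless the customer opts in.
        "crash_reporting": os.getenv("STUDIO_CRASH_REPORTING", "off").lower()
        in ("on", "true", "1"),
        # LLM trace export: standard OTLP endpoint variable, pointed at a
        # customer-controlled Phoenix instance (empty string = disabled).
        "otlp_endpoint": os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", ""),
    }
```

With neither variable set, both toggles resolve to disabled, matching the off-by-default posture described above.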
Regulatory posture
- GDPR
The local-first execution model minimises personal-data flow to PeerAI. A formal DPIA is planned once the SOC 2 Type I attestation closes.
- DPA
A standard DPA is available under NDA on request. Its scope is limited to PeerAI's operational subprocessors; the LLM provider's DPA is between the customer and the provider.
- Subprocessors
The list is short: GitHub, Google Cloud Storage, and Vercel, plus the optional Sentry and Arize integrations.