Operating Model
Input is yours. Output is yours. PeerAI is just the accelerator.
PeerAI is not part of your permanent stack. It is scaffolding for development, modernization, and delivery — accelerating the journey to modern architectures. Once the solution is built, the scaffolding comes down. You keep the code, the architecture, and the systems. You always did.
- Input: Existing codebases, architectures, and systems that need modernization. Your IP, your runtime.
- Scaffolding: PeerAI is removed once the solution is built. The customer keeps everything.
- Output: Customer IP — code, architecture, systems — fully owned and operated by you, with or without PeerAI.
Four commitments that follow from the model
Runs inside the customer tenant
No PeerAI-hosted production data plane. Studio is a customer-installed application; sidecars run as local processes on the customer endpoint. Marketplace and licensing are PeerAI-operated, but they handle operational data only — never customer code, prompts, or model outputs.
LLM agnostic
BYO API key for any major provider — OpenAI, Anthropic, Google, Azure, AWS Bedrock, Cohere, OpenRouter, HuggingFace. The customer contracts directly with the provider. Switching providers is a settings change, not a migration.
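The "settings change, not a migration" claim can be made concrete with a minimal sketch. The provider table, setting names, and `resolve_provider` helper below are illustrative assumptions, not PeerAI's actual configuration schema; the point is that the customer's own key is read from their environment and the provider choice is a single field.

```python
import os

# Hypothetical settings table illustrating the BYO-key model. Endpoints and
# environment-variable names are illustrative, not PeerAI's actual schema.
PROVIDERS = {
    "openai":    {"base_url": "https://api.openai.com/v1",    "key_env": "OPENAI_API_KEY"},
    "anthropic": {"base_url": "https://api.anthropic.com/v1", "key_env": "ANTHROPIC_API_KEY"},
    "cohere":    {"base_url": "https://api.cohere.com/v2",    "key_env": "COHERE_API_KEY"},
}

def resolve_provider(settings: dict) -> dict:
    """Turn a one-field provider choice into connection details.

    The customer contracts directly with the provider: only the key named
    in key_env is read, from the customer's own environment, and it never
    leaves the local process.
    """
    p = PROVIDERS[settings["provider"]]
    return {
        "base_url": p["base_url"],
        "api_key": os.environ.get(p["key_env"], ""),  # customer's own key
    }

# Switching providers is exactly one settings change:
conn = resolve_provider({"provider": "anthropic"})
```

Moving from Anthropic to OpenAI here means editing the one `"provider"` field; nothing else in the stack changes.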
No data exfiltration
Customer code, prompts, queries, and model outputs do not leave the customer environment unless the customer explicitly enables an optional outbound integration (Sentry crash reports, Arize traces) — and even those go to customer-controlled endpoints.
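The default-off rule above can be sketched in a few lines. The function and setting names are hypothetical, not PeerAI's actual code; the property being illustrated is that outbound telemetry requires both an explicit enable and a customer-supplied endpoint, so an absent or empty configuration means no egress.

```python
def outbound_allowed(settings: dict, integration: str) -> bool:
    """Hypothetical gate for an optional outbound integration.

    Returns True only when the customer has both enabled the integration
    and supplied their own endpoint. Missing configuration fails closed.
    """
    cfg = settings.get("integrations", {}).get(integration, {})
    # Both conditions are customer-controlled; absence of either means no egress.
    return bool(cfg.get("enabled")) and bool(cfg.get("endpoint"))
```

With no configuration at all, `outbound_allowed({}, "sentry")` is `False`: the fail-closed default, not an opt-out.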
Enterprise governance
The artifacts that produce the customer's modernized stack — code, ADRs, architecture diagrams, migration scripts, test data — are reviewable, auditable, and owned by the customer. PeerAI claims no rights to any output.
What 'just the accelerator' means in practice
- You can decommission Studio without losing artifacts
Code committed to your repos stays in your repos. Architecture documents land in your wiki. Migration scripts run from your CI. PeerAI is a tool that produced them — removing the tool does not remove the work.
- There is no PeerAI runtime dependency in production
Studio is a development-time accelerator. The systems it helps you build run on your infrastructure, against your databases, with your authentication, on your release cadence.
- Vendor lock-in risk is structurally low
Conventional vendor lock-in comes from data gravity (your data sits in their infra) or feature gravity (your runtime depends on their service). PeerAI has neither: data stays on your endpoint, runtime stays in your control.
Risk profile: we optimize for being unnecessary later
The product roadmap intentionally avoids creating runtime dependencies. New features accelerate development; they don't create services your stack needs to call back to.
Where this changes procurement math
Most AI-vendor risk assessments run two thought experiments: what if their security fails, and what happens if we want to leave? PeerAI's answers to both are unusually favorable, not because of vendor diligence (though that is there too; see Vulnerabilities and Compliance) but because of the operating model:
- Security failure blast radius is local. A compromise of Studio affects a single endpoint, not a shared data plane.
- Exit cost is near-zero. Removing PeerAI does not require data migration, system rewrites, or feature rebuilds. The artifacts are already yours.
- Provider risk is unbundled. LLM provider risk sits between you and the provider; PeerAI does not aggregate or proxy that traffic.
Input is yours. Output is yours.
PeerAI is just the accelerator.