Subprocessors
Studio runs on your endpoint. Only a handful of services touch operational data, and most of those are selected and contracted by the customer rather than by PeerAI.
Because PeerAI Studio is an installable, customer-controlled platform, the conventional notion of 'subprocessor' applies primarily to the few services we contract operationally (release hosting, optional crash reporting). The model providers and databases that handle customer code, prompts, and queries are selected and contracted by the customer.
Direct PeerAI subprocessors
Services PeerAI contracts to operate the product. None of these process customer code, prompts, or model outputs by default.
- Source of truth for releases, public binaries, and the Homebrew tap. No customer code or runtime data flows here.
- Hosts release binaries, SBOM artifacts, and trust-portal JSON manifests. Public-read; no customer data.
- Crash reporting from PeerAI Studio (Rust + frontend). Customer-toggleable in Settings and disabled by default in privacy-conscious deployments.
- Optional LLM observability. When enabled, traces go to a customer-controlled Phoenix instance; PeerAI does not receive them. A configuration sketch follows this list.
- Hosts the public Trust Center (this site) and the marketplace. Static + ISR content; no customer data.
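For the crash-reporting and observability entries above, here is a minimal sketch of what "disabled by default, customer-toggleable" can look like in a deployment configuration. The struct shape, field names, and the example Phoenix collector URL are illustrative assumptions, not Studio's actual settings schema.

```rust
// Illustrative telemetry defaults: nothing leaves the host until an admin opts in.
#[derive(Debug)]
struct TelemetrySettings {
    /// Crash reports are opt-in; nothing is sent unless this is enabled in Settings.
    crash_reporting_enabled: bool,
    /// When set, LLM traces go to an endpoint the customer controls
    /// (for example a self-hosted Phoenix collector), never to PeerAI.
    trace_collector_url: Option<String>,
}

impl Default for TelemetrySettings {
    fn default() -> Self {
        Self {
            crash_reporting_enabled: false, // privacy-conscious default
            trace_collector_url: None,      // observability stays off until configured
        }
    }
}

fn main() {
    // Out-of-the-box deployment: no crash reports, no traces leave the endpoint.
    println!("{:?}", TelemetrySettings::default());

    // A customer opting in points traces at their own Phoenix instance.
    let opted_in = TelemetrySettings {
        crash_reporting_enabled: true,
        trace_collector_url: Some("http://phoenix.internal:6006/v1/traces".into()),
    };
    println!("{opted_in:?}");
}
```

Either way, the collector URL is something the customer controls; there is no PeerAI-hosted destination for traces.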
Customer-selected LLM providers
Studio supports the following providers via BYO API key. The customer contracts directly with the provider; PeerAI does not proxy, cache, or aggregate. Verify your chosen provider's data-handling terms (no-train tier, data residency, retention) before broad rollout. A configuration sketch follows the list below.
- OpenAI: GPT-4 / GPT-4o / o1 family.
- Anthropic: Claude 4.x family.
- Google: Gemini family.
- Azure OpenAI: Microsoft-hosted OpenAI models.
- Amazon Bedrock: Anthropic, Mistral, Meta, and other models on AWS.
- Cohere: Command, embed, and rerank models.
- Aggregator routing to any of the above.
- Open-source models via inference endpoints.
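The BYO-key model above can be pictured as follows: the credential comes from the customer's own environment and requests go straight from Studio on your endpoint to the chosen provider. The enum, the env-var lookup, and the assumption that Studio resolves keys this way are illustrative; only the provider endpoints themselves are public.

```rust
use std::env;

/// Providers the customer may have contracted with directly (two shown for brevity).
#[allow(dead_code)]
enum Provider {
    OpenAi,
    Anthropic,
}

impl Provider {
    /// The customer-supplied credential; PeerAI never sees, proxies, or stores it.
    fn api_key(&self) -> Option<String> {
        match self {
            Provider::OpenAi => env::var("OPENAI_API_KEY").ok(),
            Provider::Anthropic => env::var("ANTHROPIC_API_KEY").ok(),
        }
    }

    /// Calls go directly to the provider's own endpoint.
    fn base_url(&self) -> &'static str {
        match self {
            Provider::OpenAi => "https://api.openai.com/v1",
            Provider::Anthropic => "https://api.anthropic.com/v1",
        }
    }
}

fn main() {
    let provider = Provider::OpenAi;
    match provider.api_key() {
        Some(_key) => println!("using a customer-supplied key against {}", provider.base_url()),
        None => println!("no key configured; this provider simply stays unused"),
    }
}
```

The point of the sketch is contractual rather than technical: because the key and the endpoint both belong to the customer's provider relationship, the DPA for model traffic sits between the customer and that provider.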
Customer-selected databases
Studio's data-migration and analysis features connect to databases you provide. PeerAI never holds customer database content.
- Customer-selected MongoDB instance for Studio's local app data.
- Optional target for analytics / data migration.
- BYO connection string; Studio connects to whatever you provide (see the sketch after this list).
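Since Studio connects to whatever connection string you provide, the only useful illustration is a scheme sanity check. This is a minimal sketch; the env-var name and the fallback URI are hypothetical, not Studio's documented configuration.

```rust
use std::env;

/// Accept only MongoDB-style URIs so a mistyped string is caught early.
fn looks_like_mongodb_uri(uri: &str) -> bool {
    uri.starts_with("mongodb://") || uri.starts_with("mongodb+srv://")
}

fn main() {
    // The customer supplies the connection string; PeerAI never holds the data behind it.
    let uri = env::var("STUDIO_MONGODB_URI")
        .unwrap_or_else(|_| "mongodb://localhost:27017/peerai_studio".to_string());

    if looks_like_mongodb_uri(&uri) {
        println!("would connect to the customer-selected instance at {uri}");
    } else {
        eprintln!("unexpected scheme: expected mongodb:// or mongodb+srv://");
    }
}
```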
What this means for procurement
- PeerAI's subprocessor list is intentionally short: we don't need GPU providers, vector-DB hosts, or LLM aggregators because we are not running the models.
- DPA scope is the operational layer (release hosting, crash reporting). The model-provider DPA is between the customer and the provider.
- Adding a subprocessor requires a release-note entry and an update to this page. The trust portal is auto-published from CI; the commit SHA at the top tells you when the page was last refreshed (a verification sketch follows this list).
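For reviewers who want to confirm the publish commit themselves, here is a minimal sketch that pulls a commit SHA out of a locally downloaded trust-portal manifest. The file layout and the "commit" field name are assumptions; check the manifest you actually receive, and prefer a proper JSON parser for anything beyond a spot check.

```rust
use std::env;
use std::fs;

fn main() {
    // Path to a manifest you have already downloaded, e.g. trust-portal.json (hypothetical name).
    let path = env::args()
        .nth(1)
        .expect("usage: check-manifest <trust-portal.json>");
    let manifest = fs::read_to_string(&path).expect("could not read manifest file");

    // Naive extraction to stay dependency-free; a real check would parse the JSON
    // properly (for example with serde_json).
    match manifest.find("\"commit\"") {
        Some(idx) => {
            let value = manifest[idx..]
                .splitn(3, '"')                           // "", "commit", everything after the key
                .nth(2)
                .and_then(|rest| rest.split('"').nth(1)); // the quoted value following the colon
            match value {
                Some(sha) => println!("trust portal last published at commit {sha}"),
                None => eprintln!("found a \"commit\" key but could not read its value"),
            }
        }
        None => eprintln!("no \"commit\" field found; the field name may differ"),
    }
}
```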