PeerAI Trust Center
Infrastructure

Subprocessors

Studio runs on your endpoint. Only a few operational services act as PeerAI subprocessors; everything that handles customer content is selected and contracted by the customer.

Because PeerAI Studio is an installable, customer-controlled platform, the conventional notion of 'subprocessor' applies primarily to the few services we contract operationally (release hosting, optional crash reporting). The model providers and databases that handle customer code, prompts, and queries are selected and contracted by the customer.

a390ee4

Direct PeerAI subprocessors

Services PeerAI contracts to operate the product. None of these process customer code, prompts, or model outputs by default.

GitHub

Source of truth for releases, public binaries, and the Homebrew tap. No customer code or runtime data flows here.

Global
Google Cloud Storage

Hosts release binaries, SBOM artifacts, and trust-portal JSON manifests. Public-read; no customer data.

US
Sentry
Optional

Crash reporting from PeerAI Studio (Rust + frontend). Customer-toggleable in Settings; disabled by default in privacy-conscious deployments.

US / EU (configurable)
Arize Phoenix
Optional

Optional LLM observability. When enabled, traces go to a customer-controlled Phoenix instance — PeerAI does not receive them.

Customer-hosted
Vercel

Hosts the public Trust Center (this site) and the marketplace. Static + ISR content; no customer data.

Global edge

Customer-selected LLM providers

Studio supports the following providers via BYO API key. The customer contracts directly with the provider; PeerAI does not proxy, cache, or aggregate. Verify your chosen provider's data-handling terms (no-train tier, data residency, retention) before broad rollout.

OpenAI
Customer-selected

GPT-4 / GPT-4o / o1 family.

Anthropic
Customer-selected

Claude 4.x family.

Google
Customer-selected

Gemini family.

Azure OpenAI
Customer-selected

Microsoft-hosted OpenAI models.

AWS Bedrock
Customer-selected

Anthropic, Mistral, Meta, others on AWS.

Cohere
Customer-selected

Command, embed, rerank models.

OpenRouter
Customer-selected

Aggregator routing to any of the above.

HuggingFace
Customer-selected

Open-source models via inference endpoints.
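The BYO-key model above can be sketched in code. This is an illustrative example only: the environment-variable names and the `resolve_api_key` helper are assumptions for the sketch, not PeerAI Studio's actual configuration surface. The point it demonstrates is that the key is read from the customer's own environment and sent directly to the provider, with no PeerAI proxy in between.

```python
import os

# Hypothetical mapping of provider IDs to the environment variables a
# customer might set. These names are illustrative, not Studio's settings.
PROVIDER_KEY_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
    "azure-openai": "AZURE_OPENAI_API_KEY",
    "bedrock": "AWS_ACCESS_KEY_ID",   # Bedrock uses standard AWS credentials
    "cohere": "COHERE_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "huggingface": "HF_TOKEN",
}


def resolve_api_key(provider: str) -> str:
    """Return the customer's own key for a provider, or raise if unset.

    The key never leaves the customer environment: the client sends it
    straight to the chosen provider, so there is nothing to proxy,
    cache, or aggregate on the vendor side.
    """
    var = PROVIDER_KEY_VARS[provider]
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set {var} to use the {provider} provider")
    return key
```

Because the contract is between the customer and the provider, rotating or revoking a key is also entirely on the customer side; nothing in this flow requires coordination with PeerAI.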

Customer-selected databases

Studio's data-migration and analysis features connect to databases you provide. PeerAI never holds customer database content.

MongoDB
Customer-selected

Customer-selected MongoDB instance for Studio's local app data.

PostgreSQL
Customer-selected

Optional; used as an analytics or data-migration target.

Other databases
Customer-selected

BYO connection string; Studio connects to whatever you provide.
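The BYO-connection-string model can be illustrated with a small sketch. The `describe_dsn` helper below is hypothetical, not part of Studio; it shows the relevant property, namely that a connection string supplied by the customer can be parsed and used entirely locally, so the database credentials and contents never pass through PeerAI.

```python
from urllib.parse import urlsplit


def describe_dsn(dsn: str) -> dict:
    """Split a database URL into its parts without sending it anywhere.

    Works for common URL-style DSNs (postgresql://, mongodb://, ...).
    A Studio-style client would parse locally like this and then open
    the connection itself.
    """
    parts = urlsplit(dsn)
    return {
        "scheme": parts.scheme,            # e.g. postgresql, mongodb
        "host": parts.hostname,
        "port": parts.port,
        "database": parts.path.lstrip("/"),
    }
```
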

What this means for procurement

  • PeerAI's subprocessor list is intentionally short: we don't need GPU providers, vector-DB hosts, or LLM aggregators because we're not running the model.
  • DPA scope is the operational layer (release hosting, crash reporting). The model-provider DPA is between the customer and the provider.
  • Adding a subprocessor requires a release-note entry and an update to this page. The trust portal is auto-published from CI; the SHA at the top tells you when the page last refreshed.
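Because releases and trust-portal artifacts are published with SBOMs, a procurement or security team can verify a downloaded binary against a published checksum. The sketch below assumes a SHA-256 checksum is published alongside each release; the function names and the checksum format are illustrative, not PeerAI's documented layout.

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of a downloaded artifact."""
    return hashlib.sha256(data).hexdigest()


def verify_release(binary: bytes, published_checksum: str) -> bool:
    """True when the binary matches the checksum published for the release.

    A mismatch means the download is corrupt or not the artifact the
    vendor actually published, and should not be installed.
    """
    return sha256_hex(binary) == published_checksum.strip().lower()
```
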