PeerAI Trust Center

Responsible AI

Input is yours. Output is yours. PeerAI is just the accelerator.

PeerAI Studio is scaffolding for AI-native delivery — the orchestration layer between your developers and the model providers you choose. Our responsible-AI posture follows from that operating model: we don't host the model, we don't aggregate prompts, we don't see outputs, and we don't claim rights to anything you produce.


Commitments

  • Customer chooses the model provider

    Studio supports OpenAI, Anthropic, Google, Azure OpenAI, AWS Bedrock, Cohere, OpenRouter, and Hugging Face. Each customer contracts directly with the provider; PeerAI does not act as a reseller or aggregator.

  • No training on customer data — by architecture

    Customer prompts are sent only to the customer-selected provider, under that provider's training/usage terms. Customers should select providers and tiers (e.g., enterprise / no-train) per their data-handling policies. PeerAI never trains models on customer data.

  • Outputs belong to the customer

    Model outputs are the customer's. PeerAI claims no rights to generated code, generated documents, or any artifact produced by the agent.

  • Human-in-the-loop by default

    Destructive or ambiguous actions (file overwrites, code execution, external requests) require user confirmation. Auto-mode is opt-in and scoped.

  • Evaluation and observability

    Optional Arize Phoenix integration provides trace-level observability of agent runs, retrieval quality, and model behaviour — entirely customer-controlled and customer-hosted.

  • Source attribution for retrieved content

    Ask PeerAI and code-RAG flows attach source citations. Outputs derived from retrieval can be traced back to the source document and chunk.

  • Configurable guardrails

    Per-agent allowlists for tools, scoped file-system access, restricted execution paths. Defaults are conservative; customers tune them.
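
    The guardrail model described above — a per-agent tool allowlist, confirmation for destructive actions, and opt-in auto-mode — can be sketched as a small policy check. The names (`AgentPolicy`, `DESTRUCTIVE_TOOLS`, the tool strings) are illustrative, not Studio's actual API.

    ```python
    from dataclasses import dataclass

    # Illustrative sketch of per-agent guardrails; not PeerAI Studio's real API.
    DESTRUCTIVE_TOOLS = {"file_overwrite", "code_execution", "external_request"}

    @dataclass
    class AgentPolicy:
        allowed_tools: set          # per-agent tool allowlist
        auto_mode: bool = False     # auto-mode is opt-in; default asks the user

        def check(self, tool: str) -> str:
            """Return 'reject', 'confirm', or 'allow' for a proposed tool call."""
            if tool not in self.allowed_tools:
                return "reject"      # outside the allowlist: hard reject
            if tool in DESTRUCTIVE_TOOLS and not self.auto_mode:
                return "confirm"     # destructive action: confirm with the user
            return "allow"

    policy = AgentPolicy(allowed_tools={"read_file", "file_overwrite"})
    print(policy.check("code_execution"))  # reject (not on the allowlist)
    print(policy.check("file_overwrite"))  # confirm (destructive, auto-mode off)
    print(policy.check("read_file"))       # allow
    ```

    Scoping auto-mode per agent, rather than globally, keeps a permissive coding agent from widening the blast radius of every other agent in the workspace.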

What customers should verify with their LLM provider

Because the provider is customer-selected, the data-handling terms come from the provider — not from PeerAI. Below is the checklist we recommend customers complete with their chosen provider before broad rollout.

  • No-train / opt-out tier

    Most providers offer an enterprise tier where prompts and completions are not used to train models. Confirm this is enabled on the API key Studio uses.

  • Data residency

    Confirm which provider-hosted regions (US, EU, etc.) are available, and choose the region that matches your data-residency requirements.

  • Retention

    Confirm provider-side retention windows for prompts and completions (often 0 days on enterprise tiers, longer on default tiers).

  • DPA and subprocessors

    Sign the provider's DPA; review their subprocessor list; confirm GDPR/CCPA posture.

  • Logging and auditability

    Provider-side audit logs of API usage; export to your SIEM if needed.
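
    One way to make the checklist above enforceable is to encode it as a pre-rollout check that gates deployment. The field names and acceptance rules below are examples of such a check, not a PeerAI Studio feature.

    ```python
    from dataclasses import dataclass

    # Illustrative pre-rollout check for a customer's provider configuration.
    @dataclass
    class ProviderConfig:
        no_train: bool        # enterprise / no-train tier enabled on the API key
        region: str           # provider-hosted region, e.g. "eu"
        retention_days: int   # provider-side retention for prompts/completions
        dpa_signed: bool      # data processing agreement in place

    def rollout_issues(cfg: ProviderConfig, required_region: str = "eu") -> list:
        """Return a list of checklist violations; empty means clear to roll out."""
        issues = []
        if not cfg.no_train:
            issues.append("prompts/completions may be used for training")
        if cfg.region != required_region:
            issues.append(f"region {cfg.region!r} violates residency requirement")
        if cfg.retention_days > 0:
            issues.append(f"retention is {cfg.retention_days} days, expected 0")
        if not cfg.dpa_signed:
            issues.append("no DPA signed with the provider")
        return issues

    cfg = ProviderConfig(no_train=True, region="us", retention_days=30, dpa_signed=True)
    print(rollout_issues(cfg))  # flags the region mismatch and non-zero retention
    ```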

Evaluation, observability, and guardrails

  • Arize Phoenix integration

    Optional OpenTelemetry-based tracing of LLM calls. When enabled, traces go to a customer-controlled Phoenix instance (cloud or self-hosted). PeerAI does not receive these traces.

  • Per-agent tool allowlists

    Each agent declares the tools it may call. Tool calls outside the allowlist are rejected by the runtime.

  • Output validation

    Structured outputs are validated against schema before reaching downstream consumers. Failed validations surface to the user.

  • Confirmation prompts for destructive actions

    File overwrites, code execution, and external requests require user confirmation in interactive modes.
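
    The Phoenix integration described above is standard OpenTelemetry plumbing. The configuration fragment below sketches how traces could be routed to a customer-controlled collector; the endpoint shown is Phoenix's default local collector address used purely as an example, not a Studio default.

    ```python
    # Sketch: route OpenTelemetry traces to a customer-controlled Phoenix
    # instance (cloud or self-hosted). Requires the opentelemetry-sdk and
    # opentelemetry-exporter-otlp-proto-http packages.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

    provider = TracerProvider()
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:6006/v1/traces"))
    )
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("studio.agent")
    with tracer.start_as_current_span("llm_call"):
        pass  # spans emitted here go to the customer's Phoenix, not to PeerAI
    ```

    Because the exporter endpoint is the only routing decision, pointing it at a self-hosted collector is enough to keep all trace data inside the customer's boundary.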
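
    Output validation as described above amounts to parsing and type-checking the model's structured output before anything downstream consumes it. The sketch below uses a simplified key-to-type schema as a stand-in for whatever schema format Studio actually uses.

    ```python
    # Illustrative structured-output validation: parse a model's JSON output
    # and check it against a minimal schema. Failed validations are returned
    # as errors to surface to the user, rather than passed downstream.
    import json

    SCHEMA = {"title": str, "severity": str, "line": int}  # expected keys and types

    def validate_output(raw: str):
        """Return (parsed, errors); a non-empty errors list blocks the output."""
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as exc:
            return None, [f"not valid JSON: {exc}"]
        errors = []
        for key, typ in SCHEMA.items():
            if key not in data:
                errors.append(f"missing field {key!r}")
            elif not isinstance(data[key], typ):
                errors.append(f"field {key!r} should be {typ.__name__}")
        return data, errors

    good = '{"title": "null deref", "severity": "high", "line": 42}'
    bad = '{"title": "null deref", "severity": "high", "line": "42"}'
    print(validate_output(good)[1])  # []
    print(validate_output(bad)[1])   # ["field 'line' should be int"]
    ```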

Use restrictions

  • Acceptable Use Policy

    Studio may not be used to generate disallowed content (per the underlying model provider's policies) or for activities prohibited by applicable law.

  • Reporting concerns

    Report misuse, biased output, or safety concerns to security@peerislands.com.
