Security Policy

Reporting a Vulnerability

The PeerAI Studio team takes security vulnerabilities seriously. We appreciate your efforts to responsibly disclose your findings.

How to Report

Please do NOT report security vulnerabilities through public GitHub issues.

Instead, please report them via email to:

security@peerislands.com

Include the following information in your report:

  • Type of vulnerability (e.g., XSS, SQL injection, authentication bypass)
  • Full paths of affected source files
  • Step-by-step instructions to reproduce
  • Proof-of-concept or exploit code (if possible)
  • Impact assessment
  • Suggested fix (if any)

What to Expect

  • Acknowledgment: We will acknowledge receipt within 48 hours
  • Assessment: We will assess the vulnerability within 7 days
  • Resolution: We aim to resolve critical issues within 30 days
  • Disclosure: We will coordinate disclosure timing with you

Safe Harbor

We consider security research conducted in accordance with this policy to be:

  • Authorized concerning any applicable anti-hacking laws
  • Authorized concerning any relevant anti-circumvention laws
  • Exempt from restrictions in our Terms of Service that would interfere with conducting security research

You are expected, in good faith, to:

  • Avoid privacy violations, destruction of data, and interruption of services
  • Only interact with accounts you own or have explicit permission to test
  • Not publicly disclose vulnerabilities before we've had a chance to address them

Supported Versions

Version   Supported
-------   ---------
0.3.x     Yes
< 0.3     No

Security Measures

PeerAI Studio implements the following security measures:

Application Security

  • Code Execution Sandboxing: Tool functions execute in a macOS sandbox with restricted file system and network access
  • Pattern Blocklist: Dangerous code patterns (eval, exec, subprocess) are blocked
  • Content Security Policy: Strict CSP headers prevent XSS attacks
  • CORS Restrictions: API only accepts requests from known origins
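As a rough illustration of how a pattern blocklist can work, the sketch below scans tool source for known-dangerous constructs before execution. The pattern list and function name here are hypothetical; PeerAI Studio's actual blocklist and matching logic are not published in this policy.

```python
import re

# Hypothetical blocklist: illustrative patterns only, not the
# actual rules used by PeerAI Studio.
BLOCKED_PATTERNS = [
    r"\beval\s*\(",        # eval(...)
    r"\bexec\s*\(",        # exec(...)
    r"\bsubprocess\b",     # any use of the subprocess module
    r"\b__import__\s*\(",  # dynamic imports
]

def is_code_blocked(source: str) -> bool:
    """Return True if the tool source matches any blocked pattern."""
    return any(re.search(pattern, source) for pattern in BLOCKED_PATTERNS)
```

Note that a blocklist like this is a defense-in-depth layer, not a substitute for the sandbox: pattern matching can be evaded, which is why execution is also confined at the OS level.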

Data Security

  • Secrets Storage: API keys and credentials stored in macOS Keychain
  • Atomic File Writes: Data persistence uses atomic writes with file locking
  • Input Validation: All API inputs validated using Pydantic models
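The atomic-write technique mentioned above can be sketched as follows: write to a temporary file in the same directory, flush to disk, then rename over the target. This is a minimal illustration; the real persistence layer also uses file locking, which is omitted here, and the function name is an assumption.

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write data to path atomically: a reader sees either the old
    contents or the new contents, never a partially written file.
    """
    dir_name = os.path.dirname(os.path.abspath(path))
    # Temp file must live on the same filesystem for the rename to be atomic.
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes reach disk before the rename
        os.replace(tmp_path, path)  # atomic on POSIX filesystems
    except BaseException:
        os.unlink(tmp_path)
        raise
```

The key property is that `os.replace` swaps the file in a single step, so a crash mid-write leaves the original file intact.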

Network Security

  • Local-only Backend: Python sidecar binds to localhost only
  • No External Network: Sandbox blocks outbound network from tool execution
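Binding to the loopback interface, as the sidecar does, can be sketched with the standard library; the function name below is illustrative, not the sidecar's actual startup code.

```python
import socket

def open_local_listener(port: int = 0) -> socket.socket:
    """Open a listening socket reachable only from this machine."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Binding to 127.0.0.1 (rather than 0.0.0.0) means other hosts on
    # the network cannot connect, even if no firewall is configured.
    sock.bind(("127.0.0.1", port))
    sock.listen()
    return sock
```

Because the address is loopback-only, the backend is never exposed on the local network regardless of firewall settings.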

Security Best Practices for Users

  1. Keep Updated: Always use the latest version of PeerAI Studio
  2. Review Tools: Carefully review any tool code before execution
  3. Protect API Keys: Never share or expose your API keys
  4. Report Issues: Report any suspicious behavior immediately

Contact

For security concerns: security@peerislands.com

For general inquiries: support@peerislands.com


Thank you for helping keep PeerAI Studio and our users safe!