AI Companion SDK Overview
Apgard's AI Companion SDK helps companies building AI companions deliver safe, compliant, and caring chat experiences. It tracks user interactions in real time and provides guidance for keeping conversations safe.
Starting in 2026, California SB 243 will regulate AI companion chatbots. Operators must detect and respond to self-harm and suicidal ideation, remind users to take breaks, and, for minors, block sexually explicit content and disclose that the user is chatting with an AI.
Key Features
Content Monitoring & Intervention
Detects risks like self-harm or sexually explicit material in real time and provides response guidance.
Break Tracking
Tracks ongoing chat sessions and notifies users when to take breaks. Default: every 3 hours.
Age Prediction Infrastructure
Start collecting safe, privacy-preserving signals to power your own age-prediction workflows.
Visual Dashboard
Gain insights into safety performance across user sessions and flagged interactions.

Get Access
To start using the Apgard SDK:
- Fill out the developer access form.
- Once approved, log in and generate your API key from the dashboard.
- Keep your key secret — it authenticates all SDK requests.
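Because the key authenticates every request, avoid hard-coding it. One common pattern is to load it from an environment variable; the variable name `APGARD_API_KEY` below is our convention for this sketch, not something the SDK requires:

```python
import os

def load_api_key(env_var: str = "APGARD_API_KEY") -> str:
    """Read the Apgard API key from the environment (hypothetical variable name)."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before initializing the SDK")
    return key

# For demonstration only: inject a placeholder key, then load it.
os.environ["APGARD_API_KEY"] = "sk-demo-not-a-real-key"
```

This keeps the key out of source control and lets you rotate it without a code change.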

Installation
The SDK is available on PyPI:
pip install apgard

Getting Started
Initialize your client using your API key:
from apgard import ApgardClient
client = ApgardClient(api_key="YOUR_API_KEY")

Create or fetch an Apgard user ID and start a conversation:
user_id = client.get_or_create_user_id("external_user_123")  # Optional mapping to your user ID
conversation_id = client.moderation.start_conversation(user_id)

Content Monitoring
The SDK provides real-time message moderation for self-harm and sexually explicit content. Responses include recommended actions and suggested AI messages.
result = client.moderation.moderate_message(
    user_id="user_123",
    conversation_id="conversation_123",
    content="I feel hopeless...",
    role="user",
)

Response Schema
class ChatIntervention:
    message_id: str
    should_intervene: bool
    severity: str  # "low" | "medium" | "high" | "critical"
    action: str  # e.g. "block", "provide_crisis_hotline", "guide_mental_reflection"
    risk_type: List[str]  # e.g. ["self_harm", "sexual_content"]
    suggested_message: Optional[str]

Example Usage
if result.should_intervene:
    ai.send_message(result.suggested_message)

Break Tracking
Configure break intervals to promote healthy usage patterns and meet compliance requirements:
client = ApgardClient(api_key="YOUR_API_KEY", break_interval_minutes=180)

Track Activity
break_status = client.breaks.record_activity(user_id="user_123")
if break_status.break_due:
    print(break_status.message)

Response Schema
class BreakStatus:
    break_due: bool
    message: str
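Putting the schemas together, a moderation handler in your chat loop might look like the sketch below. The ChatIntervention dataclass is a local stand-in mirroring the schema shown earlier so the example is self-contained; handle_intervention and its fallback policy are illustrative choices, not part of the SDK:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChatIntervention:
    # Local stand-in mirroring the SDK's moderation response schema.
    message_id: str
    should_intervene: bool
    severity: str                      # "low" | "medium" | "high" | "critical"
    action: str                        # e.g. "block", "provide_crisis_hotline"
    risk_type: List[str] = field(default_factory=list)
    suggested_message: Optional[str] = None

def handle_intervention(result: ChatIntervention) -> Optional[str]:
    """Return the message the AI should send, or None to proceed normally.

    Illustrative policy: surface the SDK's suggested message whenever an
    intervention is flagged; otherwise let the conversation continue.
    """
    if not result.should_intervene:
        return None
    if result.suggested_message:
        return result.suggested_message
    # No suggested text: send nothing and drop the exchange (assumed policy).
    return "" if result.action == "block" else None

# Example: a high-severity self-harm flag with a suggested reply.
result = ChatIntervention(
    message_id="msg_1",
    should_intervene=True,
    severity="high",
    action="provide_crisis_hotline",
    risk_type=["self_harm"],
    suggested_message="You are not alone. Help is available at the 988 Lifeline.",
)
```

In a real integration you would pass the value returned by client.moderation.moderate_message directly to such a handler instead of constructing it by hand.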