PapiAI

The best standalone AI agent library in PHP.

Building AI Applications with PHP

PapiAI is a framework-agnostic, type-safe PHP library for building AI agents that reason, use tools, and produce structured output. It works with any PHP 8.2+ application — standalone scripts, Laravel, Symfony, or your own framework — and has zero runtime dependencies in its core package.

The library follows an interface-first design: a single ProviderInterface abstracts all LLM providers behind a common API, while capability interfaces (EmbeddingProviderInterface, ImageProviderInterface, TextToSpeechProviderInterface, TranscriptionProviderInterface) let you access provider-specific features through clean contracts. Swap Anthropic for OpenAI or Google without changing a single line of your agent logic.

PapiAI ships with 10 LLM providers, a voice service, 4 built-in middleware classes, official framework bridges for Laravel and Symfony, and a fluent builder API that makes agent configuration expressive and readable.

Multi-Provider

Anthropic, OpenAI, Google Gemini, Ollama, Mistral, Groq, Grok, DeepSeek, Cohere, Azure OpenAI — one API for all.

Tool Calling

Define tools as functions or class methods with PHP attributes. The agent decides when and how to use them.

Structured Output

Zod-like schema validation for LLM responses. Get typed, validated data back — not just text.

Streaming

First-class streaming support with text chunks and rich events (tool calls, results, errors).

Middleware Pipeline

Logging, caching, retry, rate limiting — composable middleware for production-grade agents.

Framework Bridges

Official Laravel and Symfony integrations with service providers, facades, DI, and queue support.

Installation

PapiAI is distributed as a set of Composer packages. The core package (papi-ai/papi-core) contains the agent runtime, tool system, schema validation, middleware pipeline, and all contracts. Each provider is a separate package that depends on core, so you only install what you actually use. Install core plus one or more providers:

composer require papi-ai/papi-core

# Pick your provider(s)
composer require papi-ai/anthropic   # Claude
composer require papi-ai/openai      # GPT-4o, o1
composer require papi-ai/google      # Gemini
composer require papi-ai/ollama      # Local models
composer require papi-ai/mistral     # Mistral
composer require papi-ai/groq        # Groq LPU
composer require papi-ai/grok        # xAI Grok
composer require papi-ai/deepseek    # DeepSeek
composer require papi-ai/cohere      # Cohere
composer require papi-ai/azure-openai # Azure OpenAI

For text-to-speech and audio services:

composer require papi-ai/elevenlabs  # ElevenLabs TTS
# OpenAI also supports TTS and transcription via the same provider package

Requirements

  • PHP 8.2+ with declare(strict_types=1) throughout
  • ext-curl for provider packages (all HTTP is performed directly with curl; no Guzzle or other HTTP abstraction layer)
  • Zero runtime dependencies in core — the core package requires nothing beyond PHP itself

Provider packages each require papi-ai/papi-core and ext-curl. For middleware that integrates with PSR standards, the core package suggests psr/log (for LoggingMiddleware) and psr/simple-cache (for CacheMiddleware) but does not require them.
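
The PSR interfaces alone are not enough at runtime; you also need concrete implementations. For example (these packages are common choices, not PapiAI requirements):

```shell
# Any PSR-3 logger works with LoggingMiddleware; Monolog is a common choice
composer require monolog/monolog

# Any PSR-16 cache works with CacheMiddleware; symfony/cache ships a Psr16Cache adapter
composer require symfony/cache
```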

Simple Example

The fastest way to get started: create a provider with your API key, wrap it in an Agent, and call run(). The agent sends your prompt to the LLM and returns a Response object containing the generated text, token usage, and any tool calls. The instructions parameter sets the system prompt that shapes the agent's personality and behavior:

use PapiAI\Core\Agent;
use PapiAI\Anthropic\AnthropicProvider;

$agent = new Agent(
    provider: new AnthropicProvider(apiKey: $_ENV['ANTHROPIC_API_KEY']),
    model: 'claude-sonnet-4-20250514',
    instructions: 'You are a helpful assistant.',
);

$response = $agent->run('What is 2 + 2?');
echo $response->text; // "4"

If you prefer a fluent API, the Agent::build() static method returns an AgentBuilder that chains configuration calls. The builder validates that a provider and model are set before creating the agent, and gives you a clean, readable way to configure tools, middleware, hooks, and generation parameters:

$agent = Agent::build()
    ->provider(new AnthropicProvider(apiKey: $_ENV['ANTHROPIC_API_KEY']))
    ->model('claude-sonnet-4-20250514')
    ->instructions('You are a helpful assistant.')
    ->maxTokens(4096)
    ->temperature(0.7)
    ->create();

$response = $agent->run('Tell me a joke');
echo $response->text;

Agentic PHP

An AI agent is more than a chatbot — it's an autonomous loop that reasons about a task, decides which tools to use, executes them, and incorporates the results before responding. PapiAI implements this loop natively: when an LLM response contains tool calls, the agent executes them, feeds the results back, and lets the model continue — repeating until the task is complete or maxTurns is reached. This section covers every building block you need to create production-grade agents.
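
Conceptually, the loop described above can be sketched in a few lines. This is a simplified illustration, not PapiAI's internal implementation — `$callLlm` stands in for a provider call and `$tools` maps tool names to handlers:

```php
// Simplified sketch of the agentic loop — illustrative only, not
// PapiAI's internal code. $callLlm stands in for a provider call;
// $tools maps tool names to handlers.
function runAgentLoop(callable $callLlm, array $tools, string $prompt, int $maxTurns = 10): string
{
    $messages = [['role' => 'user', 'content' => $prompt]];

    for ($turn = 0; $turn < $maxTurns; $turn++) {
        // Ask the model; a reply here is ['text' => string, 'toolCalls' => array]
        $reply = $callLlm($messages);

        if ($reply['toolCalls'] === []) {
            return $reply['text']; // no tool calls requested: task complete
        }

        // Execute each requested tool, append its result to the history,
        // then loop so the model can incorporate the results.
        foreach ($reply['toolCalls'] as $call) {
            $result = ($tools[$call['name']])($call['args']);
            $messages[] = ['role' => 'tool', 'name' => $call['name'], 'content' => $result];
        }
    }

    throw new RuntimeException('maxTurns reached before the task completed');
}
```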

Tool Calling

Tools give your agent the ability to take actions in the real world — query databases, call APIs, read files, or perform calculations. You define what the tool does and describe its parameters; the LLM decides when and how to call it. PapiAI handles the agentic loop automatically: the agent calls a tool, receives its result, and continues reasoning until the task is complete. You can define tools inline with closures using Tool::make():

use PapiAI\Core\Tool;

$weatherTool = Tool::make(
    name: 'get_weather',
    description: 'Get current weather for a city',
    parameters: [
        'city' => ['type' => 'string', 'description' => 'City name'],
    ],
    handler: fn(array $args) => fetchWeather($args['city']),
);

$agent = new Agent(
    provider: $provider,
    model: 'claude-sonnet-4-20250514',
    tools: [$weatherTool],
);

$response = $agent->run('What is the weather in London?');

Class-based Tools with Attributes

For more complex toolsets, group related tools into a class. PapiAI uses PHP 8 attributes to discover tool methods and extract parameter metadata automatically — no manual schema definition needed. The #[Tool] attribute marks a method as a tool with a description, and #[Description] annotates individual parameters. Call Tool::fromClass() to register all tools from a class at once:

use PapiAI\Core\Tool;
use PapiAI\Core\Attributes\Tool as ToolAttr;
use PapiAI\Core\Attributes\Description;

class WebTools
{
    #[ToolAttr('Fetch content from a URL')]
    public function fetchUrl(
        #[Description('The URL to fetch')] string $url,
        #[Description('Timeout in seconds')] int $timeout = 30,
    ): string {
        // Honor the timeout parameter via a stream context
        return file_get_contents($url, false, stream_context_create([
            'http' => ['timeout' => $timeout],
        ]));
    }

    #[ToolAttr('Search the web')]
    public function search(string $query, int $limit = 10): array
    {
        // Implementation omitted for brevity
        return [];
    }
}

$agent = new Agent(
    provider: $provider,
    model: 'claude-sonnet-4-20250514',
    tools: Tool::fromClass(WebTools::class),
);

Structured Output

LLMs return free-form text by default, but many applications need typed, validated data. PapiAI's schema system lets you define the exact shape of the response you expect — similar to Zod in TypeScript. When you pass an outputSchema, the agent instructs the LLM to return JSON matching your schema, then parses and validates the result into $response->data. This works across all providers that support structured output:

use PapiAI\Core\Schema\Schema;

$schema = Schema::object([
    'sentiment' => Schema::enum(['positive', 'negative', 'neutral']),
    'confidence' => Schema::number()->min(0)->max(1),
    'keywords' => Schema::array(Schema::string()),
]);

$response = $agent->run(
    prompt: 'Analyze: "Great product, highly recommend!"',
    options: ['outputSchema' => $schema],
);

$response->data['sentiment'];   // 'positive'
$response->data['confidence'];  // 0.95
$response->data['keywords'];    // ['great', 'recommend']
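
Because $response->data is a plain validated array, it hydrates easily into a typed value object. A minimal sketch — SentimentResult is an illustrative class for the schema above, not part of PapiAI:

```php
// Illustrative DTO for the sentiment schema above — not part of PapiAI.
final readonly class SentimentResult
{
    public function __construct(
        public string $sentiment,
        public float $confidence,
        /** @var list<string> */
        public array $keywords,
    ) {}

    public static function fromArray(array $data): self
    {
        return new self($data['sentiment'], $data['confidence'], $data['keywords']);
    }
}

// In practice you would pass $response->data here:
$result = SentimentResult::fromArray([
    'sentiment'  => 'positive',
    'confidence' => 0.95,
    'keywords'   => ['great', 'recommend'],
]);
echo $result->sentiment; // positive
```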

Schema Types

The schema API provides a fluent, composable way to describe data types. Each type supports constraints and modifiers that both validate the output and serve as hints to the LLM about what to generate:

Schema::string()                         // String values
Schema::string()->min(1)->max(100)       // Length constraints
Schema::string()->pattern('/regex/')     // Regex pattern

Schema::number()                         // Float values
Schema::integer()                        // Integer values
Schema::number()->min(0)->max(100)       // Range constraints

Schema::boolean()                        // Boolean values

Schema::array(Schema::string())          // Array of strings
Schema::array($item)->minItems(1)->maxItems(10)

Schema::object([                         // Object with properties
    'name' => Schema::string(),
    'age' => Schema::integer()->optional(),
])

Schema::enum(['a', 'b', 'c'])           // Enum values

// Modifiers work on any type
->nullable()           // Allow null
->optional()           // Not required in objects
->default('value')     // Default value
->description('...')   // Hint for the LLM

Streaming

Streaming lets you display responses as they're generated, rather than waiting for the full completion. PapiAI offers two streaming modes: stream() yields simple text chunks for basic use cases like printing to a terminal or sending to a browser. streamEvents() yields rich typed events that distinguish between text output, tool calls, tool results, completion, and errors — ideal for building interactive UIs or logging pipelines:

// Simple text streaming
foreach ($agent->stream('Tell me a story') as $chunk) {
    echo $chunk->text;
    flush();
}

// Rich event streaming
foreach ($agent->streamEvents('Use tools to help me') as $event) {
    // match arms must be expressions, so echo the match result
    echo match ($event->type) {
        'text'        => $event->text,
        'tool_call'   => "Calling: {$event->tool}\n",
        'tool_result' => "Result: " . json_encode($event->result) . "\n",
        'done'        => "\nComplete!\n",
        'error'       => "Error: {$event->error}\n",
    };
}
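
When streaming to a browser, the chunk iterator maps naturally onto server-sent events. A sketch with an illustrative helper — sendAsSse() is not part of PapiAI; it accepts any iterable of objects exposing a ->text property, such as the one stream() yields:

```php
// Illustrative SSE helper — not part of PapiAI. Forwards text chunks as
// server-sent events. $emit defaults to echo + flush so the browser
// receives each chunk as soon as it is generated.
function sendAsSse(iterable $chunks, ?callable $emit = null): void
{
    $emit ??= static function (string $frame): void {
        echo $frame;
        flush();
    };

    foreach ($chunks as $chunk) {
        // Each SSE frame is a "data:" line followed by a blank line.
        $emit('data: ' . json_encode(['text' => $chunk->text]) . "\n\n");
    }

    $emit("event: done\ndata: {}\n\n");
}

// In a controller:
// header('Content-Type: text/event-stream');
// sendAsSse($agent->stream('Tell me a story'));
```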

Hooks

Hooks provide observability into the agent's execution without modifying its behavior. They fire at key moments in the agentic loop: beforeToolCall runs before each tool invocation (useful for logging or authorization), afterToolCall runs after with the result and duration (useful for metrics), and onError catches any exception during execution. Hooks are closures passed at construction time:

$agent = new Agent(
    provider: $provider,
    model: 'claude-sonnet-4-20250514',
    tools: $tools,
    hooks: [
        'beforeToolCall' => function (string $name, array $input) {
            Log::info("Calling tool: {$name}", $input);
        },
        'afterToolCall' => function (string $name, mixed $result, float $duration) {
            Metrics::timing("tool.{$name}", $duration);
        },
        'onError' => function (Throwable $error) {
            Sentry::captureException($error);
        },
    ],
);

Middleware

Middleware wraps the agent's execution pipeline, letting you add cross-cutting concerns like retry logic, rate limiting, caching, and logging. Each middleware receives the AgentRequest and a $next callable, and can modify the request, short-circuit with a cached response, or handle errors. Middleware is composable and executes in the order you add it. PapiAI ships with four built-in middleware classes, and you can implement MiddlewareInterface to create your own:

use PapiAI\Core\Middleware\RetryMiddleware;
use PapiAI\Core\Middleware\RateLimitMiddleware;
use PapiAI\Core\Middleware\LoggingMiddleware;
use PapiAI\Core\Middleware\CacheMiddleware;

$agent = Agent::build()
    ->provider($provider)
    ->model('claude-sonnet-4-20250514')
    ->addMiddleware(new RetryMiddleware(maxRetries: 3))
    ->addMiddleware(new RateLimitMiddleware(maxRequests: 60, perSeconds: 60))
    ->addMiddleware(new LoggingMiddleware($psrLogger))
    ->addMiddleware(new CacheMiddleware($psrCache, ttl: 3600))
    ->create();
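
To see why registration order matters, here is a reduced model of how such a pipeline composes. This MiniPipeline class is illustrative, not PapiAI's implementation (real custom middleware implements MiddlewareInterface); it shows the wrapping pattern where the first middleware added becomes the outermost layer:

```php
// Reduced model of middleware composition — illustrative only, not
// PapiAI's implementation. Each middleware is a
// callable(mixed $request, callable $next): mixed.
final class MiniPipeline
{
    /** @var list<callable> */
    private array $middleware = [];

    public function add(callable $mw): self
    {
        $this->middleware[] = $mw;
        return $this;
    }

    public function run(mixed $request, callable $handler): mixed
    {
        // Wrap right-to-left so the first middleware added runs outermost:
        // it sees the request first and the response last.
        $next = $handler;
        foreach (array_reverse($this->middleware) as $mw) {
            $next = fn(mixed $req): mixed => $mw($req, $next);
        }
        return $next($request);
    }
}
```

Each middleware can inspect the request on the way in, call $next to continue, and inspect the response on the way out — or skip $next entirely to short-circuit, which is how a cache hit avoids the provider call.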

Conversations

The Conversation class manages multi-turn message history. It tracks system prompts, user messages, and assistant responses in order, and produces the Message[] array that providers expect. Use it to build chatbots, maintain context across turns, or persist conversation state with a ConversationStoreInterface (file-based, Eloquent, or Doctrine):

use PapiAI\Core\Message;
use PapiAI\Core\Conversation;

$conversation = new Conversation();
$conversation->setSystem('You are a helpful assistant');
$conversation->addUser('Hello');
$conversation->addAssistant('Hi! How can I help?');
$conversation->addUser('Tell me a joke');

$messages = $conversation->getMessages();

Configuration Options

The Agent constructor accepts all configuration in one place. maxTokens caps the response length, temperature controls creativity (0.0 for deterministic, 1.0 for creative), and maxTurns limits how many tool-call loops the agent can execute before stopping — a safety measure to prevent runaway agents:

$agent = new Agent(
    provider: $provider,
    model: 'claude-sonnet-4-20250514',
    instructions: 'System prompt here',
    tools: [...],
    hooks: [...],
    maxTokens: 4096,        // Max response tokens
    temperature: 0.7,       // 0.0 = deterministic, 1.0 = creative
    maxTurns: 10,           // Max tool-use loops before stopping
);

Quick Start with Laravel

The Laravel bridge provides a service provider (auto-discovered), a Papi facade, Eloquent-based conversation storage, and queue integration. Install the bridge alongside your chosen provider:

composer require papi-ai/laravel papi-ai/openai

Add your API key to .env. The bridge reads these environment variables to configure the provider automatically:

PAPI_PROVIDER=openai
OPENAI_API_KEY=your-key-here

Use the facade anywhere in your application — controllers, artisan commands, jobs, or middleware. The facade proxies to a pre-configured Agent instance resolved from the service container:

use PapiAI\Laravel\Facades\Papi;

$response = Papi::run('What is the capital of France?');
echo $response->text;

// Streaming
foreach (Papi::stream('Tell me a story') as $chunk) {
    echo $chunk->text;
}

See the Laravel Bridge section for full configuration, middleware, conversation storage, and queue integration.

Quick Start with Symfony

The Symfony bridge is a bundle that wires providers, conversation stores, and queues into the dependency injection container. With Symfony Flex the bundle auto-registers; otherwise add it to config/bundles.php. Install alongside your chosen provider:

composer require papi-ai/symfony papi-ai/anthropic

Configure your providers in YAML. You can define multiple providers and select a default. The bundle creates service definitions for each, making them injectable by interface:

papi:
    default_provider: anthropic
    providers:
        anthropic:
            driver: PapiAI\Anthropic\AnthropicProvider
            api_key: '%env(ANTHROPIC_API_KEY)%'
            model: claude-sonnet-4-20250514

Inject the provider by type-hinting ProviderInterface in any service constructor. Symfony's autowiring resolves it to the default provider configured above:

use PapiAI\Core\Contracts\ProviderInterface;

class ChatController
{
    public function __construct(
        private ProviderInterface $provider,
    ) {}

    public function chat(string $message): string
    {
        $response = $this->provider->chat([
            Message::user($message),
        ]);
        return $response->text;
    }
}

See the Symfony Bridge section for full details.

Anthropic (Claude)

Anthropic's Claude models are known for careful reasoning, long context windows, and strong instruction following. The PapiAI Anthropic provider maps Claude's messages API to PapiAI's core types, handling the format differences transparently. It supports prompt caching via cache_control blocks for reduced latency on repeated prefixes, and extracts retry-after headers from rate limit responses for intelligent retry behavior.

composer require papi-ai/anthropic
use PapiAI\Anthropic\AnthropicProvider;

$provider = new AnthropicProvider(
    apiKey: $_ENV['ANTHROPIC_API_KEY'],
);

$agent = new Agent(
    provider: $provider,
    model: 'claude-sonnet-4-20250514',
    instructions: 'You are a helpful assistant.',
);

Models

  • claude-sonnet-4-20250514 (default)
  • claude-3-opus-20240229
  • claude-3-sonnet-20240229
  • claude-3-haiku-20240307

Capabilities

Chat
Streaming
Tool calling
Vision
Prompt caching

OpenAI

The most feature-rich provider in PapiAI. Beyond chat and streaming, the OpenAI provider implements embeddings (EmbeddingProviderInterface), text-to-speech (TextToSpeechProviderInterface), and audio transcription (TranscriptionProviderInterface). It also supports a configurable baseUrl and apiVersion, making it compatible with Azure OpenAI deployments and any OpenAI-compatible endpoint.

composer require papi-ai/openai
use PapiAI\OpenAI\OpenAIProvider;

$provider = new OpenAIProvider(
    apiKey: $_ENV['OPENAI_API_KEY'],
    defaultModel: OpenAIProvider::MODEL_GPT_4O,
);

Models

OpenAIProvider::MODEL_GPT_4O        // 'gpt-4o' (default, multimodal)
OpenAIProvider::MODEL_GPT_4O_MINI   // 'gpt-4o-mini' (fast, cost-effective)
OpenAIProvider::MODEL_GPT_4_5       // 'gpt-4.5-preview' (latest)
OpenAIProvider::MODEL_GPT_4_TURBO   // 'gpt-4-turbo' (high quality)
OpenAIProvider::MODEL_O1            // 'o1' (reasoning)
OpenAIProvider::MODEL_O1_PREVIEW    // 'o1-preview' (reasoning)
OpenAIProvider::MODEL_O1_MINI       // 'o1-mini' (fast reasoning)
OpenAIProvider::MODEL_O3_MINI       // 'o3-mini' (next-gen reasoning)

Capabilities

Chat
Streaming
Tool calling
Vision
Structured output
Embeddings
Text-to-speech
Transcription

Google Gemini

Google's Gemini family spans from fast, cost-effective models (Flash) to high-capability reasoning models (Pro). The PapiAI Google provider handles the Generative Language API's unique format — content parts, thought signatures for multi-turn tool use, and inline image data for vision. It also integrates with Google's Imagen models for AI image generation and editing via the ImageProviderInterface. Authentication is via API key passed as a query parameter.

composer require papi-ai/google
use PapiAI\Google\GoogleProvider;

$provider = new GoogleProvider(
    apiKey: $_ENV['GOOGLE_API_KEY'],
    defaultModel: GoogleProvider::MODEL_3_0_PRO,
);

Models

// Chat models
GoogleProvider::MODEL_3_1_PRO       // gemini-3.1-pro-preview (newest)
GoogleProvider::MODEL_3_0_PRO       // gemini-3-pro-preview
GoogleProvider::MODEL_3_FLASH       // gemini-3-flash-preview
GoogleProvider::MODEL_3_PRO_IMAGE   // gemini-3-pro-image-preview
GoogleProvider::MODEL_2_5_PRO       // gemini-2.5-pro
GoogleProvider::MODEL_2_5_FLASH     // gemini-2.5-flash
GoogleProvider::MODEL_2_5_FLASH_LITE // gemini-2.5-flash-lite
GoogleProvider::MODEL_2_0_FLASH     // gemini-2.0-flash
GoogleProvider::MODEL_2_0_FLASH_LITE // gemini-2.0-flash-lite
GoogleProvider::MODEL_1_5_PRO       // gemini-1.5-pro
GoogleProvider::MODEL_1_5_FLASH     // gemini-1.5-flash

// Image generation (Imagen)
GoogleProvider::IMAGEN_4            // imagen-4.0-generate-001
GoogleProvider::IMAGEN_4_FAST       // imagen-4.0-fast-generate-001
GoogleProvider::IMAGEN_4_ULTRA      // imagen-4.0-ultra-generate-001
GoogleProvider::IMAGEN_EDIT         // imagen-3.0-capability-001

Image Generation

$result = $provider->generateImage(
    prompt: 'A professional product photo of headphones',
    options: [
        'model' => GoogleProvider::IMAGEN_4,
        'aspectRatio' => '1:1',
        'numberOfImages' => 1,
    ]
);

$imageData = base64_decode($result['images'][0]['data']);
file_put_contents('output.png', $imageData);

// Or save directly
$provider->generateImageToFile(
    prompt: 'A modern minimalist workspace',
    outputPath: '/path/to/image.png',
);

Capabilities

Chat
Streaming
Tool calling
Vision
Structured output
Embeddings
Image generation
Image editing

Ollama

Run AI agents entirely on your own hardware with no API keys, no cloud dependencies, and no usage costs. The Ollama provider connects to a local Ollama server and supports any model you've pulled — Llama, Mistral, CodeLlama, Qwen, and more. The baseUrl defaults to http://localhost:11434 but can point to any Ollama-compatible server on your network. Ideal for development, privacy-sensitive workloads, and air-gapped environments.

composer require papi-ai/ollama
use PapiAI\Ollama\OllamaProvider;

// No API key needed — runs locally
$provider = new OllamaProvider(
    baseUrl: 'http://localhost:11434',  // default
    defaultModel: 'llama3.1',
);

Models

llama3.1            // General purpose (default)
codellama           // Code generation
mistral             // General purpose
mixtral             // Mixture of experts
qwen2.5-coder       // Code generation
nomic-embed-text    // Embeddings
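
Models must exist on the Ollama server before the provider can use them; pull them with the Ollama CLI first:

```shell
# Download a model to the local Ollama server
ollama pull llama3.1
```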

Capabilities

Chat
Streaming
Tool calling
Vision
Structured output
Embeddings

Mistral

Mistral AI offers high-performance European-hosted models with strong multilingual capabilities. The PapiAI Mistral provider supports chat, streaming, tool calling, vision, structured output, and text embeddings. Mistral Large is a competitive alternative to GPT-4 and Claude for complex reasoning tasks, while the embedding model is well-suited for multilingual RAG pipelines.

composer require papi-ai/mistral
use PapiAI\Mistral\MistralProvider;

$provider = new MistralProvider(
    apiKey: $_ENV['MISTRAL_API_KEY'],
);

Models

MistralProvider::MODEL_MISTRAL_LARGE  // 'mistral-large-latest' (default)
MistralProvider::MODEL_MISTRAL_EMBED  // 'mistral-embed' (embeddings)

Capabilities

Chat
Streaming
Tool calling
Vision
Structured output
Embeddings

Groq

Groq runs open-source models on custom LPU (Language Processing Unit) hardware, delivering inference speeds that are orders of magnitude faster than GPU-based providers. Use the Groq provider when latency matters more than model variety — it's an excellent choice for real-time applications, interactive tools, and high-throughput batch processing. The API is OpenAI-compatible, and PapiAI handles the format mapping automatically.

composer require papi-ai/groq
use PapiAI\Groq\GroqProvider;

$provider = new GroqProvider(
    apiKey: $_ENV['GROQ_API_KEY'],
);

Models

GroqProvider::MODEL_LLAMA_3_3_70B  // 'llama-3.3-70b-versatile' (default)
GroqProvider::MODEL_LLAMA_3_1_8B   // 'llama-3.1-8b-instant' (fast)
GroqProvider::MODEL_MIXTRAL_8X7B   // 'mixtral-8x7b-32768'

Capabilities

Chat
Streaming
Tool calling
Vision
Structured output


Grok (xAI)

Grok is xAI's large language model, designed for direct and informative responses. The Grok 3 family offers strong reasoning and vision capabilities through an OpenAI-compatible API. The PapiAI provider authenticates with the XAI_API_KEY environment variable and maps between PapiAI's core types and xAI's API format.

composer require papi-ai/grok
use PapiAI\Grok\GrokProvider;

$provider = new GrokProvider(
    apiKey: $_ENV['XAI_API_KEY'],
);

Models

GrokProvider::MODEL_GROK_3      // 'grok-3' (default)
GrokProvider::MODEL_GROK_3_MINI // 'grok-3-mini' (fast)
GrokProvider::MODEL_GROK_2      // 'grok-2'

Capabilities

Chat
Streaming
Tool calling
Vision
Structured output

DeepSeek

DeepSeek offers cost-effective models with strong coding and reasoning capabilities. The deepseek-chat model is a general-purpose assistant, while deepseek-reasoner specializes in step-by-step reasoning tasks like math, logic, and complex analysis. DeepSeek's API is OpenAI-compatible, making integration straightforward. A good choice when you need capable models at a lower price point.

composer require papi-ai/deepseek
use PapiAI\DeepSeek\DeepSeekProvider;

$provider = new DeepSeekProvider(
    apiKey: $_ENV['DEEPSEEK_API_KEY'],
);

Models

DeepSeekProvider::MODEL_DEEPSEEK_CHAT      // 'deepseek-chat' (default)
DeepSeekProvider::MODEL_DEEPSEEK_REASONER  // 'deepseek-reasoner' (reasoning)

Capabilities

Chat
Streaming
Tool calling
Structured output

Cohere

Cohere specializes in enterprise-grade language models with particular strength in retrieval-augmented generation (RAG). The Command R family excels at grounded generation — answering questions based on provided documents — and the embedding models support both English and multilingual text. Note that Cohere uses its own v2 Chat API, not the OpenAI format, so the PapiAI provider handles the complete format translation between Cohere's API and PapiAI's core types.

composer require papi-ai/cohere
use PapiAI\Cohere\CohereProvider;

$provider = new CohereProvider(
    apiKey: $_ENV['COHERE_API_KEY'],
);

Models

// Chat models
CohereProvider::MODEL_COMMAND_R_PLUS  // 'command-r-plus' (default)
CohereProvider::MODEL_COMMAND_R       // 'command-r'
CohereProvider::MODEL_COMMAND         // 'command'

// Embedding models
CohereProvider::MODEL_EMBED_ENGLISH       // 'embed-english-v3.0'
CohereProvider::MODEL_EMBED_MULTILINGUAL  // 'embed-multilingual-v3.0'

Capabilities

Chat
Streaming
Tool calling
Embeddings


Azure OpenAI

For organizations that need OpenAI models deployed within Azure's compliance, security, and regional data residency guarantees. The Azure OpenAI provider uses deployment-based URLs (your Azure resource endpoint + deployment name) instead of the standard OpenAI API. It authenticates via the api-key header and supports Azure Active Directory tokens. The API is functionally identical to OpenAI, but hosted on Azure infrastructure with enterprise SLAs.

composer require papi-ai/azure-openai
use PapiAI\AzureOpenAI\AzureOpenAIProvider;

$provider = new AzureOpenAIProvider(
    apiKey: 'your-azure-api-key',
    endpoint: 'https://myresource.openai.azure.com',
    deployment: 'gpt-4o',
);

Embeddings

$response = $provider->embed('Hello world', [
    'model' => 'text-embedding-ada-002',
]);

$vector = $response->first();

Capabilities

Chat
Streaming
Tool calling
Vision
Structured output
Embeddings


ElevenLabs

ElevenLabs is a voice service, not an LLM provider. It implements TextToSpeechProviderInterface only — you can't use it for chat, streaming, or tool calling. Use it to convert agent responses (or any text) into natural-sounding speech. The provider maps voice names to ElevenLabs voice IDs automatically, and supports custom voice IDs, multiple TTS models, and configurable audio output formats.

composer require papi-ai/elevenlabs
use PapiAI\ElevenLabs\ElevenLabsProvider;

$provider = new ElevenLabsProvider(
    apiKey: $_ENV['ELEVENLABS_API_KEY'],
);

$audio = $provider->synthesize('Hello world!');
$audio->save('output.mp3');

Voices

Built-in voices: Rachel (default), Domi, Bella, Antoni, Elli, Josh, Arnold, Adam, Sam. Custom voice IDs are also supported.

$audio = $provider->synthesize('Hello!', [
    'voice' => 'Josh',
    'model' => 'eleven_multilingual_v2',
]);

Capabilities

Text-to-speech


Laravel Bridge

The Laravel bridge integrates PapiAI into Laravel's ecosystem with zero boilerplate. It provides a PapiServiceProvider (auto-discovered via Laravel's package discovery), a Papi facade for quick access, an Eloquent-based conversation store for persisting chat history to your database, and a queue adapter that dispatches agent jobs through Laravel's queue system. Supports Laravel 10, 11, and 12.

composer require papi-ai/laravel papi-ai/openai

The service provider is auto-discovered. Publish the config to customize providers, middleware, and storage:

php artisan vendor:publish --tag=papi-config

Environment

The bridge reads provider configuration from environment variables. Set the default provider and its API key:

PAPI_PROVIDER=openai
OPENAI_API_KEY=your-openai-key
ANTHROPIC_API_KEY=your-anthropic-key

Facade

The Papi facade proxies to a pre-configured Agent instance. Use it in controllers, artisan commands, jobs, or anywhere in your Laravel application:

use PapiAI\Laravel\Facades\Papi;

// Simple prompt
$response = Papi::run('What is the capital of France?');
echo $response->text;

// Streaming
foreach (Papi::stream('Tell me a story') as $chunk) {
    echo $chunk->text;
}

// Add tools at runtime
Papi::addTool(new MyCustomTool());

// Add middleware at runtime
Papi::addMiddleware(new RateLimitMiddleware(maxRequests: 10));

Container Bindings

The bridge registers two bindings in Laravel's service container: papi resolves to the configured provider instance, and papi.agent resolves to a fully configured agent with middleware and tools applied:

// Get the configured provider
$provider = app('papi');

// Get the pre-configured agent
$agent = app('papi.agent');
$response = $agent->run('Hello!');

Middleware Configuration

Register middleware globally in config/papi.php. These will be applied to every agent request. You can also add middleware at runtime via the facade:

// config/papi.php
'middleware' => [
    \PapiAI\Core\Middleware\LoggingMiddleware::class,
    \PapiAI\Core\Middleware\RetryMiddleware::class,
],

Conversation Storage

PapiAI can persist conversation history between requests. The bridge supports two storage backends: file (default, stores JSON on disk) and eloquent (stores in your database via Eloquent). Switch to Eloquent storage in your config:

// config/papi.php
'conversation' => [
    'store' => 'eloquent',  // or 'file'
],

Create the migration for the Eloquent conversation store:

Schema::create('papi_conversations', function (Blueprint $table) {
    $table->string('id')->primary();
    $table->json('data');
    $table->timestamp('updated_at')->nullable();
});

Queue Integration

For long-running agent tasks, dispatch jobs to Laravel's queue system. The LaravelQueue adapter implements PapiAI's QueueInterface, letting you offload agent execution to background workers with job tracking and status monitoring:

use PapiAI\Laravel\Queue\LaravelQueue;
use PapiAI\Core\AgentJob;

$queue = app(LaravelQueue::class);

$jobId = $queue->dispatch(new AgentJob(
    agentClass: MyAgent::class,
    prompt: 'Process this in the background',
));

$status = $queue->status($jobId);

Symfony Bridge

The Symfony bridge is a fully-featured bundle that registers PapiAI providers, conversation stores, and queue adapters as services in Symfony's dependency injection container. It supports multiple provider configurations, YAML-based setup, and integrates with Doctrine for persistence and Symfony Messenger for async job dispatch. All services are injectable by interface with Symfony's autowiring.

composer require papi-ai/symfony papi-ai/anthropic

With Symfony Flex, the bundle is auto-registered. For manual setup, add it to config/bundles.php:

return [
    PapiAI\Symfony\PapiBundle::class => ['all' => true],
];

Configuration

Define your providers, middleware services, and conversation storage in YAML. You can configure multiple providers and set one as the default. The bundle creates container service definitions for each provider and wires them for injection:

# config/packages/papi.yaml
papi:
    default_provider: anthropic

    providers:
        openai:
            driver: PapiAI\OpenAI\OpenAIProvider
            api_key: '%env(OPENAI_API_KEY)%'
            model: gpt-4o

        anthropic:
            driver: PapiAI\Anthropic\AnthropicProvider
            api_key: '%env(ANTHROPIC_API_KEY)%'
            model: claude-sonnet-4-20250514

    middleware:
        - app.middleware.logging
        - app.middleware.rate_limit

    conversation:
        store: file
        path: '%kernel.project_dir%/var/papi/conversations'

Dependency Injection

Type-hint ProviderInterface in any service, controller, or command constructor. Symfony autowires it to the default provider. For named providers, use #[Autowire] attributes or explicit service wiring:

use PapiAI\Core\Contracts\ProviderInterface;

class ChatController
{
    public function __construct(
        private ProviderInterface $provider,
    ) {}

    public function chat(string $message): string
    {
        $response = $this->provider->chat([
            Message::user($message),
        ]);
        return $response->text;
    }
}

Doctrine Conversation Store

Persist conversation history to your database using Doctrine DBAL. The DoctrineConversationStore implements ConversationStoreInterface and stores conversations as JSON in a papi_conversations table. Install Doctrine DBAL and switch the store driver:

composer require doctrine/dbal

# config/packages/papi.yaml
papi:
    conversation:
        store: doctrine

-- Table schema
CREATE TABLE papi_conversations (
    id VARCHAR(255) PRIMARY KEY,
    data JSON NOT NULL,
    created_at DATETIME NOT NULL,
    updated_at DATETIME NOT NULL
);

Messenger Queue

Offload long-running agent tasks to background workers using Symfony Messenger. The MessengerQueue adapter implements QueueInterface, dispatching AgentJob messages through your configured transport (Redis, RabbitMQ, Doctrine, etc.). Install Messenger and inject the queue service:

composer require symfony/messenger
use PapiAI\Core\Contracts\QueueInterface;
use PapiAI\Core\AgentJob;

class AgentService
{
    public function __construct(
        private QueueInterface $queue,
    ) {}

    public function dispatchJob(string $agentClass, string $prompt): string
    {
        return $this->queue->dispatch(new AgentJob(
            agentClass: $agentClass,
            prompt: $prompt,
        ));
    }
}