# kent013/laravel-prism-prompt

Latest stable version: 0.15.1

Install via Composer:

```shell
composer require kent013/laravel-prism-prompt
```

## Package summary

Laravel Mailable-like API for LLM prompts with Prism

## README
Laravel Mailable-like API for LLM prompts with Prism.
Define your prompts as YAML templates + PHP classes. Move text and model settings out of code, separate the system message from the user message, parse responses into typed DTOs, and observe every call via events with USD cost attached — the same way you'd compose a Mailable.
```yaml
# resources/prompts/greeting.yaml
provider: anthropic
model: claude-sonnet-4-5-20250929
system_prompt: |
  You are a friendly greeting assistant.
  Always respond in JSON with "message" and "tone" fields.
prompt: |
  Say hello to {{ $userName }}.
```
```php
use Kent013\PrismPrompt\Prompt;

$result = Prompt::load('greeting', ['userName' => 'Alice'])->executeSync();
```
That's the whole loop. Everything else in this package is a layer on top: DTO mapping, multi-provider fallback, prompt-injection defence, parallel execution with prompt caching, observability, durable operations.
## Installation

```shell
composer require kent013/laravel-prism-prompt
```

Publish config:

```shell
# Provider defaults, cache, debug
php artisan vendor:publish --tag=prism-prompt-config

# Pricing table (per-model USD rates)
php artisan vendor:publish --tag=prism-prompt-pricing
```
## At a glance
| Need | Approach | Doc | Example |
|---|---|---|---|
| One-shot prompt with a YAML file | `Prompt::load('name', $vars)->executeSync()` | yaml-template | 01 |
| Typed DTO response (legacy / text JSON) | Subclass + `parseResponse()` + `extractJson()` | yaml-template | 02 |
| Schema-enforced DTO response (recommended) | Subclass + `getJsonSchema()` + `parseStructured()` | structured-output | 13 |
| Send chat history natively | Override `buildConversationMessages()` | yaml-template | 03 |
| Defend against prompt injection | `UserInput::from()` + `DefensiveInstructions` | prompt-injection | 06 |
| Multi-provider fallback (BYOK) | YAML `models[]` + `withApiKeys()` | providers | 08 |
| Parallel batch with shared cache | `PromptPool::executeWithWarmup()` | parallel-execution | 09 |
| Embeddings for RAG | `EmbeddingPrompt` | embedding | 10 |
| Chat assistant w/ history + defence | Combine `UserInput` + history override + DTO | prompt-injection | 11 |
| Multi-prompt pipeline (NPC reply → eval → hint) | Chain `Prompt` subclasses | yaml-template | 12 |
| Cost / usage / audit trail | Listen to `PromptExecutionCompleted` + `withMetadata()` | events-and-cost | 05 |
| Durable, resumable LLM operation | `PromptOperation::for()->claimOrFollow()` | prompt-operation | 07 |
| Test mocked LLM calls | `Prompt::fake()` + assertions | testing | 04 |
| Listener-based debug logs | `PRISM_PROMPT_DEBUG=true` | debug-logging | — |
## Settings priority

Settings resolve highest-wins:

- Class property (e.g. `protected ?float $temperature = 0.5`)
- YAML field
- Config default (`config('prism-prompt.default_*')`)
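The resolution order above can be sketched as a plain null-coalescing chain. This is an illustrative standalone function, not the package's internal resolver; the function name and signature are assumptions for the example:

```php
<?php
// Illustrative sketch of "highest wins" settings resolution:
// class property beats YAML field beats config default.
// The real package resolves these internally; this helper is hypothetical.
function resolveSetting(?float $classValue, ?float $yamlValue, ?float $configDefault): ?float
{
    // PHP's null-coalescing operator returns the first non-null operand.
    return $classValue ?? $yamlValue ?? $configDefault;
}

echo resolveSetting(0.5, 0.9, 0.7), PHP_EOL;   // 0.5  (class property wins)
echo resolveSetting(null, 0.9, 0.7), PHP_EOL;  // 0.9  (YAML wins)
echo resolveSetting(null, null, 0.7), PHP_EOL; // 0.7  (config default)
```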
## Subclassing

Use `Prompt::load()` for the simplest case. When you need typed DTOs, custom message construction, or fluent variables in `__construct`, subclass:
```php
use Kent013\PrismPrompt\Prompt;

/** @extends Prompt<GreetingResponse> */
class GreetingPrompt extends Prompt
{
    public function __construct(public readonly string $userName)
    {
        parent::__construct();
    }

    protected function parseResponse(string $text): GreetingResponse
    {
        $data = $this->extractJson($text);

        return new GreetingResponse($data['message'], $data['tone']);
    }
}

$result = (new GreetingPrompt('Alice'))->executeSync();
```
## Structured output (recommended for new code, since v0.15.0)

For prompts whose output must conform to a fixed shape, declare a Prism schema via `getJsonSchema()`. The base class will route the call through `Prism::structured()` and pass the decoded array straight to `parseStructured()`: no `extractJson()` regex, no YAML JSON example to drift out of sync.
```php
use Kent013\PrismPrompt\Prompt;
use Prism\Prism\Contracts\Schema;
use Prism\Prism\Schema\{ObjectSchema, StringSchema, NumberSchema};

/** @extends Prompt<GreetingResponse> */
class GreetingPrompt extends Prompt
{
    public function __construct(public readonly string $userName)
    {
        parent::__construct();
    }

    protected function getJsonSchema(): ?Schema
    {
        return new ObjectSchema(
            name: 'greeting_response',
            description: 'A greeting reply',
            properties: [
                new StringSchema('message', 'the greeting text'),
                new NumberSchema(
                    name: 'tone',
                    description: 'tone score',
                    minimum: 0.0,
                    maximum: 1.0,
                ),
            ],
            requiredFields: ['message', 'tone'],
        );
    }

    /** @param array<string, mixed> $data */
    protected function parseStructured(array $data): GreetingResponse
    {
        return new GreetingResponse($data['message'], (float) $data['tone']);
    }

    // Keep parseResponse as a transitional fallback for Prompt::fake([TextResponseFake])
    // tests, or remove once the legacy path is gone.
    protected function parseResponse(string $text): GreetingResponse
    {
        $data = $this->extractJson($text);

        return new GreetingResponse($data['message'], (float) $data['tone']);
    }
}
```
`getJsonSchema()` returning `null` (the default) keeps the legacy `Prism::text()` + `extractJson()` path, so existing subclasses continue to work unchanged. See `docs/structured-output.md` for the full contract (event payload, fakes, error handling).
YAML lookup (in priority order):

- `$promptName` property: relative path from `prompts_path`
- Naming convention: `GreetingPrompt` → `greeting.yaml`
- `$promptsDirectory`: group prompts under a subdirectory

`getTemplatePath()` is overridable for full control.
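The naming convention (`GreetingPrompt` → `greeting.yaml`) can be sketched as a small standalone helper. This is not the package's actual resolver; the function name and the multi-word `snake_case` behaviour are assumptions for illustration:

```php
<?php
// Hypothetical sketch of the documented class-name-to-template convention.
// The package's real lookup may differ in details (namespaces, suffix handling).
function conventionalTemplateName(string $class): string
{
    // Strip any namespace, then drop a trailing "Prompt" suffix.
    $base = preg_replace('/Prompt$/', '', basename(str_replace('\\', '/', $class)));

    // Convert StudlyCase to snake_case and append the .yaml extension.
    $snake = strtolower(preg_replace('/(?<!^)[A-Z]/', '_$0', $base));

    return $snake . '.yaml';
}

echo conventionalTemplateName('App\\Prompts\\GreetingPrompt'), PHP_EOL; // greeting.yaml
echo conventionalTemplateName('GradeEssayPrompt'), PHP_EOL;             // grade_essay.yaml
```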
## Configuration reference

### `config/prism-prompt.php`

| Key | Default | Description |
|---|---|---|
| `default_provider` | `anthropic` | Default LLM provider for text generation |
| `default_model` | `claude-sonnet-4-5-20250929` | Default model for text generation |
| `default_max_tokens` | `4096` | Maximum tokens in LLM response |
| `default_temperature` | `0.7` | Response randomness (0.0 - 1.0) |
| `default_embedding_provider` | `openai` | Default provider for embeddings |
| `default_embedding_model` | `text-embedding-3-small` | Default model for embeddings |
| `prompts_path` | `resource_path('prompts')` | Base path for YAML templates |
| `cache.enabled` | `true` | Enable YAML template caching |
| `cache.ttl` | `3600` | Cache TTL in seconds |
| `cache.store` | `null` | Cache store (`null` = default) |
| `pool.concurrency` | `5` | Default PromptPool concurrency (env `PRISM_PROMPT_POOL_CONCURRENCY`) |
| `debug.enabled` | `false` | Auto-register PerformanceLogListener |
| `debug.log_channel` | `prism-prompt` | Log channel for debug output |
| `debug.save_files` | `false` | Auto-register PerformanceDebugFileListener |
| `debug.storage_path` | `storage_path('prism-prompt-debug')` | Directory for debug files |
### `config/prism-prompt-pricing.php`

| Key | Default | Description |
|---|---|---|
| `pricing_source` | `defaults_shipped` | Label embedded in every PricingSnapshot. Override via `PRISM_PROMPT_PRICING_SOURCE` |
| `unknown_model_behavior` | `zero` | `zero` returns a zero-cost snapshot; `throw` raises `InvalidArgumentException` |
| `models.{provider}.{model}` | Anthropic Claude set | Per-million-token rates: `input`, `output`, optional `cache_write` / `cache_read` |
## Documentation

In-depth topic guides live under `docs/`:

- `yaml-template.md`: YAML schema, message structure, override hierarchy
- `structured-output.md`: `getJsonSchema()` / `parseStructured()` for `Prism::structured()` (v0.15.0+)
- `providers.md`: multi-provider fallback, runtime API keys
- `prompt-injection.md`: `UserInput`, `DefensiveInstructions`
- `parallel-execution.md`: `PromptPool` with prompt caching
- `events-and-cost.md`: events, `withMetadata`, USD cost
- `testing.md`: `Prompt::fake()` + assertions
- `debug-logging.md`: listener-based debug logging
- `embedding.md`: `EmbeddingPrompt`
- `prompt-operation.md`: durable, resumable operations
Runnable examples are under examples/:
| File | Topic |
|---|---|
| `01-basic-system-prompt.php` | Quickest path with `Prompt::load()` |
| `02-json-dto-response.php` | Subclass + `extractJson()` → DTO (legacy text path) |
| `13-structured-output.php` | Subclass + `getJsonSchema()` → DTO (`Prism::structured`, v0.15.0+) |
| `03-conversation-history.php` | Native chat history via `buildConversationMessages()` |
| `04-testing.php` | Message-aware `Prompt::fake()` assertions |
| `05-events-and-cost.php` | `PromptExecutionCompleted` listener + cost log |
| `06-user-input-defense.php` | `UserInput` + `DefensiveInstructions` |
| `07-prompt-operation.php` | `PromptOperation` durable workflow |
| `08-multi-provider-fallback.php` | BYOK with auto provider selection |
| `09-prompt-pool-parallel.php` | 5-axis rubric grading via `PromptPool` |
| `10-embedding-rag.php` | RAG document indexing with `EmbeddingPrompt` |
| `11-chatbot-with-defense.php` | Chatbot combining history + `UserInput` + DTO |
| `12-bundle-pipeline.php` | Multi-prompt pipeline (NPC reply → eval → hint) |
## License

MIT
## Stats

- Total downloads: 91
- Monthly downloads: 0
- Daily downloads: 0
- Favorites: 1
- Views: 0
- Dependents: 0
- Suggesters: 0

## Other information

- License: MIT
- Updated: 2026-02-05