# Inference Receipts
The @peac/adapter-openai-compatible package creates hash-first interaction evidence from chat completion API responses. It records SHA-256 digests of prompts and outputs -- no raw text is ever stored in the receipt.
The receipt proves that a specific interaction occurred without revealing the content of prompts or completions.
## Install

```shell
pnpm add @peac/adapter-openai-compatible
```
## Quick start

```ts
import { fromChatCompletion } from '@peac/adapter-openai-compatible';

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Explain PEAC receipts' }],
});

// Pass the request messages so the adapter can compute the input digest
const evidence = fromChatCompletion(response, {
  messages: [{ role: 'user', content: 'Explain PEAC receipts' }],
});
```
The evidence object contains:
- Model ID -- which model produced the output
- Token counts -- prompt and completion token usage
- Timing -- response timestamps
- Input digest -- `sha256(canonicalized_messages)`
- Output digest -- `sha256(completion_content)`
- Finish reason -- `stop`, `length`, `tool_calls`, etc.
## What is recorded

| Field | Source | Privacy |
|---|---|---|
| `model` | Response model field | Visible |
| `usage.prompt_tokens` | Token count | Visible |
| `usage.completion_tokens` | Token count | Visible |
| `finish_reason` | Completion choice | Visible |
| `input_digest` | `sha256(messages)` | Hash only |
| `output_digest` | `sha256(completion)` | Hash only |
| `created` | Response timestamp | Visible |
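Taken together, the recorded fields form a compact evidence object. The literal below is a hypothetical illustration assembled from the table above; the field names follow the table, the digest values are stand-ins (both are the SHA-256 of the string `"test"`), and the adapter's actual output shape may differ.

```typescript
// Hypothetical evidence object built from the fields in the table above.
// The digests are placeholders (sha256 of "test"); real digests cover the
// canonicalized request messages and the completion content.
const evidence = {
  model: 'gpt-4o',
  usage: { prompt_tokens: 12, completion_tokens: 87 },
  finish_reason: 'stop',
  input_digest: '9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08',
  output_digest: '9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08',
  created: 1735689600, // Unix timestamp from the response
};

// Note what is absent: no message text, no completion text, no tool definitions.
```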
## What is NOT recorded
- Raw prompt text
- Raw completion text
- System instructions
- Function/tool definitions
- Image or audio content
## Hashing functions

For advanced use cases, the individual hashing functions are exported:

```ts
import { hashMessages, hashOutput } from '@peac/adapter-openai-compatible';

// Hash the input messages
const inputDigest = hashMessages(messages);

// Hash the completion output
const outputDigest = hashOutput(completion.choices[0].message.content);
```
Both functions use SHA-256 with deterministic canonicalization to ensure the same content always produces the same digest.
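The determinism guarantee can be illustrated with a minimal sketch using Node's built-in crypto module. The canonicalization shown here (serializing role/content pairs in a fixed key order) is an assumption for illustration only; the adapter's actual canonicalization rules may differ.

```typescript
import { createHash } from 'node:crypto';

type Message = { role: string; content: string };

// Illustrative canonicalization: serialize each message with keys in a
// fixed order so equivalent inputs always produce identical bytes.
function canonicalize(messages: Message[]): string {
  return JSON.stringify(messages.map((m) => ({ role: m.role, content: m.content })));
}

function sha256Hex(text: string): string {
  return createHash('sha256').update(text, 'utf8').digest('hex');
}

const messages: Message[] = [{ role: 'user', content: 'Explain PEAC receipts' }];

// Hashing the same content twice yields the same 64-character hex digest
const a = sha256Hex(canonicalize(messages));
const b = sha256Hex(canonicalize(messages));
```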
## Provider compatibility
The adapter works with any provider that returns OpenAI-compatible chat completion responses:
- OpenAI (GPT-4o, GPT-4, etc.)
- Anthropic (via OpenAI-compatible endpoint)
- Ollama (local models)
- vLLM (self-hosted)
- Together AI
- Any provider with OpenAI-compatible API format
The adapter operates on the response format, not the provider. Any API that returns a standard chat completion response object will work.
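Because only the response format matters, swapping providers usually means changing nothing but the base URL while the request payload stays the same. The sketch below builds a standard chat-completion request body for a local endpoint; the URL and model name are assumptions for illustration.

```typescript
// Hypothetical local endpoint (Ollama's default OpenAI-compatible port);
// any provider with an OpenAI-compatible API works the same way.
const baseURL = 'http://localhost:11434/v1';

// The request body is the standard chat-completion payload regardless of provider
const body = {
  model: 'llama3.1', // assumed local model name
  messages: [{ role: 'user', content: 'Explain PEAC receipts' }],
};

// e.g. POST `${baseURL}/chat/completions` with this body; the JSON response
// can then be passed to fromChatCompletion() unchanged.
```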
## Attaching to a receipt

```ts
import { issueReceipt } from '@peac/protocol';
import { fromChatCompletion } from '@peac/adapter-openai-compatible';

const evidence = fromChatCompletion(response, { messages });

const receipt = await issueReceipt({
  privateKey: process.env.PEAC_PRIVATE_KEY,
  kid: 'peac-2026-02',
  claims: {
    iss: 'https://api.example.com',
    sub: 'agent:assistant-1',
    peac: {
      type: 'inference',
      attestation_type: 'interaction',
      status: 'executed',
      extensions: {
        'org.peacprotocol/inference@0.1': evidence,
      },
    },
  },
});
```