Version: v0.11.2

Inference Receipts

The @peac/adapter-openai-compatible package creates hash-first interaction evidence from chat completion API responses. It records SHA-256 digests of prompts and outputs -- no raw text is ever stored in the receipt.

Privacy by default

Inference receipts use SHA-256 hashes exclusively. The receipt proves that a specific interaction occurred without revealing the content of prompts or completions.


Install

Terminal
pnpm add @peac/adapter-openai-compatible

Quick start

inference-evidence.ts
import OpenAI from 'openai';
import { fromChatCompletion } from '@peac/adapter-openai-compatible';

const openai = new OpenAI();

const messages = [{ role: 'user' as const, content: 'Explain PEAC receipts' }];

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages,
});

// Pass the original messages so the adapter can compute the input digest
const evidence = fromChatCompletion(response, { messages });

The evidence object contains:

  • Model ID -- which model produced the output
  • Token counts -- prompt and completion token usage
  • Timing -- response timestamps
  • Input digest -- sha256(canonicalized_messages)
  • Output digest -- sha256(completion_content)
  • Finish reason -- stop, length, tool_calls, etc.
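The fields above can be pictured as a plain object. The sketch below is illustrative only: the field names follow the table on this page, but the exact TypeScript types are an assumption, not the package's published API.

```typescript
// Illustrative shape of the evidence object -- field names follow this page;
// the precise types are an assumption, not the package's declared interface.
interface InferenceEvidence {
  model: string;             // e.g. 'gpt-4o'
  usage: {
    prompt_tokens: number;
    completion_tokens: number;
  };
  created: number;           // response timestamp (Unix seconds)
  finish_reason: string;     // 'stop' | 'length' | 'tool_calls' | ...
  input_digest: string;      // sha256(canonicalized_messages), hex-encoded
  output_digest: string;     // sha256(completion_content), hex-encoded
}

const example: InferenceEvidence = {
  model: 'gpt-4o',
  usage: { prompt_tokens: 12, completion_tokens: 87 },
  created: 1717000000,
  finish_reason: 'stop',
  input_digest: 'a'.repeat(64),   // placeholder 64-char hex digest
  output_digest: 'b'.repeat(64),  // placeholder 64-char hex digest
};
```

Note that the digests are fixed-length hex strings regardless of how long the prompt or completion was, which is what keeps the receipt small and content-free.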

What is recorded

Field                      Source                  Privacy
model                      Response model field    Visible
usage.prompt_tokens        Token count             Visible
usage.completion_tokens    Token count             Visible
finish_reason              Completion choice       Visible
input_digest               sha256(messages)        Hash only
output_digest              sha256(completion)      Hash only
created                    Response timestamp      Visible

What is NOT recorded

  • Raw prompt text
  • Raw completion text
  • System instructions
  • Function/tool definitions
  • Image or audio content

Hashing functions

For advanced use cases, the individual hashing functions are exported:

hashing.ts
import { hashMessages, hashOutput } from '@peac/adapter-openai-compatible';

const messages = [{ role: 'user', content: 'Explain PEAC receipts' }];

// Hash the input messages
const inputDigest = hashMessages(messages);

// Hash the completion output; `completion` is any OpenAI-compatible
// chat completion response
const outputDigest = hashOutput(completion.choices[0].message.content);

Both functions use SHA-256 with deterministic canonicalization to ensure the same content always produces the same digest.
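The canonicalize-then-hash idea can be sketched with Node's built-in crypto. This is an illustration of the approach under stated assumptions (a fixed key order via JSON serialization), not the package's actual canonicalization, which may differ:

```typescript
import { createHash } from 'node:crypto';

type ChatMessage = { role: string; content: string };

// Sketch: serialize each message with keys in a fixed order so that
// logically equal inputs always produce byte-identical JSON.
// (Illustrative only; the package's real canonicalization may differ.)
function canonicalize(messages: ChatMessage[]): string {
  return JSON.stringify(
    messages.map((m) => ({ role: m.role, content: m.content })),
  );
}

function sha256Hex(text: string): string {
  return createHash('sha256').update(text, 'utf8').digest('hex');
}

const msgs: ChatMessage[] = [
  { role: 'user', content: 'Explain PEAC receipts' },
];

const digestA = sha256Hex(canonicalize(msgs));
const digestB = sha256Hex(canonicalize([...msgs])); // same content, new array

console.log(digestA === digestB); // true -- determinism is what matters
```

Determinism is the property the adapter depends on: if two serializations of the same messages differed in whitespace or key order, the same conversation would yield different digests and verification would fail.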


Provider compatibility

The adapter works with any provider that returns OpenAI-compatible chat completion responses:

  • OpenAI (GPT-4o, GPT-4, etc.)
  • Anthropic (via OpenAI-compatible endpoint)
  • Ollama (local models)
  • vLLM (self-hosted)
  • Together AI
  • Any provider with OpenAI-compatible API format

Category-first

The adapter operates on the response format, not the provider. Any API that returns a standard chat completion response object will work.


Attaching to a receipt

issue-inference-receipt.ts
import { issueReceipt } from '@peac/protocol';
import { fromChatCompletion } from '@peac/adapter-openai-compatible';

const evidence = fromChatCompletion(response, { messages });

const receipt = await issueReceipt({
  privateKey: process.env.PEAC_PRIVATE_KEY,
  kid: 'peac-2026-02',
  claims: {
    iss: 'https://api.example.com',
    sub: 'agent:assistant-1',
    peac: {
      type: 'inference',
      attestation_type: 'interaction',
      status: 'executed',
      extensions: {
        'org.peacprotocol/inference@0.1': evidence,
      },
    },
  },
});

Next steps