Since v0.11.2

Inference Receipts

Hash-first interaction evidence for chat completion APIs. Prompts and outputs are reduced to SHA-256 digests before receipt issuance. No raw text is stored in the receipt. Privacy by default.

Package: @peac/adapter-openai-compatible

Hash-First Model

Privacy by Design (DD-138)

No raw prompt or completion text is stored in the receipt. All content is reduced to SHA-256 digests before the receipt is issued. A verifier with the original content can recompute the digest to confirm the receipt covers that specific interaction.
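As an illustration, a verifier-side recomputation can be sketched as below. Note that the serialization used here (a plain JSON.stringify of the messages) is an assumption for illustration only; real verification should use the package's own hashMessages() and hashOutput(), which define the canonical serialization.

```typescript
import { createHash } from 'crypto';

// Minimal sketch of the hash-first reduction. The serialization
// (plain JSON.stringify) is an assumption; the adapter's canonical
// form is defined by hashMessages()/hashOutput().
function sha256Digest(value: unknown): string {
  const hex = createHash('sha256')
    .update(JSON.stringify(value), 'utf8')
    .digest('hex');
  return `sha256:${hex}`; // the sha256:<hex64> format used in receipts
}

const messages = [{ role: 'user', content: 'What is the meaning of life?' }];
console.log(sha256Digest(messages)); // sha256: followed by 64 hex chars
```

A verifier holding the original messages recomputes this digest and compares it to the digest embedded in the receipt; a match confirms the receipt covers exactly that content.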

Install

pnpm add @peac/adapter-openai-compatible

What Is Recorded

Field                     Format           Description
input.digest              sha256:<hex64>   SHA-256 of the serialized input messages
output.digest             sha256:<hex64>   SHA-256 of the serialized output content
executor.platform         string           Inference platform identifier
executor.model            string           Model identifier from the completion response
extensions.model_id       string           Full model identifier string
extensions.token_counts   object           Token usage: prompt_tokens, completion_tokens, total_tokens
extensions.finish_reason  string           Completion finish reason (such as stop, length)

What Is NOT Recorded

  • Raw prompt text or system instructions
  • Raw completion output text
  • Function call arguments or results in plaintext
  • Any content that could be used to reconstruct the original conversation

Only SHA-256 digests and metadata (model, tokens, finish reason) appear in the receipt.
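Assembled from the fields above, a hypothetical evidence payload might look like the following (digest values shortened for display; the nesting is inferred from the field paths in the table):

```json
{
  "input":  { "digest": "sha256:a1b2c3..." },
  "output": { "digest": "sha256:d4e5f6..." },
  "executor": { "platform": "openai", "model": "gpt-4o" },
  "extensions": {
    "model_id": "gpt-4o",
    "token_counts": { "prompt_tokens": 25, "completion_tokens": 8, "total_tokens": 33 },
    "finish_reason": "stop"
  }
}
```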

API

Function              Description
fromChatCompletion()  Maps a chat completion response to interaction evidence with hashed input/output
hashMessages()        Returns a sha256:<hex64> digest of the serialized messages array
hashOutput()          Returns a sha256:<hex64> digest of the serialized completion output

Usage Example

inference-evidence.ts
import { fromChatCompletion } from '@peac/adapter-openai-compatible';
import { issue } from '@peac/protocol';

// Chat completion response (OpenAI-compatible format)
const completion = {
  id: 'chatcmpl-abc123',
  model: 'gpt-4o',
  choices: [{
    index: 0,
    message: { role: 'assistant', content: 'The answer is 42.' },
    finish_reason: 'stop',
  }],
  usage: {
    prompt_tokens: 25,
    completion_tokens: 8,
    total_tokens: 33,
  },
};

const messages = [
  { role: 'user', content: 'What is the meaning of life?' },
];

// Map to interaction evidence (hash-first)
const evidence = await fromChatCompletion({
  messages,
  completion,
  platform: 'openai',
});

// evidence.input.digest  = 'sha256:a1b2c3...'
// evidence.output.digest = 'sha256:d4e5f6...'
// evidence.executor      = { platform: 'openai', model: 'gpt-4o' }
// evidence.extensions    = { model_id: 'gpt-4o', token_counts: {...}, finish_reason: 'stop' }

// Issue a signed receipt with the evidence
const receipt = await issue({
  iss: 'https://inference.example.com',
  aud: 'agent.consumer.com',
  evidence,
});

Compatibility

Works with any provider that returns an OpenAI-compatible chat completion response. The adapter reads the standard choices, model, and usage fields from the response object.

Implementation examples

Provider                        Notes
OpenAI                          Direct API support via /v1/chat/completions
Ollama                          Local inference with an OpenAI-compatible endpoint
vLLM                            OpenAI-compatible server mode
Together                        OpenAI-compatible API
Any OpenAI-compatible endpoint  Any server implementing the chat completions response schema

Streaming Support

Streaming chat completions (requests with stream: true, delivered as SSE) are not yet supported. The adapter currently requires the complete response object. Streaming support is planned for v0.11.3.
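Until streaming support lands, one workaround is to accumulate the streamed deltas into a complete response object and hand that to the adapter. The sketch below is a hypothetical helper (accumulateChunks is not part of this package); the chunk shape follows the OpenAI-style chat completion chunk schema, and note that many providers omit usage on streamed responses, so token_counts may be unavailable.

```typescript
// Hypothetical accumulator: folds OpenAI-style streaming chunks
// (choices[0].delta fragments) into a complete, non-streaming
// response shape that the adapter expects.
interface StreamChunk {
  id: string;
  model: string;
  choices: {
    index: number;
    delta: { role?: string; content?: string };
    finish_reason: string | null;
  }[];
}

function accumulateChunks(chunks: StreamChunk[]) {
  let content = '';
  let finishReason: string | null = null;
  for (const chunk of chunks) {
    const choice = chunk.choices[0];
    if (choice?.delta?.content) content += choice.delta.content;
    if (choice?.finish_reason) finishReason = choice.finish_reason;
  }
  const last = chunks[chunks.length - 1];
  return {
    id: last.id,
    model: last.model,
    choices: [{
      index: 0,
      message: { role: 'assistant', content },
      finish_reason: finishReason,
    }],
    // usage is typically absent on streams; token_counts may be missing.
  };
}
```

The returned object can then be passed as the completion argument to fromChatCompletion() as in the usage example above.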

Digest Verification

A verifier who has the original messages and completion can recompute the digests to confirm the receipt covers that interaction:

verify-digest.ts
import { hashMessages, hashOutput } from '@peac/adapter-openai-compatible';

const inputDigest = await hashMessages(originalMessages);
const outputDigest = await hashOutput(completion);

// Compare against the digests in the receipt
if (inputDigest !== receipt.evidence.input.digest) {
  throw new Error('Input digest mismatch');
}
if (outputDigest !== receipt.evidence.output.digest) {
  throw new Error('Output digest mismatch');
}

Evidence Without Exposure

Inference receipts prove that a specific interaction occurred with a specific model at a specific time, without exposing the content of the conversation. The SHA-256 digest binds the receipt to the interaction; a party with the original content can verify the binding, but the receipt alone reveals nothing about the prompt or response.