Graphorin API reference v0.1.0



adapters/llamacpp-server

Direct adapter for the upstream `llama-server` binary from the llama.cpp project. The binary speaks the OpenAI-compatible REST contract end-to-end (`POST /v1/chat/completions`, `POST /v1/completions`, `POST /v1/embeddings`); streaming uses `text/event-stream` chunks terminated by `data: [DONE]`, exactly matching the upstream OpenAI shape.
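Because the stream follows the upstream OpenAI shape, a consumer can accumulate events until the `data: [DONE]` sentinel. A minimal sketch of that parsing step (the function name is illustrative, not part of the Graphorin API):

```typescript
// Parse the SSE text llama-server emits when a completion request sets
// stream: true. Each event line is "data: <json>"; the stream ends with
// the sentinel "data: [DONE]".
export function parseSseChunks(raw: string): unknown[] {
  const payloads: unknown[] = [];
  for (const line of raw.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue; // skip blanks and comments
    const data = trimmed.slice("data:".length).trim();
    if (data === "[DONE]") break; // upstream OpenAI-style terminator
    payloads.push(JSON.parse(data));
  }
  return payloads;
}
```

Stopping at the sentinel rather than the socket close matters: `llama-server` keeps the connection reusable, so `[DONE]` is the only reliable end-of-response marker.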

The adapter shares a single `LocalProviderTrust` classifier with `ollamaAdapter` and `openAICompatibleAdapter`: one classifier, one policy table, one error type.
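The `LocalProviderTrust` signature is not shown on this page; a hedged sketch of the shared-classifier pattern, assuming it maps a base URL to a trust tier consulted by every local adapter, might look like:

```typescript
// Hypothetical sketch only: the real LocalProviderTrust API is not
// documented here. Assumption: one policy table maps hostnames to a
// trust tier, and all three adapters throw the same error type.
type TrustTier = "local" | "remote";

class LocalProviderTrustError extends Error {}

function classifyBaseUrl(baseUrl: string): TrustTier {
  let host: string;
  try {
    host = new URL(baseUrl).hostname;
  } catch {
    throw new LocalProviderTrustError(`invalid base URL: ${baseUrl}`);
  }
  // Single policy table shared by llamaCppServerAdapter, ollamaAdapter
  // and openAICompatibleAdapter.
  const localHosts = new Set(["localhost", "127.0.0.1", "::1", "[::1]"]);
  return localHosts.has(host) ? "local" : "remote";
}
```

Centralizing the table this way means a policy change (say, trusting a LAN CIDR) lands in one place for all three adapters.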

Interfaces

| Interface | Description |
| --- | --- |
| `LlamaCppServerAdapterOptions` | Options accepted by `llamaCppServerAdapter`. |

Variables

| Variable | Description |
| --- | --- |
| `DEFAULT_LLAMACPP_SERVER_BASE_URL` | Default base URL (and therefore default port) used by the upstream `llama-server` binary. |

Functions

| Function | Description |
| --- | --- |
| `llamaCppServerAdapter` | Build a Graphorin `Provider` backed by the upstream `llama-server` binary. The factory does not start the binary; operators launch it themselves with the desired model and GPU flags and pass the URL here. |