llama_cpp_sdk
v0.1.0
First concrete self-hosted inference backend for llama.cpp, owning llama-server boot specs, readiness probes, stop semantics, and endpoint publication through self_hosted_inference_core.
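The description names four responsibilities: boot specs for the llama-server process, readiness probes, stop semantics, and endpoint publication. As a rough illustration of what the first three involve (this is not this package's API; every module, function, and option name below is hypothetical, written against plain OTP), a backend could supervise llama-server like this:

    # Illustrative only: a hand-rolled OTP sketch of boot spec, readiness
    # probing, and stop semantics for a llama-server process. None of these
    # module, function, or option names come from llama_cpp_sdk itself.
    defmodule LlamaServerSketch do
      use GenServer

      # Hypothetical "boot spec": executable (assumed on PATH), model file,
      # and the HTTP port llama-server should listen on.
      @defaults [bin: "llama-server", model: "model.gguf", port: 8080]

      def start_link(opts \\ []) do
        GenServer.start_link(__MODULE__, Keyword.merge(@defaults, opts))
      end

      @impl true
      def init(opts) do
        {:ok, _} = Application.ensure_all_started(:inets)
        args = ["-m", opts[:model], "--port", Integer.to_string(opts[:port])]

        port =
          Port.open(
            {:spawn_executable, System.find_executable(opts[:bin])},
            [:binary, :exit_status, args: args]
          )

        send(self(), :probe)
        {:ok, %{port: port, base: "http://127.0.0.1:#{opts[:port]}", ready?: false}}
      end

      @impl true
      def handle_info(:probe, %{ready?: false} = state) do
        # Readiness probe: llama-server answers GET /health with 200 once
        # the model has finished loading; poll until then.
        url = String.to_charlist(state.base <> "/health")

        case :httpc.request(:get, {url, []}, [], []) do
          {:ok, {{_, 200, _}, _, _}} ->
            {:noreply, %{state | ready?: true}}

          _ ->
            Process.send_after(self(), :probe, 500)
            {:noreply, state}
        end
      end

      def handle_info({port, {:data, _log_line}}, %{port: port} = state) do
        # Ignore llama-server's stdout chatter in this sketch.
        {:noreply, state}
      end

      def handle_info({port, {:exit_status, status}}, %{port: port} = state) do
        # The OS process died; crash so a supervisor can apply its restart policy.
        {:stop, {:llama_server_exited, status}, state}
      end

      @impl true
      def terminate(_reason, %{port: port}) do
        # Stop semantics, simplified: closing the port detaches the process;
        # a production backend would send SIGTERM and escalate after a timeout.
        if Port.info(port), do: Port.close(port)
        :ok
      end
    end

Starting it with LlamaServerSketch.start_link(model: "/models/some-model.gguf") boots the server and marks the backend ready once /health succeeds; the real package presumably layers endpoint publication through self_hosted_inference_core on top of this kind of lifecycle.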
Dependency Config
Hex renders one install snippet per supported build tool: mix.exs, rebar.config, Gleam, and erlang.mk (reconstructed below).
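These follow Hex's standard snippet formats for this package and version; they are reconstructed here rather than copied from the live page, so treat the Gleam and erlang.mk lines in particular as assumptions.

mix.exs:

    def deps do
      [
        {:llama_cpp_sdk, "~> 0.1.0"}
      ]
    end

rebar.config:

    {deps, [{llama_cpp_sdk, "0.1.0"}]}

Gleam (via the CLI):

    gleam add llama_cpp_sdk@0.1.0

erlang.mk:

    DEPS = llama_cpp_sdk
    dep_llama_cpp_sdk = hex 0.1.0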
Package Details
Downloads: 63 this version, 0 yesterday, 2 last 7 days, 63 all time