self_hosted_inference_core
v0.1.0
Service-runtime kernel for self-hosted inference backends, owning readiness, health, lease reuse, and endpoint publication above the transport seam.
Packages depending on self_hosted_inference_core
1 package:
- First concrete self-hosted inference backend for llama.cpp, owning llama-server boot specs, readiness probes, stop semantics, and endpoint publication through self_hosted_inference_core. Published 2 weeks ago. Downloads: 61 recent, 61 all time.
Dependency Config (snippets offered for mix.exs, rebar.config, Gleam, and erlang.mk)
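The dependency snippets behind those tabs were not captured. As a sketch, assuming standard Hex conventions and the v0.1.0 release noted above, the mix.exs entry would look like:

```elixir
# In mix.exs: add the package to the deps list.
# The "~> 0.1.0" constraint is assumed from the version shown above.
defp deps do
  [
    {:self_hosted_inference_core, "~> 0.1.0"}
  ]
end
```

The `~>` operator pins to compatible patch releases (>= 0.1.0 and < 0.2.0), which is the usual constraint Hex suggests for a pre-1.0 package.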
Package Details
- Downloads (this version): 90
- Downloads (yesterday): 0
- Downloads (last 7 days): 0
- Downloads (all time): 90