# Rust
The Rust workspace contains Enki's core runtime and the language bindings built on top of it. If you are consuming Enki from Rust, the crate you use is `enki-next`, imported as `enki_next`.
## Rust package

The Rust crate in this workspace is packaged as `enki-next` and imported in Rust as `enki_next`.
After publishing, consumers can depend on it like this:
```toml
[dependencies]
enki_next = { package = "enki-next", version = "0.5.81" }
```
If you want the bundled universal LLM provider, enable the feature explicitly:
```toml
[dependencies]
enki_next = { package = "enki-next", version = "0.5.81", features = ["universal-llm-provider"] }
```
## Choose the Rust API
These are the main public Rust entry points:
- `enki_next::agent::Agent`: low-level agent type when you want direct control over the agent instance, tool executor, workspace, and loop.
- `enki_next::runtime::RuntimeBuilder`: the easiest way to assemble a single-agent runtime with custom tools, memory, workspace, and an injected LLM provider.
- `enki_next::runtime::MultiAgentRuntime`: multi-agent orchestration with agent discovery and delegation.
- `enki_next::workflow::WorkflowRuntime`: persisted DAG-style workflows with reusable tasks, inline tasks, transforms, decisions, joins, resume, and intervention handling.
To get started from Rust code, start with `RuntimeBuilder` for single-agent execution or `WorkflowRuntime` for structured orchestration.
## Single-agent runtime
For application-owned Rust integrations, `RuntimeBuilder` is the cleanest single-agent entry point:
```rust
use enki_next::agent::AgentDefinition;
use enki_next::runtime::RuntimeBuilder;

let runtime = RuntimeBuilder::new(AgentDefinition {
    name: "Rust Assistant".to_string(),
    system_prompt_preamble: "You are a concise Rust assistant.".to_string(),
    model: "openai::gpt-4o-mini".to_string(),
    max_iterations: 8,
})
.with_workspace_home("./.enki")
.build()
.await?;
```
`RuntimeBuilder` lets you:

- set the model with `AgentDefinition.model` or `.with_model(...)`
- inject a custom provider with `.with_llm(...)`
- add tools with `.register_tool(...)` or `.with_tool_registry(...)`
- override the task workspace with `.with_workspace_home(...)`
- add a custom memory manager with `.with_memory(...)`
The runtime automatically injects the intrinsic `ask_human` tool so agents can pause for human input when you use the human-aware runtime methods.
## Multi-agent runtime
Use `MultiAgentRuntime::builder()` when you want multiple named agents that can discover peers and delegate work:
```rust
use enki_next::agent::AgentDefinition;
use enki_next::runtime::MultiAgentRuntime;

let runtime = MultiAgentRuntime::builder()
    .add_agent(
        "coordinator",
        AgentDefinition {
            name: "Coordinator".to_string(),
            system_prompt_preamble: "Route research tasks to other agents.".to_string(),
            model: "openai::gpt-4o-mini".to_string(),
            max_iterations: 8,
        },
        vec!["planning".to_string()],
    )
    .add_agent(
        "researcher",
        AgentDefinition {
            name: "Researcher".to_string(),
            system_prompt_preamble: "Read files and summarize findings.".to_string(),
            model: "openai::gpt-4o-mini".to_string(),
            max_iterations: 8,
        },
        vec!["research".to_string()],
    )
    .with_workspace_home("./.enki")
    .build()
    .await?;
```
This runtime injects the `discover_agents` and `delegate_task` tools so agents can route work across the shared registry.
## Workspace layout
The main crates in this repository are:
- `crates/core`: Rust agent runtime, memory system, tool execution, LLM provider abstraction, and CLI entrypoint
- `crates/builder`: the `enki` CLI for manifest-driven projects and interactive sessions
- `crates/bindings/enki-py`: UniFFI-based Python bindings
- `crates/bindings/enki-js`: native Node.js bindings built with `napi-rs`
## What the runtime provides
The Rust core is responsible for:
- Agent execution and iteration control
- Session and workspace state management
- Memory handling
- Tool execution
- Workflow DAG execution with persisted run state, resume support, and intervention handling
- Human-in-the-loop support through the intrinsic `ask_human` tool
- Execution tracing via per-step `ExecutionStep` events
- Provider/model resolution using the `provider::model` format
Examples of model strings used in this workspace:
- `ollama::qwen3.5`
- `openai::gpt-4o`
- `anthropic::claude-3-opus-20240229`
- `google::gemini-3.1-pro-preview`
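The `provider::model` convention keeps the provider and the model name in one string, separated by `::`. As a minimal sketch of splitting such a string (a hypothetical helper, not the runtime's actual parser):

```rust
/// Split a `provider::model` string into its provider and model parts.
/// Hypothetical helper for illustration; the runtime's own resolution logic may differ.
fn split_model(spec: &str) -> Option<(&str, &str)> {
    // `split_once` returns the text before and after the first `::`.
    spec.split_once("::")
}
```

For example, `split_model("openai::gpt-4o")` yields `Some(("openai", "gpt-4o"))`, while a string without the separator yields `None`.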
## Run the crate examples
`crates/core/examples` contains runnable Rust examples for the main crate-level APIs:
- `cargo run -p enki-next --example simple_agent -- "Summarize this repository"`: low-level `Agent` setup with a task workspace.
- `cargo run -p enki-next --example runtime_builder`: `RuntimeBuilder` plus a custom Rust tool and a mock LLM provider.
- `cargo run -p enki-next --example multi_agent -- "Summarize the repository structure"`: multi-agent runtime with discovery and delegation.
- `cargo run -p enki-next --example workflow`: persisted workflow runtime using a mock task runner instead of a live model provider.
The `simple_agent` and `multi_agent` examples expect either `ENKI_MODEL`, `AgentDefinition.model`, or an injected provider. The `runtime_builder` and `workflow` examples are self-contained and can be run without external model credentials.
## Run the detached library examples
If you want examples that look like a separate application crate consuming `enki-next` as a library, use `example/enki-rs`.
These examples use a path dependency on `../../crates/core`, define their own `Cargo.toml`, and run outside the workspace package graph:
- `cargo run --manifest-path example/enki-rs/Cargo.toml --bin runtime_builder_detailed`
- `cargo run --manifest-path example/enki-rs/Cargo.toml --bin multi_agent_detailed`
- `cargo run --manifest-path example/enki-rs/Cargo.toml --bin workflow_detailed`
This is the best reference when you want to embed Enki in your own Rust binary instead of extending the internal crate examples.
## Build the workspace
From the repository root:
```shell
cargo build
cargo test
```
## Core binary
The low-level core binary expects:
```shell
core <session_id> "<message>"
```
Example:
```shell
cargo run -p enki-next -- session-1 "Summarize the repository structure"
```
If you do not inject an LLM in code, the runtime resolves the model from `ENKI_MODEL`.
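That lookup order (an explicitly configured model first, then the `ENKI_MODEL` environment variable) can be sketched as a small helper; `resolve_model` here is hypothetical and only illustrates the fallback described above, not the runtime's actual code:

```rust
use std::env;

/// Prefer an explicitly configured model string; otherwise fall back to the
/// ENKI_MODEL environment variable. Hypothetical helper for illustration only.
fn resolve_model(explicit: Option<&str>) -> Option<String> {
    explicit
        .map(|m| m.to_string())
        .or_else(|| env::var("ENKI_MODEL").ok())
}
```

A caller that builds an agent in code would pass `Some("openai::gpt-4o-mini")` and never touch the environment; only a `None` falls through to `ENKI_MODEL`.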
## Builder CLI
For local app-style workflows, use the `enki` builder crate instead of the low-level core binary.
Current commands include:
- `enki init`
- `enki build`
- `enki run`
- `enki test`
- `enki monitor`
- `enki join`
- `enki tool new`
- `enki agent add`
See Builder CLI for the manifest format and command flow.