# Write an arka-deck worker
A worker is a single-shot headless LLM process. It receives structured JSON, produces structured JSON, and exits. No chat, no multi-turn, no direct user interface.
## Principle

Workers handle tasks that need an LLM but no interaction: analyze a project context, draft a note, propose a squad composition.
Worker cycle:

1. arka-deck’s system invokes it with a JSON payload (via the `for-workers.ts` port)
2. The worker calls the configured LLM with its prompt + payload
3. It returns structured JSON
4. The parent addon receives the result and uses it
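The cycle above can be sketched in TypeScript. This is a minimal illustration of the single-shot contract, not arka-deck’s actual API — the names `WorkerInput`, `WorkerResult`, `runWorker`, and `callLlm` are all hypothetical.

```typescript
// Illustrative sketch of a single-shot worker: payload in, one LLM call,
// parsed JSON out, then the process exits. Not arka-deck's real runtime.
type WorkerInput = Record<string, unknown>;   // JSON payload from the parent addon
type WorkerResult = Record<string, unknown>;  // structured JSON handed back

async function runWorker(
  callLlm: (prompt: string, payload: WorkerInput) => Promise<string>,
  prompt: string,
  payload: WorkerInput,
): Promise<WorkerResult> {
  const raw = await callLlm(prompt, payload); // single call — no follow-up turns
  return JSON.parse(raw) as WorkerResult;     // strict JSON: parse or throw
}
```

There is deliberately no loop and no conversation state: a failed parse is a failed run, and retry policy belongs to the parent addon.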
Existing workers and their parent addon:
| Worker | Parent addon | Status | Role |
|---|---|---|---|
| `cortex-favorites-suggester` | `cortex-actions` | active | Proposes 5-15 Cortex artifacts relevant to a project |
| `memory-note-writer` | `memory-local` | active | Drafts an L1-L5 memory note from a session transcript |
| `governance-policy-designer` | `gouvernance-lite` | active | Assists project governance policy design |
| `squad-creator` | `squad-leader` | active | Proposes a squad composition from a mission |
## Worker structure

A worker is declared by a `manifest.json`. The runtime implementation (LLM invocation, JSON parsing) is handled by arka-deck — the worker itself contains only its declaration and prompt.
```
workers/<name>/
├── manifest.json   # required
├── prompt.json     # if the prompt is local (optional — otherwise via Cortex)
└──                 # worker description
```

### manifest.json

```json
{
  "name": "my-worker",
  "version": "1.0.0",
  "description": "What the worker does in one sentence.",
  "status": "active",
  "input": "Expected JSON payload description.",
  "output": "Produced JSON description.",
  "addon_parent": "my-addon",
  "runtime": "claude-headless",
  "model": "haiku",
  "service": {
    "famille_cortex": "SERVICE",
    "sous_famille_cortex": "service/service/my_worker",
    "tool_name": "my_worker_template",
    "prompt_source": "cortex://atoms/my_worker_prompt",
    "prompt_atom_id": "atom:my_worker_prompt",
    "preferred_model": "haiku",
    "model_override_via": "Settings > Services (user config)"
  }
}
```

### Required fields

| Field | Type | Description |
|---|---|---|
| `name` | string | kebab-case identifier — must match the folder name |
| `version` | string | Semver |
| `description` | string | What the worker does |
| `status` | string | `"active"` / `"draft"` |
| `input` | string | Input JSON description (prose format or pseudo-schema) |
| `output` | string | Output JSON description |
| `addon_parent` | string | Name of the addon that owns this worker (`null` if standalone) |
| `runtime` | string | See available runtimes below |
| `service.prompt_source` | string | Prompt source — see formats below |
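A minimal validator for the required fields above can make the contract concrete. The field names come from the table; the `WorkerManifest` interface and `missingFields` helper are illustrative sketches, not part of arka-deck.

```typescript
// Sketch of a required-field check for a worker manifest, per the table above.
// Hypothetical helper — arka-deck's own validation may differ.
interface WorkerManifest {
  name: string;
  version: string;
  description: string;
  status: 'active' | 'draft';
  input: string;
  output: string;
  addon_parent: string | null; // null when the worker is standalone
  runtime: string;
  service: { prompt_source: string };
}

function missingFields(m: Partial<WorkerManifest>): string[] {
  const required: (keyof WorkerManifest)[] = [
    'name', 'version', 'description', 'status',
    'input', 'output', 'addon_parent', 'runtime',
  ];
  // `in` keeps addon_parent valid when explicitly null
  const missing: string[] = required.filter((k) => !(k in m));
  if (!m.service?.prompt_source) missing.push('service.prompt_source');
  return missing;
}
```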
## Available runtimes

| Runtime | Description |
|---|---|
| `claude-headless` | Claude call via the Anthropic SDK — ideal for short analysis tasks |
| `llm-json` | Multi-provider LLM call in strict JSON mode — returns a parsed JSON object |
## Prompt sources

| Format | Example | Description |
|---|---|---|
| `cortex://atoms/<id>` | `cortex://atoms/notewriter_note_prompt` | Public Cortex atom — fetched at runtime |
| Local path | `workers/squad-creator/prompt.json` | Local JSON file in the repo |
The Cortex format is preferable for active workers: the prompt is versioned in Cortex and can be updated without modifying the repo.
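The two formats in the table above can be told apart by prefix. A sketch, assuming a simple string check — the `parsePromptSource` helper and `PromptSource` type are hypothetical, not arka-deck code:

```typescript
// Distinguish the two prompt_source formats: cortex:// atom vs local repo path.
// Illustrative resolver only.
type PromptSource =
  | { kind: 'cortex'; atomId: string }
  | { kind: 'local'; path: string };

function parsePromptSource(source: string): PromptSource {
  const prefix = 'cortex://atoms/';
  if (source.startsWith(prefix)) {
    return { kind: 'cortex', atomId: source.slice(prefix.length) };
  }
  return { kind: 'local', path: source };
}
```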
## Examples of existing workers

### cortex-favorites-suggester
```json
{
  "name": "cortex-favorites-suggester",
  "runtime": "claude-headless",
  "model": "haiku",
  "input": "JSON: { project: { name, description }, agents: [{label, sourceId}], recentSessions: [{summary, agentLabel}], memorySummary?: string, libraryTree: { modes, blocs: { types: { skill, expertise, tool, methode, scope } } } }",
  "output": "JSON: { suggestions: [{ ref: { type: 'atome'|'bloc', id, label }, justification }] } — 5 to 15 items"
}
```

### memory-note-writer
```json
{
  "name": "memory-note-writer",
  "runtime": "llm-json",
  "model": "service-assigned",
  "input": "JSON: { projectPath, sessionId, agentId?, agentLabel?, transcript, recentNotes, profileMemory, projectMemory }",
  "output": "JSON: { level: 'L1'|'L2'|'L3'|'L4'|'L5', title, summary?, content, tags?, actions?, decisions? }"
}
```

### squad-creator
```json
{
  "name": "squad-creator",
  "runtime": "llm-json",
  "model": "gemini-2.5-flash",
  "addon_parent": "squad-leader",
  "input": "Mission in free text, optional constraints, available profile catalogue",
  "output": "List of recommended agents (profile + justification), designated leader, strategy summary"
}
```

## Integration with an addon
A worker must be declared in its parent addon’s `manifest.json`:
{ "name": "my-addon", "workers": ["my-worker"]}The addon invokes the worker via the for-workers.ts port (use-case side):
```typescript
const result = await deps.workers.invoke('my-worker', {
  projectPath,
  // ... JSON payload per the worker's manifest
});
```

The JSON result is directly usable by the addon.
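Even though the result is directly usable, the addon may want to guard it against out-of-contract responses before persisting. A sketch using the `cortex-favorites-suggester` output shape shown earlier (5 to 15 suggestions) — the `Suggestion` interface and `validateSuggestions` guard are hypothetical helpers, not arka-deck code:

```typescript
// Illustrative guard on a worker result, per the cortex-favorites-suggester
// output contract: { suggestions: [...] } with 5 to 15 items.
interface Suggestion {
  ref: { type: 'atome' | 'bloc'; id: string; label: string };
  justification: string;
}

function validateSuggestions(result: unknown): Suggestion[] {
  const suggestions = (result as { suggestions?: unknown }).suggestions;
  if (!Array.isArray(suggestions) || suggestions.length < 5 || suggestions.length > 15) {
    throw new Error('worker returned an out-of-contract suggestion list');
  }
  return suggestions as Suggestion[];
}
```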
## Model choice

The manifest declares a preferred model. The user can override it from Settings → Services.
| Case | Recommended model |
|---|---|
| Short analysis, classification task | haiku |
| Structured drafting, extraction | gemini-2.5-flash or haiku |
| Complex task, reasoning | sonnet |
Rule of thumb: start with `haiku` — the worker is one-shot, and cost scales with the model chosen.
## Boundaries

- A worker cannot start a multi-turn conversation
- A worker cannot directly call another worker
- A worker cannot write to `.arka-deck/` — the parent addon persists the result
- `draft` workers can evolve without notice