Light Process
Lightweight workflow engine with Docker container isolation, conditional DAG execution, and A2A protocol support.
Getting Started
Requirements
Node.js 18 or newer and a running Docker daemon (both are checked by light doctor below).
Install
$ npm install -g light-process
Verify your environment:
$ light doctor
Checking environment...
[ok] Node.js: v20.x.x
[ok] Docker: Docker version 24.x.x
[ok] Docker daemon: running
[ok] Ready
Create a project
$ light init my-project
$ cd my-project
This creates:
my-project/
package.json
main.js # SDK usage example
example/
workflow.json # DAG definition
hello/
.node.json # node config
index.js # code
lp.js # helper
Run the example
$ light run example
Running: Example (from folder)
> Hello
[Hello] Input: {}
[ok] Hello 2100ms
-> {"hello":"world","input":{}}
[ok] 2108ms
Run with input
$ light run example --input '{"name": "Alice"}'
-> {"hello":"world","input":{"name":"Alice"}}
Validate a workflow
$ light check example
Checking: Example (from folder)
[ok] workflow.json exists
[ok] workflow.json structure
[ok] Workflow loads
[ok] Nodes valid - 1 node(s)
[ok] Links valid - 0 link(s)
[ok] Entry nodes - 1 entry node(s)
[ok] 6/6 checks passed
Visualize the DAG
$ light describe example
Outputs a text tree and generates describe.html with an interactive Mermaid diagram.
Start the dashboard
$ light serve --port 3000
Open http://localhost:3000 to see the web dashboard with your workflow DAG.
Add a new node
$ cd example
$ light init --node ./transform
This creates a transform/ folder with .node.json, index.js, lp.js, and auto-registers it in workflow.json.
Add a Python node
$ light init --node ./analyze --lang python
Creates analyze/ with .node.json, main.py, and lp.py using python:3.12-alpine.
Use Cases
Light Process is useful anywhere you need to run multi-step code pipelines with isolation, validation, and conditional routing.
Data Processing Pipelines
Chain data transformations across languages. Each step runs in its own container with schema-validated I/O. Extract from APIs, clean with pandas, analyze with numpy, and generate reports - all in isolated containers.
AI/ML Workflows
Orchestrate model training, evaluation, and deployment with GPU support and network isolation. Conditional routing based on evaluation metrics, back-links for retraining loops with iteration limits.
CI/CD and Build Pipelines
Run build, test, and deploy steps in isolated containers. Parallel execution for independent steps, conditional deployment based on test results, and per-step timeouts to prevent hanging builds.
Document Processing
Process documents through transformation stages with format validation. Parse PDFs, classify with ML models, and route conditionally by document type to specialized handlers.
API Orchestration
Compose multiple API calls with error handling and conditional logic. Per-node network access, timeouts for slow services, and schema validation to enforce API contracts.
ETL (Extract-Transform-Load)
Move data between systems with transformation steps. JSON Schema validation catches data quality issues. Conditional routing separates valid and invalid records.
Multi-Agent AI Systems (A2A)
Expose workflows as A2A agents that other AI agents can discover and invoke. Each workflow appears as a skill in the agent card. Structured data exchange via JSON-RPC 2.0.
Automated Testing
Run test suites in isolated environments across multiple runtime versions in parallel. Isolated containers prevent test interference. Merge results and report conditionally.
Key Advantages
| Feature | Benefit |
|---|---|
| Docker isolation | Steps can't interfere with each other |
| Multi-language | Use the best tool for each step |
| Schema validation | Catch data issues between steps |
| Conditional routing | Handle success/failure/edge cases |
| Parallel execution | Fast pipelines with independent steps |
| A2A protocol | Integrate with AI agent ecosystems |
| Loop support | Retry and iteration patterns |
| Web dashboard | Visual inspection of workflow structure |
CLI Reference
light run
Execute a workflow or single node.
$ light run <file|dir|id|name> [options]
$ light run --node [dir] [options]
Options
| Flag | Description | Default |
|---|---|---|
--input <file|json> | Input data (JSON file or inline) | {} |
--input-file <file> | Read input from a JSON file (cannot combine with --input) | - |
--json | Output full result as JSON | off |
--timeout <ms> | Global timeout | 0 (none) |
--dir <dir> | Workflow search directory | . |
--json-source | Prefer .json over folder | off |
--node | Run current dir as single node | off |
--verbose | Verbose output | off |
Examples
# Run from folder
$ light run my-workflow
# Run with inline JSON input
$ light run my-workflow --input '{"key": "value"}'
# Run with input file
$ light run my-workflow --input data.json
# Full JSON output (for piping)
$ light run my-workflow --json | jq '.results'
# Run a single node
$ light run --node ./my-node
# Single node with input.json auto-loaded
$ cd my-node && light run --node .
# Search by name in a directory
$ light run my-workflow --dir ./custom-workflows
Resolution order
- If --node: loads .node.json from the target directory
- If the target is a folder with workflow.json: loads from the folder
- If the target is a .json file: loads it directly
- Otherwise: searches --dir for a workflow matching by ID or name
light serve
Start the A2A API server with web dashboard.
$ light serve [dir] [--port 3000] [--verbose]
Endpoints
| Method | Path | Description |
|---|---|---|
| GET | / | Web dashboard |
| GET | /health | Health check |
| GET | /.well-known/agent-card.json | A2A agent card |
| GET | /api/workflows | List workflows |
| GET | /api/workflows/:id | Workflow detail |
| POST | /api/workflows | Add a workflow (in-memory). Add ?persist=true to also save to disk |
| DELETE | /api/workflows/:id | Remove a workflow. Add ?persist=true to also delete file |
| POST | / or /a2a | A2A JSON-RPC 2.0 |
Examples
# Serve all workflows in a directory
$ light serve
# Custom port
$ light serve --port 8080
# Verbose Docker logging
$ light serve --verbose
# Set a custom API key
$ LP_API_KEY=my-secret-key light serve
Authentication
API key authentication is opt-in. Set the LP_API_KEY environment variable to enable Bearer auth. If unset, auth is disabled and all routes are public.
Protected routes (POST and /api/*) require a Bearer token in the Authorization header. GET routes like /health and /.well-known/agent-card.json are public.
# Public - no auth needed
$ curl http://localhost:3000/health
# Protected - requires Bearer token
$ curl -H "Authorization: Bearer <your-api-key>" http://localhost:3000/api/workflows
The AgentCard at /.well-known/agent-card.json advertises the security scheme so that A2A clients can discover that authentication is required.
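The route gating described above can be sketched as a small predicate. This is an illustrative sketch only, not the server's actual middleware, and the request object shape is assumed:

```javascript
// Illustrative sketch of the opt-in gate described above, not the
// server's actual middleware: with no API key configured everything is
// public; otherwise POST and /api/* routes need a Bearer token.
function isAuthorized(req, apiKey = process.env.LP_API_KEY) {
  if (!apiKey) return true; // LP_API_KEY unset: auth disabled
  const isProtected = req.method === 'POST' || req.path.startsWith('/api/');
  if (!isProtected) return true; // e.g. GET /health, GET agent-card
  return req.headers.authorization === `Bearer ${apiKey}`;
}

isAuthorized({ method: 'GET', path: '/health', headers: {} }, 'secret'); // -> true
isAuthorized({ method: 'GET', path: '/api/workflows', headers: {} }, 'secret'); // -> false
```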
light init
Scaffold a new project or node.
$ light init [dir] # full project
$ light init --node [dir] [--lang] # single node
Options
| Flag | Description | Default |
|---|---|---|
--node | Create a node instead of project | off |
--lang <js|python> | Node language | js |
--verbose | Show created files | off |
Project init creates:
- package.json with start/check scripts
- example/ with a hello node
- main.js with an SDK example
Node init creates:
- .node.json with image and entrypoint
- index.js or main.py (template code)
- lp.js and lp.py (helpers)
- input.json (empty test input)
- Auto-registers in the parent workflow.json if present
light check
Validate a workflow without running it.
$ light check <file|dir> [--fix]
Checks performed
- workflow.json exists and parses
- Node folders exist
- .node.json files exist
- Workflow loads (valid structure)
- All nodes have images
- All nodes have entrypoints or files
- Entry nodes exist
--fix auto-removes dead node references from workflow.json.
light describe
Show workflow structure and generate a visual diagram.
$ light describe <file|dir|id|name> [--no-html]
Example output
Order Pipeline (order-pipeline)
3 nodes, 2 links
Validate (node:20-alpine)
in: name (string), age (integer)
out: valid (boolean), score (number)
-> Process [valid = true]
Process (python:3.12-alpine)
out: result (string)
-> Notify
Notify (node:20-alpine)
light doctor
Check environment health.
$ light doctor
Checks: Node.js version (>= 18), Docker installation, Docker daemon status, gVisor (runsc) availability, GPU support (nvidia-smi), Docker GPU plugin.
light config
Manage global configuration stored at ~/.light/config.json.
$ light config <get|set|list|path> [key] [value]
| Subcommand | Description |
|---|---|
list, show | Show full config |
path | Show config file path |
get <key> | Get a value (supports dot notation) |
set <key> <value> | Set a value (JSON or string) |
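The dot-notation lookup mentioned for get <key> can be sketched like this. getByPath is a hypothetical helper name and the config shape is a placeholder, not the CLI's implementation:

```javascript
// Illustrative dot-notation lookup, as described for `light config get`.
// getByPath is a hypothetical helper, not the CLI's implementation.
function getByPath(config, key) {
  return key
    .split('.')
    .reduce((obj, part) => (obj == null ? undefined : obj[part]), config);
}

const config = { remotes: { default: 'prod' } }; // placeholder shape
getByPath(config, 'remotes.default'); // -> 'prod'
getByPath(config, 'remotes.missing'); // -> undefined
```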
light remote
Manage remote light-process servers. Run with no arguments to list configured remotes.
$ light remote <bind|set-key|use|forget|ping|ls|run|delete|rm> [...]
| Subcommand | Description |
|---|---|
bind <url> | Register a remote (--key, --name) |
set-key <key> | Update the API key on an existing remote (--name) |
use <name> | Set default remote |
forget <name> | Remove a remote |
ping | Ping the current remote |
ls | List workflows on remote (--json) |
run <id> | Run a workflow (--input, --input-file, --json) |
delete|rm <id> | Delete a workflow (--soft, --yes) |
light pull
Pull workflow(s) from a remote server into local folders.
$ light pull <id> [--path <dir>] [--force] [--remote <name>]
$ light pull --all [--force]
| Flag | Description |
|---|---|
--path <dir> | Target directory (default: ./<id>) |
--force | Overwrite existing target |
--remote <name> | Use a specific remote profile |
--all | Pull all workflows from the remote |
light push
Push local workflow folder(s) to a remote server. With no arguments, pushes all workflows in the current directory.
$ light push [<name>] [--path <dir>] [--remote <name>] [--yes]
| Flag | Description |
|---|---|
--path <dir> | Workflow folder path |
--remote <name> | Use a specific remote profile |
--yes, -y | Skip confirmation prompts |
light link
Manage links in a workflow folder. Without flags, opens workflow.json in $EDITOR.
$ light link <workflow-dir> [--from <id> --to <id>] [--when <json>]
$ light link <workflow-dir> --edit <link-id> [--when <json>] [--data <json>]
$ light link <workflow-dir> --list
$ light link <workflow-dir> --remove <link-id>
| Flag | Description |
|---|---|
--from <id> | Source node ID |
--to <id> | Target node ID |
--when <json> | Condition (MongoDB-style JSON) |
--data <json> | Data to inject on the link |
--max-iterations <n> | Max iterations for back-links |
--edit <id> | Edit an existing link (combine with --when, --data, etc.) |
--list | List existing links |
--remove <id> | Remove a link by ID |
--open | Open workflow.json in $EDITOR |
light list
List all workflows in a directory. Discovers both folder-based workflows and JSON files.
$ light list [--dir <path>] [--json]
| Flag | Description |
|---|---|
--dir <path> | Directory to scan (default: .) |
--json | Output as JSON |
light pack
Convert a workflow folder into a single JSON file. The source folder is removed after packing.
$ light pack [<folder>] [--to <file>] [--force] [--keep]
| Flag | Description |
|---|---|
--to <file> | Output file path (default: <id>.json) |
--force | Overwrite existing file |
--keep | Keep the source folder after packing |
light unpack
Convert a JSON file into a workflow folder. The source JSON is removed after unpacking.
$ light unpack <file> [--to <dir>] [--force] [--keep]
| Flag | Description |
|---|---|
--to <dir> | Target directory (default: ./<id>) |
--force | Overwrite existing directory |
--keep | Keep the source JSON after unpacking |
light node
Manage node metadata - inspect node info or edit schemas interactively.
$ light node info <dir> [--json]
$ light node schema <dir>
$ light node register <dir>
$ light node helpers <dir>
light node info
Show node metadata, input/output schema, and what it receives from upstream nodes. If a parent workflow.json exists, also displays incoming links with source node output schemas, conditions, and injected data.
light node schema
Reads .node.json in <dir> and lets you add, edit, or remove fields on the input and output schemas. Changes are written back to .node.json on save. Also regenerates lp.d.ts for editor autocomplete.
light node register
Register a node folder in the parent workflow.json. Adds the node to the nodes array if not already present.
light node helpers
Regenerate lp.d.ts from the node's schema. Useful after manually editing .node.json. The lp.d.ts file gives your editor autocomplete on input fields and send() parameters.
Examples
# Show node info and what it receives
$ light node info ./my-node
$ light node info ./my-node --json
# Edit the schema of an existing node folder
$ light node schema ./my-node
# Edit the hello example
$ light node schema ./example/hello
# Regenerate lp.d.ts after editing .node.json by hand
$ light node helpers ./my-node
Global options
| Flag | Description |
|---|---|
--version, -v | Show version |
--help, -h | Show help for a command |
SDK Guide
Use light-process programmatically in Node.js to build, configure, and execute workflows.
Install
$ npm install light-process
Basic workflow
import { Workflow, DockerRunner } from 'light-process';
const wf = new Workflow({ name: 'hello' });
const node = wf.addNode({ name: 'Greet', image: 'node:20-alpine' });
node.setCode((input) => ({ message: `Hello, ${input.name}!` }));
const result = await wf.execute(
{ name: 'World' },
{ runner: new DockerRunner() }
);
console.log(result.success); // true
console.log(result.results);
Multi-node pipeline
import { Workflow, Schema, DockerRunner } from 'light-process';
const wf = new Workflow({ name: 'pipeline' });
// Node 1: validate
const validate = wf.addNode({ name: 'Validate', image: 'node:20-alpine' });
validate.inputs = Schema.object({ email: Schema.string() }, ['email']);
validate.setCode((input) => ({
valid: input.email.includes('@'),
email: input.email,
}));
// Node 2: process (only runs if valid)
const process = wf.addNode({ name: 'Process', image: 'node:20-alpine' });
process.setCode((input) => ({
processed: true,
email: input.email,
}));
// Node 3: reject (only runs if invalid)
const reject = wf.addNode({ name: 'Reject', image: 'node:20-alpine' });
reject.setCode((input) => ({
rejected: true,
reason: 'Invalid email',
}));
// Conditional links
wf.addLink({
from: validate.id,
to: process.id,
when: { valid: true },
});
wf.addLink({
from: validate.id,
to: reject.id,
when: { valid: { ne: true } },
});
const result = await wf.execute(
{ email: 'alice@example.com' },
{ runner: new DockerRunner() }
);
Node from folder
import { Node, loadDirectory, DEFAULT_IGNORE } from 'light-process';
const node = new Node({
name: 'My Node',
image: 'node:20-alpine',
entrypoint: 'node index.js',
});
// Load all files from a directory
const files = loadDirectory('./my-node', { ignore: DEFAULT_IGNORE });
node.addFiles(files);
// Or use the shorthand
node.addFolder('./my-node', 'node index.js');
Load workflow from folder
import { loadWorkflowFromFolder, DockerRunner } from 'light-process';
const wf = loadWorkflowFromFolder('./my-workflow');
if (!wf) {
console.error('Invalid workflow folder');
process.exit(1);
}
const result = await wf.execute({}, { runner: new DockerRunner() });
Export workflow to folder
import { exportWorkflowToFolder } from 'light-process';
// After building a workflow programmatically
exportWorkflowToFolder(wf, './output/my-workflow');
// Creates workflow.json + node folders with .node.json and code files
Execution callbacks
const result = await wf.execute(input, {
runner: new DockerRunner(),
timeout: 30000, // 30s global timeout
onNodeStart: (nodeId, nodeName) => {
console.log(`Starting: ${nodeName}`);
},
onNodeComplete: (nodeId, nodeName, success, duration) => {
console.log(`${nodeName}: ${success ? 'ok' : 'failed'} (${duration}ms)`);
},
onLog: (nodeId, nodeName, log) => {
console.log(`[${nodeName}] ${log}`);
},
onStatusChange: (status) => {
console.log(`Current: ${status.currentNodeName}`);
console.log(`Done: ${status.completedNodes.length}`);
},
});
DockerRunner options
const runner = new DockerRunner({
memoryLimit: '512m',
cpuLimit: '1.5',
runtime: 'runsc', // 'runc', 'runsc' (gVisor), 'kata'
gpu: 'all', // false, 'all', number, or device ID
verbose: true,
tempDir: '/tmp/lp',
});
Node.setCode
Wraps a JavaScript function as node code. The function receives input as an argument and returns the output.
node.setCode((input) => {
// input is the parsed JSON from stdin
const result = { doubled: input.value * 2 };
return result; // written to .lp-output.json
});
Limitations: closures and external variables are not available at runtime (the function is serialized to a string).
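A quick way to see why, using Function.prototype.toString to mimic the serialization. This is an illustrative sketch; the engine's exact mechanism may differ, and new Function stands in for re-evaluating the source inside the container:

```javascript
// Why closures break: the function is stored as source text, so any
// variables captured from the enclosing scope are not carried along.
const multiplier = 3;
const broken = (input) => ({ result: input.value * multiplier }); // closes over `multiplier`

const source = broken.toString();
const revived = new Function(`return (${source})`)();
// revived({ value: 2 }) throws ReferenceError: multiplier is not defined,
// because the revived function is created in a fresh scope.

// Safe pattern: pass everything the node needs through `input`.
const safe = (input) => ({ result: input.value * input.multiplier });
```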
Node.addHelper
Adds language-specific helper files (lp.js, lp.py) that provide input and send.
node.addHelper('javascript'); // adds lp.js
node.addHelper('python'); // adds lp.py
node.addHelper(); // adds all helpers
Workflow serialization
// To JSON
const json = wf.toJSON();
const str = JSON.stringify(json, null, 2);
// From JSON
const restored = Workflow.fromJSON(json);
Error types
import {
LightProcessError,
LinkValidationError,
CircularDependencyError,
WorkflowTimeoutError,
} from 'light-process';
| Error | Thrown when |
|---|---|
LinkValidationError | Invalid link (missing node, self-loop, cycle without maxIterations) |
CircularDependencyError | No entry nodes in non-empty workflow |
WorkflowTimeoutError | Execution exceeds timeout |
Workflows
A workflow is a directed acyclic graph (DAG) of nodes connected by links. Each node runs code in a Docker container.
Two formats
Workflows exist in two formats:
| Format | What | Use for |
|---|---|---|
| Folder | Directory with workflow.json + node subfolders | Editing, git, push to server |
| JSON | Single .json file with everything embedded | Transport, sharing, API |
Use light pack to convert a folder to JSON, and light unpack for the reverse. Both remove the source by default (use --keep to preserve it). Use light list to see all workflows in a directory.
Folder structure
my-workflow/ # folder format (working copy)
workflow.json # DAG definition
node-a/
.node.json # node config
index.js # code
lp.js # helper
node-b/
.node.json
main.py
lp.py
my-workflow.json # JSON format (portable)
workflow.json
Defines the DAG structure:
{
"id": "my-workflow",
"name": "My Workflow",
"network": null,
"nodes": [
{ "id": "node-a", "name": "Node A", "dir": "node-a" },
{ "id": "node-b", "name": "Node B", "dir": "node-b" }
],
"links": [
{ "from": "node-a", "to": "node-b" }
]
}
| Field | Required | Description |
|---|---|---|
id | yes | Unique identifier |
name | yes | Display name |
network | no | Docker network for all nodes (null = lp-isolated) |
nodes | yes | Array of node references (id, name, dir) |
links | no | Array of links between nodes |
.node.json
Configures a single node:
{
"id": "node-a",
"name": "Node A",
"image": "node:20-alpine",
"entrypoint": "node index.js",
"setup": ["npm install axios"],
"timeout": 10000,
"network": null,
"inputs": null,
"outputs": null
}
| Field | Required | Description |
|---|---|---|
id | yes | Unique identifier |
name | yes | Display name |
image | yes | Docker image |
entrypoint | yes | Command to run |
setup | no | Shell commands before entrypoint |
timeout | no | Node timeout in ms (0 = none) |
network | no | Override workflow network |
inputs | no | JSON Schema for input validation |
outputs | no | JSON Schema for output validation |
Links
Links connect nodes and control data flow:
{
"from": "node-a",
"to": "node-b",
"when": { "status": "ok" },
"data": { "extra": "value" },
"maxIterations": null
}
| Field | Required | Description |
|---|---|---|
from | yes | Source node ID |
to | yes | Target node ID |
when | no | Condition on source output (see Conditions) |
data | no | Extra data merged into target input |
maxIterations | no | Loop limit for back-links |
Execution model
- Entry nodes (no incoming forward links) start first with the initial input
- Nodes in the same layer run in parallel via Promise.all()
- After a node completes, its outgoing links are evaluated
- If a link has when, it only fires if the condition matches the output
- Target nodes start when all incoming links have data ready
- Multiple incoming links merge their outputs with Object.assign()
- Link data is merged on top of the source output
- If any node fails, the workflow stops
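The layering can be sketched roughly as follows. This is a simplified model that ignores conditions, data merging, and failure handling; executeLayered is a hypothetical name, not the engine's API:

```javascript
// Simplified sketch of the layered execution model: nodes whose
// upstream dependencies are all complete form a layer, and the whole
// layer runs in parallel via Promise.all(). Not the engine's code.
async function executeLayered(nodes, links, runNode) {
  const done = new Map(); // nodeId -> output
  const upstream = (id) => links.filter((l) => l.to === id).map((l) => l.from);
  let remaining = nodes.map((n) => n.id);

  while (remaining.length > 0) {
    // Entry nodes (no incoming links) are ready immediately.
    const layer = remaining.filter((id) =>
      upstream(id).every((dep) => done.has(dep))
    );
    if (layer.length === 0) throw new Error('cycle or unsatisfied dependency');

    // Run the whole layer in parallel.
    const outputs = await Promise.all(layer.map((id) => runNode(id)));
    layer.forEach((id, i) => done.set(id, outputs[i]));
    remaining = remaining.filter((id) => !layer.includes(id));
  }
  return done; // Map of nodeId -> output
}
```

With links a -> b and a -> c, node a runs first, then b and c execute in parallel in the second layer.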
Back-links (loops)
A link that creates a cycle requires maxIterations:
{
"from": "process",
"to": "validate",
"when": { "retry": true },
"maxIterations": 3
}
Without maxIterations, adding a cycle throws LinkValidationError.
Network inheritance
- Workflow network applies to all nodes with network: null
- Node network overrides the workflow network
- The default network is lp-isolated (bridge, no inter-container communication)
- Set network: "none" to fully isolate a node
Data flow
When multiple nodes feed into one, merge order follows link evaluation order; later values overwrite earlier ones.
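In plain JavaScript terms, the merge behaves like:

```javascript
// Merge semantics when node-a and node-b both feed one target: outputs
// combine with Object.assign() in link evaluation order, and any link
// `data` is applied last, on top of the source output.
const fromNodeA = { status: 'ok', score: 10 };
const fromNodeB = { score: 42, note: 'later link wins ties' };
const linkData = { extra: 'value' };

const targetInput = Object.assign({}, fromNodeA, fromNodeB, linkData);
// -> { status: 'ok', score: 42, note: 'later link wins ties', extra: 'value' }
```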
Conditions
Links support MongoDB-style when conditions to control routing based on node output.
Operators
| Operator | Example | Description |
|---|---|---|
| (none) | { status: "ok" } | Exact equality |
gt | { count: { gt: 5 } } | Greater than |
gte | { count: { gte: 5 } } | Greater or equal |
lt | { count: { lt: 10 } } | Less than |
lte | { count: { lte: 10 } } | Less or equal |
ne | { status: { ne: "error" } } | Not equal |
in | { role: { in: ["admin", "mod"] } } | Value in array |
exists | { token: { exists: true } } | Field exists |
regex | { token: { regex: "^ok" } } | Regex match |
or | { or: [{...}, {...}] } | Logical OR |
Logic
- All top-level fields are AND (all must match)
- Use or for OR logic
- Conditions are evaluated against the source node's output
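A minimal evaluator with these semantics might look like this. It is an illustrative sketch, not the engine's implementation, and matches is a hypothetical name:

```javascript
// Illustrative evaluator for the operators above (not the engine's
// actual code). Operator keys appear without a '$' prefix, matching
// the examples in this document.
function matches(when, output) {
  return Object.entries(when).every(([key, cond]) => {
    if (key === 'or') return cond.some((sub) => matches(sub, output));
    const value = output[key];
    // A non-object condition means exact equality.
    if (cond === null || typeof cond !== 'object') return value === cond;
    return Object.entries(cond).every(([op, arg]) => {
      switch (op) {
        case 'gt': return value > arg;
        case 'gte': return value >= arg;
        case 'lt': return value < arg;
        case 'lte': return value <= arg;
        case 'ne': return value !== arg;
        case 'in': return arg.includes(value);
        case 'exists': return (key in output) === arg;
        case 'regex': return new RegExp(arg).test(value);
        default: throw new Error(`Unknown operator: ${op}`);
      }
    });
  });
}

matches({ status: 'ok', count: { gte: 10 } }, { status: 'ok', count: 12 }); // -> true
```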
Examples
Simple equality
{ "status": "ok" }
Matches if output contains { "status": "ok" }.
Multiple conditions (AND)
{ "status": "ok", "count": { "gte": 10 } }
Matches if status is "ok" AND count is >= 10.
OR logic
{
"or": [
{ "status": "ok" },
{ "status": "warning" }
]
}
Matches if status is "ok" OR "warning".
Field existence
{ "token": { "exists": true } }
// Matches if the output has a "token" field
{ "error": { "exists": false } }
// Matches if the output does NOT have an "error" field
Membership
{ "role": { "in": ["admin", "moderator", "owner"] } }
Numeric range
{ "score": { "gte": 0, "lte": 100 } }
Not equal
{ "status": { "ne": "error" } }
Usage in links
workflow.json
{
"links": [
{
"from": "validate",
"to": "process",
"when": { "valid": true, "score": { "gte": 80 } }
},
{
"from": "validate",
"to": "reject",
"when": { "valid": { "ne": true } }
}
]
}
SDK
wf.addLink({
from: validate.id,
to: process.id,
when: { valid: true, score: { gte: 80 } },
});
Validation
Conditions are validated when a link is added. Unknown operators throw LinkValidationError:
Link "my-link" has invalid 'when' condition: Unknown operator: foo
Docker & Security
Each node runs in an isolated Docker container with security hardening.
Container lifecycle
- Node files are written to a temp directory
- An entrypoint script is generated from setup + entrypoint
- docker run starts the container with volume mounts
- Input is piped to stdin as JSON
- Output is read from .lp-output.json in the container
- The container is removed after execution (--rm)
DockerRunner options
const runner = new DockerRunner({
memoryLimit: '512m', // --memory flag
cpuLimit: '1.5', // --cpus flag
runtime: 'runsc', // --runtime flag
gpu: 'all', // --gpus flag
noNewPrivileges: true, // --security-opt (default: true)
verbose: false, // log Docker commands
tempDir: '/tmp/lp', // custom temp directory
});
| Option | Type | Default | Description |
|---|---|---|---|
memoryLimit | string | none | Container memory limit (e.g. "256m", "2g") |
cpuLimit | string | none | CPU cores (e.g. "0.5", "2") |
runtime | string | "runc" | Container runtime: "runc", "runsc" (gVisor), "kata" |
gpu | boolean/string/number | false | GPU access: false, "all", count, device ID |
noNewPrivileges | boolean | true | Prevent privilege escalation |
verbose | boolean | false | Log Docker commands |
tempDir | string | OS temp | Directory for node files |
Security hardening
Capabilities dropped
The following dangerous capabilities are always dropped:
- NET_RAW - raw socket access
- MKNOD - device file creation
- SYS_CHROOT - chroot
- SETPCAP - capability modification
- SETFCAP - file capability modification
- AUDIT_WRITE - audit log writing
Other security measures
- --no-new-privileges prevents privilege escalation
- --pids-limit 100 limits process count
- Temp files are cleaned up after execution
- Path traversal checks on file operations
- Prototype pollution prevention on JSON parsing
Networks
Default: lp-isolated
By default, all containers run on a shared lp-isolated bridge network with inter-container communication disabled (ICC=false).
# Created automatically on first use:
$ docker network create --driver bridge \
-o com.docker.network.bridge.enable_icc=false \
lp-isolated
Network options
| Value | Effect |
|---|---|
null | Use workflow network (default: lp-isolated) |
"none" | No network access |
"host" | Host network (no isolation) |
"my-net" | Custom Docker network |
Set per-node in .node.json:
{ "network": "none" }
Or per-workflow in workflow.json:
{ "network": "my-custom-network" }
Node network overrides workflow network.
GPU support
const runner = new DockerRunner({ gpu: 'all' });
| Value | Docker flag |
|---|---|
false | no GPU |
'all' | --gpus all |
2 | --gpus 2 |
'"device=0,1"' | --gpus "device=0,1" |
Requires NVIDIA Container Toolkit. Check with light doctor.
Runtimes
| Runtime | Description |
|---|---|
runc | Default OCI runtime |
runsc | gVisor sandbox (stronger isolation) |
kata | Kata Containers (VM-level isolation) |
Check availability with light doctor.
Container naming
Containers are named lp-<nodeId>-<timestamp>-<seq> for easy identification:
lp-hello-1712345678901-0
Cancellation
Workflows support cancellation via AbortController:
const controller = new AbortController();
// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000);
const result = await wf.execute(input, {
runner,
signal: controller.signal,
});
Cancelled containers are killed with docker kill.
A2A Protocol
Light Process implements the A2A protocol (Agent-to-Agent) for exposing workflows as AI agents.
Start the server
# Start the server (no auth - public)
$ light serve --port 3000
# Enable Bearer auth by setting LP_API_KEY
$ LP_API_KEY=my-secret-key light serve --port 3000
This starts:
- Web dashboard at http://localhost:3000/
- A2A agent endpoint at http://localhost:3000/
- Agent card at http://localhost:3000/.well-known/agent-card.json
API key authentication is opt-in via LP_API_KEY. When enabled, POST routes and /api/* routes require a Bearer token in the Authorization header. See light serve - Authentication for details.
Agent discovery
$ curl http://localhost:3000/.well-known/agent-card.json
{
"name": "Light Process",
"description": "Workflow engine with Docker container isolation",
"url": "http://localhost:3000",
"protocolVersion": "0.2.1",
"capabilities": {
"streaming": true,
"pushNotifications": false,
"stateTransitionHistory": true
},
"defaultInputModes": ["application/json"],
"defaultOutputModes": ["application/json"],
"skills": [
{
"id": "my-workflow",
"name": "My Workflow",
"description": "Workflow: My Workflow (3 nodes)",
"tags": ["workflow"]
}
]
}
Each registered workflow appears as a skill.
Send a task
$ curl -X POST http://localhost:3000 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <your-api-key>" \
-d '{
"jsonrpc": "2.0",
"id": "1",
"method": "message/send",
"params": {
"message": {
"messageId": "msg-1",
"role": "user",
"parts": [{
"kind": "data",
"data": {
"workflowId": "my-workflow",
"name": "Alice"
}
}]
}
}
}'
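The same call can be made from Node.js (18+) with fetch. This is a sketch: the URL and API key are placeholders, and buildSendMessage is a hypothetical helper mirroring the payload shown in the curl example:

```javascript
// Build the same message/send payload as the curl call above and post
// it with fetch. buildSendMessage is a hypothetical helper; the URL
// and API key below are placeholders.
function buildSendMessage(workflowId, data, id = '1') {
  return {
    jsonrpc: '2.0',
    id,
    method: 'message/send',
    params: {
      message: {
        messageId: `msg-${id}`,
        role: 'user',
        parts: [{ kind: 'data', data: { workflowId, ...data } }],
      },
    },
  };
}

async function sendTask() {
  const res = await fetch('http://localhost:3000/a2a', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Bearer <your-api-key>',
    },
    body: JSON.stringify(buildSendMessage('my-workflow', { name: 'Alice' })),
  });
  return res.json();
}
```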
Workflow resolution
The executor selects a workflow using these rules:
- If workflowId is in the data, use that workflow
- If workflowName is in the data, match by name (case-insensitive)
- If only one workflow is registered, use it automatically
- Otherwise, return an error listing the available workflow names
REST API
# List all workflows (auth required)
$ curl -H "Authorization: Bearer <your-api-key>" http://localhost:3000/api/workflows
# Get workflow detail (auth required)
$ curl -H "Authorization: Bearer <your-api-key>" http://localhost:3000/api/workflows/my-workflow-id
# Health check (no auth required)
$ curl http://localhost:3000/health
# Add a workflow dynamically (auth required)
$ curl -X POST -H "Authorization: Bearer <your-api-key>" \
-H "Content-Type: application/json" \
-d '{"id":"my-wf","name":"My Workflow","nodes":[...],"links":[]}' \
http://localhost:3000/api/workflows
# Remove a workflow (auth required)
$ curl -X DELETE -H "Authorization: Bearer <your-api-key>" \
http://localhost:3000/api/workflows/my-wf
SDK usage
import { createA2AServer, Workflow, DockerRunner } from 'light-process';
const runner = new DockerRunner();
const app = createA2AServer({ port: 3000, runner });
// Register workflows
app.registerWorkflow(myWorkflow);
// Start listening
await app.listen();
// Later: stop
await app.close();
Server options
createA2AServer({
port: 3000, // listen port (default: 3000)
host: '0.0.0.0', // bind host (default: '0.0.0.0')
runner: new DockerRunner(), // shared runner instance
card: {
name: 'My Agent', // agent name
description: 'Custom', // agent description
url: 'https://my.host', // public URL
},
});
Task lifecycle
When a task is received via message/send:
- working - workflow execution starts
- working - status update per node start
- artifact-update - result per node completion
- completed or failed - final status with workflow result
CORS
The server allows cross-origin requests:
- Access-Control-Allow-Origin: *
- Access-Control-Allow-Methods: GET, POST, OPTIONS
- Access-Control-Allow-Headers: Content-Type, Authorization
Schema Validation
Nodes can define JSON Schema for input and output validation. Validation runs automatically during workflow execution.
Schema helpers
import { Schema } from 'light-process';
Schema.string() // { type: 'string' }
Schema.string({ minLength: 1 }) // { type: 'string', minLength: 1 }
Schema.number() // { type: 'number' }
Schema.number({ minimum: 0 }) // { type: 'number', minimum: 0 }
Schema.integer() // { type: 'integer' }
Schema.boolean() // { type: 'boolean' }
Schema.array(Schema.string()) // { type: 'array', items: { type: 'string' } }
Schema.object(props, required) // { type: 'object', properties, required }
Define on a node
SDK
node.inputs = Schema.object({
name: Schema.string({ minLength: 1 }),
age: Schema.integer({ minimum: 0, maximum: 150 }),
tags: Schema.array(Schema.string(), { minItems: 1 }),
active: Schema.boolean(),
}, ['name', 'age']); // required fields
node.outputs = Schema.object({
result: Schema.string(),
score: Schema.number({ minimum: 0, maximum: 100 }),
});
.node.json
{
"inputs": {
"type": "object",
"properties": {
"name": { "type": "string", "minLength": 1 },
"age": { "type": "integer", "minimum": 0 }
},
"required": ["name", "age"]
},
"outputs": {
"type": "object",
"properties": {
"result": { "type": "string" }
}
}
}
Validation behavior
- Input validation runs before the node executes
- Output validation runs after the node completes (only if successful)
- If validation fails, the node result is marked as failed with the error details
- If inputs or outputs is null, validation is skipped
Supported JSON Schema properties
| Property | Applies to | Description |
|---|---|---|
type | all | "string", "number", "integer", "boolean", "array", "object" |
properties | object | Field definitions |
required | object | Required field names |
items | array | Item schema |
minItems | array | Minimum array length |
maxItems | array | Maximum array length |
minimum | number/integer | Minimum value |
maximum | number/integer | Maximum value |
minLength | string | Minimum string length |
maxLength | string | Maximum string length |
pattern | string | Regex pattern |
enum | all | Allowed values |
default | all | Default value |
description | all | Human-readable description |
Error format
Validation errors include the field path:
Input validation failed: input.name: must NOT have fewer than 1 characters
Output validation failed: output.score: must be >= 0
Manual validation
import { validate, validateInput, validateOutput } from 'light-process';
const schema = Schema.object({
name: Schema.string({ minLength: 1 }),
}, ['name']);
const result = validateInput({ name: '' }, schema);
// { valid: false, errors: ['input.name: must NOT have fewer than 1 characters'] }
const result2 = validateInput({ name: 'Alice' }, schema);
// { valid: true, errors: [] }
Versioning
Light Process follows Semantic Versioning (semver).
Format
MAJOR.MINOR.PATCH-PRERELEASE
| Part | Meaning | Example |
|---|---|---|
| MAJOR | Breaking API changes | 1.0.0 -> 2.0.0 |
| MINOR | New features (backwards-compatible) | 0.1.0 -> 0.2.0 |
| PATCH | Bug fixes (backwards-compatible) | 0.1.0 -> 0.1.1 |
| PRERELEASE | Pre-release tag | 0.1.0-alpha.0 |
Pre-1.0 (current)
While the major version is 0, the API is not considered stable. Minor version bumps may include breaking changes.
Release lifecycle
0.1.0-alpha.0 First alpha - core features, may have bugs
0.1.0-alpha.2 Second alpha - bug fixes from alpha.1
0.1.0-beta.1 Feature-complete, testing phase
0.1.0-beta.2 Bug fixes from beta.1
0.1.0-rc.1 Release candidate - final testing
0.1.0 Stable release
0.1.1 Patch - bug fix
0.2.0 Minor - new features
1.0.0 First major - stable API commitment
Pre-release ordering
npm and semver sort pre-releases correctly:
0.1.0-alpha.0 < 0.1.0-alpha.2 < 0.1.0-beta.1 < 0.1.0-rc.1 < 0.1.0
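A simplified comparator shows why this ordering holds. It assumes all versions share the same MAJOR.MINOR.PATCH and implements only the pre-release precedence rules; it is not npm's implementation:

```javascript
// Simplified pre-release comparison per semver precedence: a release
// outranks any pre-release of the same version; identifiers compare
// numerically when both numeric, otherwise lexically; numeric
// identifiers rank below alphanumeric ones.
function comparePrerelease(a, b) {
  const pre = (v) => (v.includes('-') ? v.split('-')[1].split('.') : null);
  const pa = pre(a);
  const pb = pre(b);
  if (!pa && !pb) return 0;
  if (!pa) return 1;  // a is the release: a > b
  if (!pb) return -1; // b is the release: a < b
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i];
    const y = pb[i];
    if (x === undefined) return -1; // fewer identifiers sorts first
    if (y === undefined) return 1;
    const xn = /^\d+$/.test(x);
    const yn = /^\d+$/.test(y);
    if (xn && yn) {
      if (Number(x) !== Number(y)) return Number(x) - Number(y);
    } else if (xn) return -1; // numeric < alphanumeric
    else if (yn) return 1;
    else if (x !== y) return x < y ? -1 : 1;
  }
  return 0;
}

const versions = ['0.1.0', '0.1.0-rc.1', '0.1.0-alpha.0', '0.1.0-beta.1', '0.1.0-alpha.2'];
versions.sort(comparePrerelease);
// -> ['0.1.0-alpha.0', '0.1.0-alpha.2', '0.1.0-beta.1', '0.1.0-rc.1', '0.1.0']
```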
Installing pre-releases
# Install latest stable (skips pre-releases)
$ npm install light-process
# Install specific pre-release
$ npm install light-process@0.1.0-alpha.0
# Install latest including pre-releases
$ npm install light-process@next
Current version
Check package.json for the current version - it is the single source of truth.
What's included
- Core workflow engine (DAG execution, parallel batches)
- Node model (Docker containers, code files, I/O schema)
- Link model (conditions, data injection, back-links)
- CLI (run, serve, init, check, describe, doctor)
- A2A protocol server with web dashboard
- JavaScript and Python helpers
- JSON Schema validation
What's not stable yet
- API surface may change
- A2A integration (SDK compatibility)
- Dashboard features (roadmap items)