Light Process

Lightweight workflow engine with Docker container isolation, conditional DAG execution, and A2A protocol support.

Docker Isolated · DAG Engine · A2A Protocol · Node.js >= 18 · AGPL-3.0
$ npm install -g light-process

Getting Started

Requirements

Node.js >= 18 and Docker with a running daemon. Run light doctor to verify both.

Install

bash
$ npm install -g light-process

Verify your environment:

bash
$ light doctor

Checking environment...

  [ok] Node.js: v20.x.x
  [ok] Docker: Docker version 24.x.x
  [ok] Docker daemon: running

[ok] Ready

Create a project

bash
$ light init my-project
$ cd my-project

This creates:

structure
my-project/
  package.json
  main.js                           # SDK usage example
  example/
    workflow.json                    # DAG definition
    hello/
      .node.json                     # node config
      index.js                       # code
      lp.js                          # helper

Run the example

bash
$ light run example

Running: Example (from folder)
> Hello
  [Hello] Input: {}
  [ok] Hello 2100ms

-> {"hello":"world","input":{}}

[ok] 2108ms

Run with input

bash
$ light run example --input '{"name": "Alice"}'

-> {"hello":"world","input":{"name":"Alice"}}

Validate a workflow

bash
$ light check example

Checking: Example (from folder)

  [ok] workflow.json exists
  [ok] workflow.json structure
  [ok] Workflow loads
  [ok] Nodes valid - 1 node(s)
  [ok] Links valid - 0 link(s)
  [ok] Entry nodes - 1 entry node(s)

[ok] 6/6 checks passed

Visualize the DAG

bash
$ light describe example

Outputs a text tree and generates describe.html with an interactive Mermaid diagram.

Start the dashboard

bash
$ light serve --port 3000

Open http://localhost:3000 to see the web dashboard with your workflow DAG.

Add a new node

bash
$ cd example
$ light init --node ./transform

This creates a transform/ folder with .node.json, index.js, lp.js, and auto-registers it in workflow.json.

Add a Python node

bash
$ light init --node ./analyze --lang python

Creates analyze/ with .node.json, main.py, and lp.py using python:3.12-alpine.

Use Cases

Light Process is useful anywhere you need to run multi-step code pipelines with isolation, validation, and conditional routing.

Data Processing Pipelines

Chain data transformations across languages. Each step runs in its own container with schema-validated I/O. Extract from APIs, clean with pandas, analyze with numpy, and generate reports - all in isolated containers.

AI/ML Workflows

Orchestrate model training, evaluation, and deployment with GPU support and network isolation. Conditional routing based on evaluation metrics, back-links for retraining loops with iteration limits.

CI/CD and Build Pipelines

Run build, test, and deploy steps in isolated containers. Parallel execution for independent steps, conditional deployment based on test results, and per-step timeouts to prevent hanging builds.

Document Processing

Process documents through transformation stages with format validation. Parse PDFs, classify with ML models, and route conditionally by document type to specialized handlers.

API Orchestration

Compose multiple API calls with error handling and conditional logic. Per-node network access, timeouts for slow services, and schema validation to enforce API contracts.

ETL (Extract-Transform-Load)

Move data between systems with transformation steps. JSON Schema validation catches data quality issues. Conditional routing separates valid and invalid records.

Multi-Agent AI Systems (A2A)

Expose workflows as A2A agents that other AI agents can discover and invoke. Each workflow appears as a skill in the agent card. Structured data exchange via JSON-RPC 2.0.

Automated Testing

Run test suites in isolated environments across multiple runtime versions in parallel. Isolated containers prevent test interference. Merge results and report conditionally.

Key Advantages

| Feature | Benefit |
| --- | --- |
| Docker isolation | Steps can't interfere with each other |
| Multi-language | Use the best tool for each step |
| Schema validation | Catch data issues between steps |
| Conditional routing | Handle success/failure/edge cases |
| Parallel execution | Fast pipelines with independent steps |
| A2A protocol | Integrate with AI agent ecosystems |
| Loop support | Retry and iteration patterns |
| Web dashboard | Visual inspection of workflow structure |

CLI Reference

light run

Execute a workflow or single node.

bash
$ light run <file|dir|id|name> [options]
$ light run --node [dir] [options]

Options

| Flag | Description | Default |
| --- | --- | --- |
| --input <file\|json> | Input data (JSON file or inline) | {} |
| --input-file <file> | Read input from a JSON file (cannot combine with --input) | - |
| --json | Output full result as JSON | off |
| --timeout <ms> | Global timeout | 0 (none) |
| --dir <dir> | Workflow search directory | . |
| --json-source | Prefer .json over folder | off |
| --node | Run current dir as single node | off |
| --verbose | Verbose output | off |

Examples

bash
# Run from folder
$ light run my-workflow

# Run with inline JSON input
$ light run my-workflow --input '{"key": "value"}'

# Run with input file
$ light run my-workflow --input data.json

# Full JSON output (for piping)
$ light run my-workflow --json | jq '.results'

# Run a single node
$ light run --node ./my-node

# Single node with input.json auto-loaded
$ cd my-node && light run --node .

# Search by name in a directory
$ light run my-workflow --dir ./custom-workflows

Resolution order

  1. If --node: loads .node.json from target directory
  2. If target is a folder with workflow.json: loads from folder
  3. If target is a .json file: loads directly
  4. Searches --dir for matching workflow by ID or name

light serve

Start the A2A API server with web dashboard.

bash
$ light serve [dir] [--port 3000] [--verbose]

Endpoints

| Method | Path | Description |
| --- | --- | --- |
| GET | / | Web dashboard |
| GET | /health | Health check |
| GET | /.well-known/agent-card.json | A2A agent card |
| GET | /api/workflows | List workflows |
| GET | /api/workflows/:id | Workflow detail |
| POST | /api/workflows | Add a workflow (in-memory). Add ?persist=true to also save to disk |
| DELETE | /api/workflows/:id | Remove a workflow. Add ?persist=true to also delete file |
| POST | / or /a2a | A2A JSON-RPC 2.0 |

Examples

bash
# Serve all workflows in a directory
$ light serve

# Custom port
$ light serve --port 8080

# Verbose Docker logging
$ light serve --verbose

# Set a custom API key
$ LP_API_KEY=my-secret-key light serve

Authentication

API key authentication is opt-in. Set the LP_API_KEY environment variable to enable Bearer auth. If unset, auth is disabled and all routes are public.

Protected routes (POST and /api/*) require a Bearer token in the Authorization header. GET routes like /health and /.well-known/agent-card.json are public.

bash
# Public - no auth needed
$ curl http://localhost:3000/health

# Protected - requires Bearer token
$ curl -H "Authorization: Bearer <your-api-key>" http://localhost:3000/api/workflows

The AgentCard at /.well-known/agent-card.json advertises the security scheme so that A2A clients can discover that authentication is required.
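
The opt-in Bearer check described above can be sketched as a small predicate. This is an illustrative sketch, not the actual light-process middleware; it only assumes what this section states (LP_API_KEY unset means auth is off, otherwise the Authorization header must carry a matching Bearer token):

```javascript
// Sketch of the Bearer check described above (not the real middleware).
// An unset/empty key means authentication is disabled.
function isAuthorized(req, apiKey = process.env.LP_API_KEY) {
  if (!apiKey) return true; // auth disabled: all routes public
  const header = req.headers?.authorization || '';
  const [scheme, token] = header.split(' ');
  return scheme === 'Bearer' && token === apiKey;
}

console.log(isAuthorized({ headers: { authorization: 'Bearer my-secret-key' } }, 'my-secret-key')); // true
console.log(isAuthorized({ headers: {} }, 'my-secret-key')); // false
```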

light init

Scaffold a new project or node.

bash
$ light init [dir]                    # full project
$ light init --node [dir] [--lang]    # single node

Options

| Flag | Description | Default |
| --- | --- | --- |
| --node | Create a node instead of a project | off |
| --lang <js\|python> | Node language | js |
| --verbose | Show created files | off |

Project init creates: package.json, main.js, and an example/ workflow folder (the same layout shown under Getting Started).

Node init creates: .node.json, index.js (or main.py with --lang python), and the lp helper file, and registers the node in the parent workflow.json if one exists.

light check

Validate a workflow without running it.

bash
$ light check <file|dir> [--fix]

Checks performed

  1. workflow.json exists and parses
  2. Node folders exist
  3. .node.json files exist
  4. Workflow loads (valid structure)
  5. All nodes have images
  6. All nodes have entrypoints or files
  7. Entry nodes exist

--fix auto-removes dead node references from workflow.json.

light describe

Show workflow structure and generate a visual diagram.

bash
$ light describe <file|dir|id|name> [--no-html]

Example output

output
  Order Pipeline (order-pipeline)
  3 nodes, 2 links

  Validate (node:20-alpine)
    in: name (string), age (integer)
    out: valid (boolean), score (number)
    -> Process [valid = true]
  Process (python:3.12-alpine)
    out: result (string)
    -> Notify
  Notify (node:20-alpine)

light doctor

Check environment health.

bash
$ light doctor

Checks: Node.js version (>= 18), Docker installation, Docker daemon status, gVisor (runsc) availability, GPU support (nvidia-smi), Docker GPU plugin.

light config

Manage global configuration stored at ~/.light/config.json.

bash
$ light config <get|set|list|path> [key] [value]

| Subcommand | Description |
| --- | --- |
| list, show | Show full config |
| path | Show config file path |
| get <key> | Get a value (supports dot notation) |
| set <key> <value> | Set a value (JSON or string) |

light remote

Manage remote light-process servers. Run with no arguments to list configured remotes.

bash
$ light remote <bind|set-key|use|forget|ping|ls|run|delete|rm> [...]

| Subcommand | Description |
| --- | --- |
| bind <url> | Register a remote (--key, --name) |
| set-key <key> | Update the API key on an existing remote (--name) |
| use <name> | Set default remote |
| forget <name> | Remove a remote |
| ping | Ping the current remote |
| ls | List workflows on remote (--json) |
| run <id> | Run a workflow (--input, --input-file, --json) |
| delete\|rm <id> | Delete a workflow (--soft, --yes) |

light pull

Pull workflow(s) from a remote server into local folders.

bash
$ light pull <id> [--path <dir>] [--force] [--remote <name>]
$ light pull --all [--force]

| Flag | Description |
| --- | --- |
| --path <dir> | Target directory (default: ./<id>) |
| --force | Overwrite existing target |
| --remote <name> | Use a specific remote profile |
| --all | Pull all workflows from the remote |

light push

Push local workflow folder(s) to a remote server. With no arguments, pushes all workflows in the current directory.

bash
$ light push [<name>] [--path <dir>] [--remote <name>] [--yes]

| Flag | Description |
| --- | --- |
| --path <dir> | Workflow folder path |
| --remote <name> | Use a specific remote profile |
| --yes, -y | Skip confirmation prompts |

light link

Manage links in a workflow folder. Without flags, opens workflow.json in $EDITOR.

bash
$ light link <workflow-dir> [--from <id> --to <id>] [--when <json>]
$ light link <workflow-dir> --edit <link-id> [--when <json>] [--data <json>]
$ light link <workflow-dir> --list
$ light link <workflow-dir> --remove <link-id>

| Flag | Description |
| --- | --- |
| --from <id> | Source node ID |
| --to <id> | Target node ID |
| --when <json> | Condition (MongoDB-style JSON) |
| --data <json> | Data to inject on the link |
| --max-iterations <n> | Max iterations for back-links |
| --edit <id> | Edit an existing link (combine with --when, --data, etc.) |
| --list | List existing links |
| --remove <id> | Remove a link by ID |
| --open | Open workflow.json in $EDITOR |

light list

List all workflows in a directory. Discovers both folder-based workflows and JSON files.

bash
$ light list [--dir <path>] [--json]

| Flag | Description |
| --- | --- |
| --dir <path> | Directory to scan (default: .) |
| --json | Output as JSON |

light pack

Convert a workflow folder into a single JSON file. The source folder is removed after packing.

bash
$ light pack [<folder>] [--to <file>] [--force] [--keep]

| Flag | Description |
| --- | --- |
| --to <file> | Output file path (default: <id>.json) |
| --force | Overwrite existing file |
| --keep | Keep the source folder after packing |

light unpack

Convert a JSON file into a workflow folder. The source JSON is removed after unpacking.

bash
$ light unpack <file> [--to <dir>] [--force] [--keep]

| Flag | Description |
| --- | --- |
| --to <dir> | Target directory (default: ./<id>) |
| --force | Overwrite existing directory |
| --keep | Keep the source JSON after unpacking |

light node

Manage node metadata - inspect node info or edit schemas interactively.

bash
$ light node info <dir> [--json]
$ light node schema <dir>
$ light node register <dir>
$ light node helpers <dir>

light node info

Show node metadata, input/output schema, and what it receives from upstream nodes. If a parent workflow.json exists, also displays incoming links with source node output schemas, conditions, and injected data.

light node schema

Reads .node.json in <dir> and lets you add, edit, or remove fields on the input and output schemas. Changes are written back to .node.json on save. Also regenerates lp.d.ts for editor autocomplete.

light node register

Register a node folder in the parent workflow.json. Adds the node to the nodes array if not already present.

light node helpers

Regenerate lp.d.ts from the node's schema. Useful after manually editing .node.json. The lp.d.ts file gives your editor autocomplete on input fields and send() parameters.

Examples

bash
# Show node info and what it receives
$ light node info ./my-node
$ light node info ./my-node --json

# Edit the schema of an existing node folder
$ light node schema ./my-node

# Edit the hello example
$ light node schema ./example/hello

# Regenerate lp.d.ts after editing .node.json by hand
$ light node helpers ./my-node

Global options

| Flag | Description |
| --- | --- |
| --version, -v | Show version |
| --help, -h | Show help for a command |

SDK Guide

Use light-process programmatically in Node.js to build, configure, and execute workflows.

Install

bash
$ npm install light-process

Basic workflow

javascript
import { Workflow, DockerRunner } from 'light-process';

const wf = new Workflow({ name: 'hello' });

const node = wf.addNode({ name: 'Greet', image: 'node:20-alpine' });
node.setCode((input) => ({ message: `Hello, ${input.name}!` }));

const result = await wf.execute(
  { name: 'World' },
  { runner: new DockerRunner() }
);

console.log(result.success); // true
console.log(result.results);

Multi-node pipeline

javascript
import { Workflow, Schema, DockerRunner } from 'light-process';

const wf = new Workflow({ name: 'pipeline' });

// Node 1: validate
const validate = wf.addNode({ name: 'Validate', image: 'node:20-alpine' });
validate.inputs = Schema.object({ email: Schema.string() }, ['email']);
validate.setCode((input) => ({
  valid: input.email.includes('@'),
  email: input.email,
}));

// Node 2: process (only runs if valid)
const process = wf.addNode({ name: 'Process', image: 'node:20-alpine' });
process.setCode((input) => ({
  processed: true,
  email: input.email,
}));

// Node 3: reject (only runs if invalid)
const reject = wf.addNode({ name: 'Reject', image: 'node:20-alpine' });
reject.setCode((input) => ({
  rejected: true,
  reason: 'Invalid email',
}));

// Conditional links
wf.addLink({
  from: validate.id,
  to: process.id,
  when: { valid: true },
});

wf.addLink({
  from: validate.id,
  to: reject.id,
  when: { valid: { ne: true } },
});

const result = await wf.execute(
  { email: 'alice@example.com' },
  { runner: new DockerRunner() }
);

Node from folder

javascript
import { Node, loadDirectory, DEFAULT_IGNORE } from 'light-process';

const node = new Node({
  name: 'My Node',
  image: 'node:20-alpine',
  entrypoint: 'node index.js',
});

// Load all files from a directory
const files = loadDirectory('./my-node', { ignore: DEFAULT_IGNORE });
node.addFiles(files);

// Or use the shorthand
node.addFolder('./my-node', 'node index.js');

Load workflow from folder

javascript
import { loadWorkflowFromFolder, DockerRunner } from 'light-process';

const wf = loadWorkflowFromFolder('./my-workflow');
if (!wf) {
  console.error('Invalid workflow folder');
  process.exit(1);
}

const result = await wf.execute({}, { runner: new DockerRunner() });

Export workflow to folder

javascript
import { exportWorkflowToFolder } from 'light-process';

// After building a workflow programmatically
exportWorkflowToFolder(wf, './output/my-workflow');
// Creates workflow.json + node folders with .node.json and code files

Execution callbacks

javascript
const result = await wf.execute(input, {
  runner: new DockerRunner(),
  timeout: 30000, // 30s global timeout

  onNodeStart: (nodeId, nodeName) => {
    console.log(`Starting: ${nodeName}`);
  },

  onNodeComplete: (nodeId, nodeName, success, duration) => {
    console.log(`${nodeName}: ${success ? 'ok' : 'failed'} (${duration}ms)`);
  },

  onLog: (nodeId, nodeName, log) => {
    console.log(`[${nodeName}] ${log}`);
  },

  onStatusChange: (status) => {
    console.log(`Current: ${status.currentNodeName}`);
    console.log(`Done: ${status.completedNodes.length}`);
  },
});

DockerRunner options

javascript
const runner = new DockerRunner({
  memoryLimit: '512m',
  cpuLimit: '1.5',
  runtime: 'runsc',       // 'runc', 'runsc' (gVisor), 'kata'
  gpu: 'all',             // false, 'all', number, or device ID
  verbose: true,
  tempDir: '/tmp/lp',
});

Node.setCode

Wraps a JavaScript function as node code. The function receives input as an argument and returns the output.

javascript
node.setCode((input) => {
  // input is the parsed JSON from stdin
  const result = { doubled: input.value * 2 };
  return result; // written to .lp-output.json
});

Limitations: closures and external variables are not available at runtime (the function is serialized to a string).
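
The limitation can be demonstrated directly. This sketch assumes only what the note above states, that the function travels as source text (for example via toString()), so a captured variable does not travel with it:

```javascript
// Why closures are lost: only the function's source text is shipped,
// so a captured variable is missing when the function is rebuilt.
const secret = 42;
const fn = (input) => ({ answer: secret + input.value }); // captures `secret`

const source = fn.toString(); // what serialization preserves
const rebuilt = new Function(`return (${source})`)(); // closure is gone

try {
  rebuilt({ value: 1 });
} catch (err) {
  console.log(err.name); // ReferenceError: secret is not defined
}
```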

Node.addHelper

Adds language-specific helper files (lp.js, lp.py) that provide input and send.

javascript
node.addHelper('javascript'); // adds lp.js
node.addHelper('python');     // adds lp.py
node.addHelper();             // adds all helpers
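
For intuition, a hypothetical minimal equivalent of the JavaScript helper might look like the sketch below: nodes receive input as JSON (on stdin, per the container lifecycle section) and send() writes the .lp-output.json file the runner collects. The real generated lp.js may differ; the function names here mirror the input/send vocabulary above:

```javascript
// Hypothetical minimal lp.js equivalent (illustrative, not the generated file).
import { readFileSync, writeFileSync } from 'node:fs';

function readInput(path = 0) {
  // path 0 = stdin; nodes receive their input as JSON on stdin
  return JSON.parse(readFileSync(path, 'utf8'));
}

function send(output, path = '.lp-output.json') {
  // the runner reads .lp-output.json after the container exits
  writeFileSync(path, JSON.stringify(output));
}
```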

Workflow serialization

javascript
// To JSON
const json = wf.toJSON();
const str = JSON.stringify(json, null, 2);

// From JSON
const restored = Workflow.fromJSON(json);

Error types

javascript
import {
  LightProcessError,
  LinkValidationError,
  CircularDependencyError,
  WorkflowTimeoutError,
} from 'light-process';

| Error | Thrown when |
| --- | --- |
| LinkValidationError | Invalid link (missing node, self-loop, cycle without maxIterations) |
| CircularDependencyError | No entry nodes in non-empty workflow |
| WorkflowTimeoutError | Execution exceeds timeout |

Workflows

A workflow is a directed acyclic graph (DAG) of nodes connected by links. Each node runs code in a Docker container.

Two formats

Workflows exist in two formats:

| Format | What | Use for |
| --- | --- | --- |
| Folder | Directory with workflow.json + node subfolders | Editing, git, push to server |
| JSON | Single .json file with everything embedded | Transport, sharing, API |

Use light pack to convert folder to JSON, light unpack for the reverse. Both remove the source by default (use --keep to preserve it). Use light list to see all workflows in a directory.

Folder structure

structure
my-workflow/                     # folder format (working copy)
  workflow.json                  # DAG definition
  node-a/
    .node.json                   # node config
    index.js                     # code
    lp.js                        # helper
  node-b/
    .node.json
    main.py
    lp.py
my-workflow.json                 # JSON format (portable)

workflow.json

Defines the DAG structure:

json
{
  "id": "my-workflow",
  "name": "My Workflow",
  "network": null,
  "nodes": [
    { "id": "node-a", "name": "Node A", "dir": "node-a" },
    { "id": "node-b", "name": "Node B", "dir": "node-b" }
  ],
  "links": [
    { "from": "node-a", "to": "node-b" }
  ]
}

| Field | Required | Description |
| --- | --- | --- |
| id | yes | Unique identifier |
| name | yes | Display name |
| network | no | Docker network for all nodes (null = lp-isolated) |
| nodes | yes | Array of node references (id, name, dir) |
| links | no | Array of links between nodes |

.node.json

Configures a single node:

json
{
  "id": "node-a",
  "name": "Node A",
  "image": "node:20-alpine",
  "entrypoint": "node index.js",
  "setup": ["npm install axios"],
  "timeout": 10000,
  "network": null,
  "inputs": null,
  "outputs": null
}

| Field | Required | Description |
| --- | --- | --- |
| id | yes | Unique identifier |
| name | yes | Display name |
| image | yes | Docker image |
| entrypoint | yes | Command to run |
| setup | no | Shell commands before entrypoint |
| timeout | no | Node timeout in ms (0 = none) |
| network | no | Override workflow network |
| inputs | no | JSON Schema for input validation |
| outputs | no | JSON Schema for output validation |

Links

Links connect nodes and control data flow:

json
{
  "from": "node-a",
  "to": "node-b",
  "when": { "status": "ok" },
  "data": { "extra": "value" },
  "maxIterations": null
}

| Field | Required | Description |
| --- | --- | --- |
| from | yes | Source node ID |
| to | yes | Target node ID |
| when | no | Condition on source output (see Conditions) |
| data | no | Extra data merged into target input |
| maxIterations | no | Loop limit for back-links |

Execution model

  1. Entry nodes (no incoming forward links) start first with the initial input
  2. Nodes in the same layer run in parallel via Promise.all()
  3. After a node completes, outgoing links are evaluated
  4. If a link has when, it only fires if the condition matches the output
  5. Target nodes start when all incoming links have data ready
  6. Multiple incoming links merge their outputs with Object.assign()
  7. Link data is merged on top of the source output
  8. If any node fails, the workflow stops
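
The layering in steps 1-2 can be sketched as follows. This ignores conditions, containers, and back-links; it only shows how nodes whose dependencies are satisfied form a layer that runs in parallel:

```javascript
// Sketch of layered scheduling (forward links only, no conditions):
// each layer contains nodes whose dependencies are all in earlier layers.
function layers(nodes, links) {
  const pending = new Set(nodes);
  const done = new Set();
  const result = [];
  while (pending.size > 0) {
    const layer = [...pending].filter((n) =>
      links.every((l) => l.to !== n || done.has(l.from))
    );
    if (layer.length === 0) throw new Error('cycle without maxIterations');
    layer.forEach((n) => { pending.delete(n); done.add(n); });
    result.push(layer); // these nodes run in parallel via Promise.all()
  }
  return result;
}

console.log(layers(
  ['validate', 'process', 'notify'],
  [{ from: 'validate', to: 'process' }, { from: 'process', to: 'notify' }]
));
// [ [ 'validate' ], [ 'process' ], [ 'notify' ] ]
```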

Back-links (loops)

A link that creates a cycle requires maxIterations:

json
{
  "from": "process",
  "to": "validate",
  "when": { "retry": true },
  "maxIterations": 3
}

Without maxIterations, adding a cycle throws LinkValidationError.
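
The iteration limit can be pictured as a per-link counter. A minimal sketch (this is not the engine's internal code, just the behavior the maxIterations field implies):

```javascript
// Sketch of the back-link guard: the loop-forming link carries its own
// counter, and stops firing once maxIterations is reached.
function makeLoopGuard(maxIterations) {
  let fired = 0;
  return () => {
    if (fired >= maxIterations) return false; // back-link no longer fires
    fired += 1;
    return true;
  };
}

const guard = makeLoopGuard(3);
console.log([guard(), guard(), guard(), guard()]); // [ true, true, true, false ]
```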

Network inheritance

Each node inherits the workflow-level network. A node-level network in .node.json overrides it; when neither is set, containers run on the default lp-isolated network (see Docker & Security).

Data flow

Input -> [Node A] -> output A
              |
              v  (merged with link.data)
         [Node B] -> output B
              |
              v
         [Node C] -> final output

When multiple nodes feed into one:

[Node A] -> output A -+
                      |-> merged input -> [Node C]
[Node B] -> output B -+

Merge order follows link evaluation order. Later values overwrite earlier ones.
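
The merge semantics above (step 6 and 7 of the execution model) reduce to Object.assign, with link.data applied last:

```javascript
// Input merge for a node with two incoming links: source outputs are
// merged in link order, then link.data is applied on top.
const outputA = { user: 'alice', score: 10 };
const outputB = { score: 25 };           // later link: overwrites score
const linkData = { source: 'link-b' };   // link.data wins last

const merged = Object.assign({}, outputA, outputB, linkData);
console.log(merged); // { user: 'alice', score: 25, source: 'link-b' }
```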

Conditions

Links support MongoDB-style when conditions to control routing based on node output.

Operators

| Operator | Example | Description |
| --- | --- | --- |
| (none) | { status: "ok" } | Exact equality |
| gt | { count: { gt: 5 } } | Greater than |
| gte | { count: { gte: 5 } } | Greater or equal |
| lt | { count: { lt: 10 } } | Less than |
| lte | { count: { lte: 10 } } | Less or equal |
| ne | { status: { ne: "error" } } | Not equal |
| in | { role: { in: ["admin", "mod"] } } | Value in array |
| exists | { token: { exists: true } } | Field exists |
| regex | { token: { regex: "^ok" } } | Regex match |
| or | { or: [{...}, {...}] } | Logical OR |

Logic

Multiple fields in a single when object are combined with AND; use the or operator for alternatives.

Examples

Simple equality

json
{ "status": "ok" }

Matches if output contains { "status": "ok" }.

Multiple conditions (AND)

json
{ "status": "ok", "count": { "gte": 10 } }

Matches if status is "ok" AND count is >= 10.

OR logic

json
{
  "or": [
    { "status": "ok" },
    { "status": "warning" }
  ]
}

Matches if status is "ok" OR "warning".

Field existence

json
{ "token": { "exists": true } }
// Matches if the output has a "token" field

{ "error": { "exists": false } }
// Matches if the output does NOT have an "error" field

Membership

json
{ "role": { "in": ["admin", "moderator", "owner"] } }

Numeric range

json
{ "score": { "gte": 0, "lte": 100 } }

Not equal

json
{ "status": { "ne": "error" } }
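
The operator semantics above can be captured in a small matcher. This is an illustrative sketch, not the engine's actual implementation, but it follows the table: plain values mean equality, operator objects are ANDed per field, and or takes alternatives:

```javascript
// Minimal sketch of the MongoDB-style matcher described above.
function matches(cond, output) {
  if (Array.isArray(cond.or)) return cond.or.some((c) => matches(c, output));
  return Object.entries(cond).every(([field, expected]) => {
    const value = output[field];
    if (expected === null || typeof expected !== 'object') {
      return value === expected; // exact equality
    }
    return Object.entries(expected).every(([op, arg]) => {
      switch (op) {
        case 'gt': return value > arg;
        case 'gte': return value >= arg;
        case 'lt': return value < arg;
        case 'lte': return value <= arg;
        case 'ne': return value !== arg;
        case 'in': return arg.includes(value);
        case 'exists': return (field in output) === arg;
        case 'regex': return new RegExp(arg).test(value);
        default: throw new Error(`Unknown operator: ${op}`);
      }
    });
  });
}

console.log(matches({ status: 'ok', count: { gte: 10 } }, { status: 'ok', count: 12 })); // true
console.log(matches({ or: [{ status: 'ok' }, { status: 'warning' }] }, { status: 'warning' })); // true
console.log(matches({ error: { exists: false } }, { status: 'ok' })); // true
```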

Usage in links

workflow.json

json
{
  "links": [
    {
      "from": "validate",
      "to": "process",
      "when": { "valid": true, "score": { "gte": 80 } }
    },
    {
      "from": "validate",
      "to": "reject",
      "when": { "valid": { "ne": true } }
    }
  ]
}

SDK

javascript
wf.addLink({
  from: validate.id,
  to: process.id,
  when: { valid: true, score: { gte: 80 } },
});

Validation

Conditions are validated when a link is added. Unknown operators throw LinkValidationError:

output
Link "my-link" has invalid 'when' condition: Unknown operator: foo

Docker & Security

Each node runs in an isolated Docker container with security hardening.

Container lifecycle

  1. Node files are written to a temp directory
  2. An entrypoint script is generated from setup + entrypoint
  3. docker run starts the container with volume mounts
  4. Input is piped to stdin as JSON
  5. Output is read from .lp-output.json in the container
  6. Container is removed after execution (--rm)

DockerRunner options

javascript
const runner = new DockerRunner({
  memoryLimit: '512m',     // --memory flag
  cpuLimit: '1.5',         // --cpus flag
  runtime: 'runsc',        // --runtime flag
  gpu: 'all',              // --gpus flag
  noNewPrivileges: true,   // --security-opt (default: true)
  verbose: false,          // log Docker commands
  tempDir: '/tmp/lp',      // custom temp directory
});

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| memoryLimit | string | none | Container memory limit (e.g. "256m", "2g") |
| cpuLimit | string | none | CPU cores (e.g. "0.5", "2") |
| runtime | string | "runc" | Container runtime: "runc", "runsc" (gVisor), "kata" |
| gpu | boolean/string/number | false | GPU access: false, "all", count, device ID |
| noNewPrivileges | boolean | true | Prevent privilege escalation |
| verbose | boolean | false | Log Docker commands |
| tempDir | string | OS temp | Directory for node files |
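
For intuition, the option-to-flag mapping above can be sketched as a small function. This is illustrative only; the real runner builds its command internally, and the -i flag here is an assumption based on input being piped to stdin:

```javascript
// Rough sketch of how DockerRunner options map onto docker run flags.
function dockerArgs(opts = {}) {
  const args = ['run', '--rm', '-i']; // -i assumed: input is piped to stdin
  if (opts.memoryLimit) args.push('--memory', opts.memoryLimit);
  if (opts.cpuLimit) args.push('--cpus', opts.cpuLimit);
  if (opts.runtime && opts.runtime !== 'runc') args.push('--runtime', opts.runtime);
  if (opts.gpu) args.push('--gpus', String(opts.gpu));
  if (opts.noNewPrivileges !== false) args.push('--security-opt', 'no-new-privileges');
  return args;
}

console.log(dockerArgs({ memoryLimit: '512m', runtime: 'runsc' }).join(' '));
// run --rm -i --memory 512m --runtime runsc --security-opt no-new-privileges
```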

Security hardening

Capabilities dropped

The following dangerous capabilities are always dropped:

Other security measures

Networks

Default: lp-isolated

By default, all containers run on a shared lp-isolated bridge network with inter-container communication disabled (ICC=false).

bash
# Created automatically on first use:
$ docker network create --driver bridge \
  -o com.docker.network.bridge.enable_icc=false \
  lp-isolated

Network options

| Value | Effect |
| --- | --- |
| null | Use workflow network (default: lp-isolated) |
| "none" | No network access |
| "host" | Host network (no isolation) |
| "my-net" | Custom Docker network |

Set per-node in .node.json:

json
{ "network": "none" }

Or per-workflow in workflow.json:

json
{ "network": "my-custom-network" }

Node network overrides workflow network.

GPU support

javascript
const runner = new DockerRunner({ gpu: 'all' });

| Value | Docker flag |
| --- | --- |
| false | no GPU |
| 'all' | --gpus all |
| 2 | --gpus 2 |
| '"device=0,1"' | --gpus "device=0,1" |

Requires NVIDIA Container Toolkit. Check with light doctor.

Runtimes

| Runtime | Description |
| --- | --- |
| runc | Default OCI runtime |
| runsc | gVisor sandbox (stronger isolation) |
| kata | Kata Containers (VM-level isolation) |

Check availability with light doctor.

Container naming

Containers are named lp-<nodeId>-<timestamp>-<seq> for easy identification:

output
lp-hello-1712345678901-0

Cancellation

Workflows support cancellation via AbortController:

javascript
const controller = new AbortController();

// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000);

const result = await wf.execute(input, {
  runner,
  signal: controller.signal,
});

Cancelled containers are killed with docker kill.

A2A Protocol

Light Process implements the A2A protocol (Agent-to-Agent) for exposing workflows as AI agents.

Start the server

bash
# Start the server (no auth - public)
$ light serve --port 3000

# Enable Bearer auth by setting LP_API_KEY
$ LP_API_KEY=my-secret-key light serve --port 3000

This starts the web dashboard, the REST API, and the A2A JSON-RPC endpoint on a single port.

API key authentication is opt-in via LP_API_KEY. When enabled, POST routes and /api/* routes require a Bearer token in the Authorization header. See light serve - Authentication for details.

Agent discovery

bash
$ curl http://localhost:3000/.well-known/agent-card.json
json
{
  "name": "Light Process",
  "description": "Workflow engine with Docker container isolation",
  "url": "http://localhost:3000",
  "protocolVersion": "0.2.1",
  "capabilities": {
    "streaming": true,
    "pushNotifications": false,
    "stateTransitionHistory": true
  },
  "defaultInputModes": ["application/json"],
  "defaultOutputModes": ["application/json"],
  "skills": [
    {
      "id": "my-workflow",
      "name": "My Workflow",
      "description": "Workflow: My Workflow (3 nodes)",
      "tags": ["workflow"]
    }
  ]
}

Each registered workflow appears as a skill.

Send a task

bash
$ curl -X POST http://localhost:3000 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-key>" \
  -d '{
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",
    "params": {
      "message": {
        "messageId": "msg-1",
        "role": "user",
        "parts": [{
          "kind": "data",
          "data": {
            "workflowId": "my-workflow",
            "name": "Alice"
          }
        }]
      }
    }
  }'
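
The same request can be built from Node.js. The payload shape follows the curl example above; the endpoint URL and API key are placeholders for your own deployment, and buildSendMessage is a hypothetical helper, not part of the light-process API:

```javascript
// Build the message/send JSON-RPC payload shown in the curl example.
function buildSendMessage(workflowId, data, id = '1') {
  return {
    jsonrpc: '2.0',
    id,
    method: 'message/send',
    params: {
      message: {
        messageId: `msg-${id}`,
        role: 'user',
        parts: [{ kind: 'data', data: { workflowId, ...data } }],
      },
    },
  };
}

const payload = buildSendMessage('my-workflow', { name: 'Alice' });

// Then POST it (Node 18+ fetch), e.g.:
// await fetch('http://localhost:3000', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json',
//              Authorization: 'Bearer <your-api-key>' },
//   body: JSON.stringify(payload),
// });
console.log(payload.params.message.parts[0].data); // { workflowId: 'my-workflow', name: 'Alice' }
```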

Workflow resolution

The executor selects a workflow using these rules:

  1. If workflowId is in the data, use that workflow
  2. If workflowName is in the data, match by name (case-insensitive)
  3. If only one workflow is registered, use it automatically
  4. Otherwise, return an error with available workflow names
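
The four rules above amount to a small resolver. A sketch under those rules (not the actual executor code):

```javascript
// Sketch of the workflow resolution rules listed above.
function resolveWorkflow(data, workflows) {
  if (data.workflowId) {
    return workflows.find((w) => w.id === data.workflowId) || null; // rule 1
  }
  if (data.workflowName) {
    const name = data.workflowName.toLowerCase();                    // rule 2
    return workflows.find((w) => w.name.toLowerCase() === name) || null;
  }
  if (workflows.length === 1) return workflows[0];                   // rule 3
  return null; // rule 4: ambiguous, caller reports available names
}

const registered = [{ id: 'etl', name: 'ETL' }, { id: 'report', name: 'Report' }];
console.log(resolveWorkflow({ workflowName: 'etl' }, registered).id); // etl
console.log(resolveWorkflow({}, registered)); // null (ambiguous)
```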

REST API

bash
# List all workflows (auth required)
$ curl -H "Authorization: Bearer <your-api-key>" http://localhost:3000/api/workflows

# Get workflow detail (auth required)
$ curl -H "Authorization: Bearer <your-api-key>" http://localhost:3000/api/workflows/my-workflow-id

# Health check (no auth required)
$ curl http://localhost:3000/health

# Add a workflow dynamically (auth required)
$ curl -X POST -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"id":"my-wf","name":"My Workflow","nodes":[...],"links":[]}' \
  http://localhost:3000/api/workflows

# Remove a workflow (auth required)
$ curl -X DELETE -H "Authorization: Bearer <your-api-key>" \
  http://localhost:3000/api/workflows/my-wf

SDK usage

javascript
import { createA2AServer, Workflow, DockerRunner } from 'light-process';

const runner = new DockerRunner();
const app = createA2AServer({ port: 3000, runner });

// Register workflows
app.registerWorkflow(myWorkflow);

// Start listening
await app.listen();

// Later: stop
await app.close();

Server options

javascript
createA2AServer({
  port: 3000,              // listen port (default: 3000)
  host: '0.0.0.0',         // bind host (default: '0.0.0.0')
  runner: new DockerRunner(), // shared runner instance
  card: {
    name: 'My Agent',       // agent name
    description: 'Custom',  // agent description
    url: 'https://my.host', // public URL
  },
});

Task lifecycle

When a task is received via message/send:

  1. working - workflow execution starts
  2. working - status update per node start
  3. artifact-update - result per node completion
  4. completed or failed - final status with workflow result

CORS

The server allows cross-origin requests, so browser-based clients on other origins can call the API.

Schema Validation

Nodes can define JSON Schema for input and output validation. Validation runs automatically during workflow execution.

Schema helpers

javascript
import { Schema } from 'light-process';

Schema.string()                  // { type: 'string' }
Schema.string({ minLength: 1 }) // { type: 'string', minLength: 1 }
Schema.number()                  // { type: 'number' }
Schema.number({ minimum: 0 })   // { type: 'number', minimum: 0 }
Schema.integer()                 // { type: 'integer' }
Schema.boolean()                 // { type: 'boolean' }
Schema.array(Schema.string())   // { type: 'array', items: { type: 'string' } }
Schema.object(props, required)   // { type: 'object', properties, required }

Define on a node

SDK

javascript
node.inputs = Schema.object({
  name: Schema.string({ minLength: 1 }),
  age: Schema.integer({ minimum: 0, maximum: 150 }),
  tags: Schema.array(Schema.string(), { minItems: 1 }),
  active: Schema.boolean(),
}, ['name', 'age']); // required fields

node.outputs = Schema.object({
  result: Schema.string(),
  score: Schema.number({ minimum: 0, maximum: 100 }),
});

.node.json

json
{
  "inputs": {
    "type": "object",
    "properties": {
      "name": { "type": "string", "minLength": 1 },
      "age": { "type": "integer", "minimum": 0 }
    },
    "required": ["name", "age"]
  },
  "outputs": {
    "type": "object",
    "properties": {
      "result": { "type": "string" }
    }
  }
}

Validation behavior

Input schemas are checked before a node runs; output schemas are checked after it completes. A validation failure fails the node, which stops the workflow.

Supported JSON Schema properties

| Property | Applies to | Description |
| --- | --- | --- |
| type | all | "string", "number", "integer", "boolean", "array", "object" |
| properties | object | Field definitions |
| required | object | Required field names |
| items | array | Item schema |
| minItems | array | Minimum array length |
| maxItems | array | Maximum array length |
| minimum | number/integer | Minimum value |
| maximum | number/integer | Maximum value |
| minLength | string | Minimum string length |
| maxLength | string | Maximum string length |
| pattern | string | Regex pattern |
| enum | all | Allowed values |
| default | all | Default value |
| description | all | Human-readable description |
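
To make the table concrete, here is a toy validator covering a few of the properties (type, required, minLength, minimum, enum). It is not the engine's validator, which supports the full set above, but it produces error strings in the same spirit as the error format below:

```javascript
// Toy JSON Schema validator for a subset of the supported properties.
function check(value, schema, path = 'input') {
  const errors = [];
  const actual = Number.isInteger(value) ? 'integer' : typeof value;
  if (schema.type && actual !== schema.type &&
      !(schema.type === 'number' && actual === 'integer')) {
    errors.push(`${path}: must be ${schema.type}`);
  }
  if (schema.minLength !== undefined && value.length < schema.minLength) {
    errors.push(`${path}: must NOT have fewer than ${schema.minLength} characters`);
  }
  if (schema.minimum !== undefined && value < schema.minimum) {
    errors.push(`${path}: must be >= ${schema.minimum}`);
  }
  if (schema.enum && !schema.enum.includes(value)) {
    errors.push(`${path}: must be one of ${schema.enum.join(', ')}`);
  }
  if (schema.type === 'object') {
    for (const req of schema.required || []) {
      if (!(req in value)) errors.push(`${path}.${req}: is required`);
    }
    for (const [key, sub] of Object.entries(schema.properties || {})) {
      if (key in value) errors.push(...check(value[key], sub, `${path}.${key}`));
    }
  }
  return errors;
}

console.log(check({ name: '' }, {
  type: 'object',
  properties: { name: { type: 'string', minLength: 1 } },
  required: ['name'],
}));
// [ 'input.name: must NOT have fewer than 1 characters' ]
```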

Error format

Validation errors include the field path:

output
Input validation failed: input.name: must NOT have fewer than 1 characters
Output validation failed: output.score: must be >= 0

Manual validation

javascript
import { validate, validateInput, validateOutput } from 'light-process';

const schema = Schema.object({
  name: Schema.string({ minLength: 1 }),
}, ['name']);

const result = validateInput({ name: '' }, schema);
// { valid: false, errors: ['input.name: must NOT have fewer than 1 characters'] }

const result2 = validateInput({ name: 'Alice' }, schema);
// { valid: true, errors: [] }

Versioning

Light Process follows Semantic Versioning (semver).

Format

format
MAJOR.MINOR.PATCH-PRERELEASE

| Part | Meaning | Example |
| --- | --- | --- |
| MAJOR | Breaking API changes | 1.0.0 -> 2.0.0 |
| MINOR | New features (backwards-compatible) | 0.1.0 -> 0.2.0 |
| PATCH | Bug fixes (backwards-compatible) | 0.1.0 -> 0.1.1 |
| PRERELEASE | Pre-release tag | 0.1.0-alpha.0 |

Pre-1.0 (current)

While the major version is 0, the API is not considered stable. Minor version bumps may include breaking changes.

Release lifecycle

lifecycle
0.1.0-alpha.0   First alpha - core features, may have bugs
0.1.0-alpha.2   Second alpha - bug fixes from alpha.1
0.1.0-beta.1    Feature-complete, testing phase
0.1.0-beta.2    Bug fixes from beta.1
0.1.0-rc.1      Release candidate - final testing
0.1.0            Stable release
0.1.1            Patch - bug fix
0.2.0            Minor - new features
1.0.0            First major - stable API commitment

Pre-release ordering

npm and semver sort pre-releases correctly:

order
0.1.0-alpha.0 < 0.1.0-alpha.2 < 0.1.0-beta.1 < 0.1.0-rc.1 < 0.1.0
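
A simplified sketch of why that ordering holds: a pre-release sorts before its stable release, and pre-release identifiers compare by tag then by number. (The full semver precedence rules are richer; this comparator only covers the single-tag.number shape shown above.)

```javascript
// Simplified pre-release comparator ('' represents the stable release).
function comparePrerelease(a, b) {
  if (a === b) return 0;
  if (!a) return 1;  // stable release sorts after any pre-release
  if (!b) return -1;
  const [tagA, numA] = a.split('.');
  const [tagB, numB] = b.split('.');
  if (tagA !== tagB) return tagA < tagB ? -1 : 1; // alpha < beta < rc
  return Number(numA) - Number(numB);
}

const order = ['rc.1', 'alpha.2', '', 'beta.1', 'alpha.0'].sort(comparePrerelease);
console.log(order); // [ 'alpha.0', 'alpha.2', 'beta.1', 'rc.1', '' ]
```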

Installing pre-releases

bash
# Install latest stable (skips pre-releases)
$ npm install light-process

# Install specific pre-release
$ npm install light-process@0.1.0-alpha.0

# Install latest including pre-releases
$ npm install light-process@next

Current version

Check package.json for the current version - it is the single source of truth.

What's included

What's not stable yet