Inside our Workflow Engine

Ethan · Jan 20, 2026

Exploring the intricacies of Flint's workflow engine
The workflow engine is the heart of Flint - where triggers, actions, and conditions connect to automate complex business processes. Here's how we built it.

Definitions

A workflow is a directed acyclic graph (DAG). Every workflow starts with a trigger, which points to a step, which points to another step, and so on. The data structure is deceptively simple:
```ts
interface WorkflowDefinition {
  trigger: {
    type: string;
    then: { type: "sequential"; step: string };
  };
  steps: Record<string, {
    type: string;
    parameters: Record<string, unknown>;
    then?: SequentialThen | ConditionalThen;
  }>;
}
```
Conditions branch the graph. When a step returns { branch: "approved" }, the engine looks up then.branches.approved to find the next step. This lets if/else branches, switch statements, and approval flows all share the same underlying mechanism.
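The two `then` shapes aren't spelled out above, so here's a hedged sketch of how they might look, along with the lookup the engine performs. Everything beyond the `type`, `step`, and `branches` names mentioned in the text is an assumption:

```ts
// Hypothetical shapes for the `then` union (fields beyond those named
// in the prose are assumptions, not Flint's exact types).
interface SequentialThen {
  type: "sequential";
  step: string; // id of the next step
}

interface ConditionalThen {
  type: "conditional";
  // Maps a branch label returned by the step (e.g. "approved")
  // to the id of the step to execute next.
  branches: Record<string, string>;
}

// Resolving the next step id after a step completes:
function nextStep(
  then: SequentialThen | ConditionalThen,
  branch?: string
): string | undefined {
  if (then.type === "sequential") return then.step;
  return branch !== undefined ? then.branches[branch] : undefined;
}
```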
We validate the graph at save time, checking for cycles, dangling references, and invalid CEL expressions, so runtime execution can trust the structure.
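The cycle check is a standard depth-first search over the step graph. This is a minimal illustration under that assumption, not Flint's actual validator (which also handles dangling references and CEL syntax):

```ts
// Minimal save-time cycle detection: DFS with a "currently visiting"
// set; revisiting a node on the current path means a back edge.
type StepGraph = Record<string, string[]>; // step id -> possible next step ids

function hasCycle(graph: StepGraph): boolean {
  const visiting = new Set<string>();
  const done = new Set<string>();

  function visit(id: string): boolean {
    if (done.has(id)) return false;
    if (visiting.has(id)) return true; // back edge => cycle
    visiting.add(id);
    for (const next of graph[id] ?? []) {
      if (visit(next)) return true;
    }
    visiting.delete(id);
    done.add(id);
    return false;
  }

  return Object.keys(graph).some((id) => visit(id));
}
```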

Nodes

Every trigger, action, and condition is a "node" with the same general interface:
```ts
interface NodeDefinition<TInput, TOutput> {
  name: string;
  input: z.ZodType<TInput>;
  output: z.ZodType<TOutput>;
  handler: (ctx: NodeContext<TInput>) => Promise<TOutput>;
}
```
Nodes are pure functions with typed inputs and outputs. The NodeContext provides services (file storage, logging, pause/resume capabilities) without the node needing to know about the broader execution environment.
Thanks to this node registry, adding a new action is trivial:
```ts
export const httpCall = defineNode({
  name: "http_call",
  input: z.object({
    url: field.text("URL").build(),
    method: field.select("Method", ["GET", "POST", "PUT", "DELETE"]).build(),
  }),
  output: z.object({
    status: field.number("Status").build(),
    body: z.any(),
  }),
  handler: async (ctx) => {
    const response = await fetch(ctx.input.url, { method: ctx.input.method });
    return { status: response.status, body: await response.text() };
  },
});
```
The schema does double duty: runtime validation and UI generation. The platform reads node schemas to render configuration panels automatically.

Schema-Driven UI Generation

Here’s where it gets more interesting: the same (extended) Zod schemas that validate runtime data also drive the UI.
When you drag an HTTP Call action onto the canvas, the platform doesn’t rely on a hardcoded form for that action. Instead, it reads the node’s input schema, walks the Zod structure, and generates the form fields dynamically. A field.text("URL") becomes a text input. A field.select("Method", ["GET", "POST"]) becomes a dropdown. A z.array() renders an “add item” control with nested fields.
The field helper attaches UI metadata to schemas (labels, placeholders, descriptions) without sacrificing type inference. Under the hood, it’s still just Zod. The practical result is that adding a new node, even with complex configuration, typically doesn’t require frontend changes: define the schema and the UI follows.
Outputs work the same way. When you configure a downstream step and need to reference data from a previous step, the variable picker reads that step’s output schema and renders a tree of available fields. Types carry through the whole system.
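A simplified picture of that schema walk, with plain objects standing in for the extended Zod schemas. The descriptor and control shapes here are illustrative, not Flint's actual `field.*` metadata:

```ts
// Sketch: map field descriptors (stand-ins for extended Zod schemas)
// to UI controls. Shapes are illustrative only.
type FieldSpec =
  | { kind: "text"; label: string }
  | { kind: "select"; label: string; options: string[] }
  | { kind: "number"; label: string };

type FormControl =
  | { control: "input"; label: string }
  | { control: "dropdown"; label: string; options: string[] }
  | { control: "number-input"; label: string };

function toControl(spec: FieldSpec): FormControl {
  switch (spec.kind) {
    case "text":
      return { control: "input", label: spec.label };
    case "select":
      return { control: "dropdown", label: spec.label, options: spec.options };
    case "number":
      return { control: "number-input", label: spec.label };
  }
}

// Walk an input schema's fields and emit one control per field.
function buildForm(schema: Record<string, FieldSpec>): FormControl[] {
  return Object.values(schema).map(toControl);
}
```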

Expressions & Variables

Workflows need to pass data between steps. We use CEL (Common Expression Language), Google's expression language for policy evaluation, best known for its use in Kubernetes.
When you write {{steps.extract.vendor_name}} in a step parameter, the engine:
  1. Extracts all {{...}} zones from the string
  2. Parses each as CEL syntax
  3. Evaluates against a context containing trigger output and all completed step outputs
  4. Replaces zones with results (or returns raw values for pure expressions)
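The steps above can be sketched as follows. A real CEL evaluator would replace `resolvePath` here; this stand-in only handles dotted identifier lookups, and the function names are illustrative:

```ts
// Stand-in for CEL evaluation: resolve a dotted path like
// "steps.extract.vendor_name" against a context object.
function resolvePath(expr: string, ctx: Record<string, unknown>): unknown {
  return expr
    .trim()
    .split(".")
    .reduce<unknown>(
      (acc, key) => (acc as Record<string, unknown> | undefined)?.[key],
      ctx
    );
}

// Extract {{...}} zones and substitute results. A parameter that is
// exactly one expression returns the raw value, preserving non-string
// types; mixed templates stringify each result in place.
function interpolate(template: string, ctx: Record<string, unknown>): unknown {
  const zones = [...template.matchAll(/\{\{(.*?)\}\}/g)];
  if (zones.length === 1 && zones[0][0] === template.trim()) {
    return resolvePath(zones[0][1], ctx);
  }
  return template.replace(/\{\{(.*?)\}\}/g, (_, expr: string) =>
    String(resolvePath(expr, ctx))
  );
}
```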
CEL also supports basic data manipulation (filter(), map(), etc.), making it easy to ensure all inputs are in the correct format.
The parser validates expressions at save time, so typos in variable references fail before the workflow runs.

Meta Types

Workflows deal with more than strings and numbers. Files, documents, templates - these are richer values that need consistent handling across the system.
To support that, we built Meta Types. A file isn’t just a pointer like { download_url: "..." }. It’s a meta-typed value with identity and behavior the system understands.
  • A file renders as a preview with download link
  • A document exposes its fields in the variable picker
  • A template-backed value gets a full editing flow in interventions
Each component just asks the registry "what is this?" and gets the right behavior back.
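A hedged sketch of what that registry lookup might look like. The handler shape, tag names, and return values are illustrative, not Flint's actual API:

```ts
// Sketch of a Meta Type registry: values carry a type tag, and
// components ask the registry how to handle them.
interface MetaTypeHandler {
  render(value: unknown): string; // e.g. a component id to mount
}

const registry = new Map<string, MetaTypeHandler>();

registry.set("file", {
  render: () => "file-preview-with-download",
});
registry.set("document", {
  render: () => "document-field-tree",
});

// "What is this?" — look up the tag, fall back to plain rendering.
function renderValue(value: { metaType?: string }): string {
  const handler = value.metaType ? registry.get(value.metaType) : undefined;
  return handler ? handler.render(value) : "plain-value";
}
```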

Execution

When a trigger fires, the engine:
  1. Identifies all nodes connected to the trigger
  2. Creates a run record with status running
  3. Executes the trigger node
  4. Walks the graph step by step, resolving expressions and executing handlers
  5. Stores each step's output for downstream expression resolution
  6. Marks the run completed (or failed / paused)
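The sequential core of that walk can be sketched as follows, with run records, error handling, pausing, and conditional branching omitted for brevity. Names and types here are illustrative:

```ts
// Sketch of the run loop: walk from the first step, execute each
// handler against the accumulated outputs, store its result.
type StepHandler = (outputs: Record<string, unknown>) => Record<string, unknown>;

interface RunStep {
  handler: StepHandler;
  next?: string; // sequential-only for this sketch
}

function runWorkflow(
  firstStep: string,
  steps: Record<string, RunStep>,
  triggerOutput: Record<string, unknown>
): Record<string, Record<string, unknown>> {
  // Each step sees all prior outputs for expression resolution.
  const outputs: Record<string, Record<string, unknown>> = { trigger: triggerOutput };
  let current: string | undefined = firstStep;
  while (current) {
    const step = steps[current];
    if (!step) break; // dangling reference; save-time validation prevents this
    outputs[current] = step.handler(outputs);
    current = step.next;
  }
  return outputs;
}
```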
Errors bubble up and mark the run as failed. But one special error triggers a different path: the engine serializes all state, stops, and waits for a signal to resume. More on that in a future post.

Thank you for taking the time to read my little breakdown of our workflow system. I’ll likely write a mini part 2 on pausing workflows: how we snapshot state, resume safely, and what that enables (interventions, delays, and other human-in-the-loop steps).