Context
Up to this point the surfaces in the series have held the grammar without running it. S2 made the spec a build target. S3 made it a type system. S4 made it a guardrail. None of them executed the cycle. This article does.
LangGraph is the LangChain ecosystem's stateful graph framework. State is typed (Pydantic models work natively). Nodes read and write state. Edges are explicit. Interrupts pause execution and route to a human — the response resumes the graph at the same point with the human's input bound. Checkpointers persist state across interrupts so a paused cycle can be resumed minutes or days later. Each property maps to a piece of the spec: typed state matches the Cycle from S3, explicit edges match the cycle order from §1.2, interrupts match the receptive criterion at S and the holding of φ at Q.
The architecture this article ports: five phase nodes wired linearly, receptive moments interrupt inside the relevant node rather than between nodes, and the validator from S4 runs at the end of the graph. The Cycle from S3 is the state object — no new type. What S3 made constructible, S5 makes executable, with the asymmetry from S1 enforced at runtime by the structure of the graph itself.
Why LangGraph
The alternatives. A bare LangChain runnable chain composes well but has no native state object — you carry context as a dict, and the contract is whatever convention you adopt. LangGraph adds typed state and explicit interrupts. CrewAI and AutoGen are agent-team frameworks that subordinate the cycle structure to the agents — backwards for our purpose. Anthropic's Agent SDK (S6 in this series) takes a different idiom (phases as tools the agent calls); valid, different shape, gets its own article.
LangGraph fits 5QLN because state, edges, and interrupts are all first-class. The state IS the Cycle. The edges ARE the cycle order. The interrupts ARE the receptive criteria the spec names. Nothing has to be smuggled through prose conventions.
State is the Cycle from S3
# fivqln/langgraph/state.py
"""
The graph's state IS the Cycle from S3. No wrapper, no auxiliary type.
LangGraph supports Pydantic models as state natively.
"""
from fivqln.types import Cycle
# Re-export for clarity. The graph state is exactly what S3 defined.
GraphState = Cycle
Three properties follow from this choice. First, anything produced by the graph is already a valid Cycle — it can be passed to validate() from S4 directly, persisted to JSON, sent across an MCP boundary in S7, or consumed by any other surface. Second, the type contract from S3 is enforced at every node return: returning a state that violates the Completion Rule fails Pydantic validation before it ever reaches the next edge. Third, drift between S3 and S5 is impossible by construction — there is no second type to drift.
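The third property — no drift by construction — can be pinned down mechanically. A minimal sketch, using a stand-in class in place of the real fivqln.types.Cycle: because GraphState is an alias rather than a subclass or wrapper, both names resolve to the same type object, so any isinstance check, JSON schema, or validator that accepts one accepts the other.

```python
# Stand-in for fivqln.types.Cycle; the real one is the S3 Pydantic model.
class Cycle:
    pass

# The S5 re-export: an alias, not a wrapper and not a subclass.
GraphState = Cycle

# One type object under two names — there is no second type to drift.
assert GraphState is Cycle
assert isinstance(GraphState(), Cycle)
```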
The receptive interrupts — never an LLM call
# fivqln/langgraph/nodes.py
"""
Phase nodes. Each implements one phase's decoding from D1.
Receptive moments (S, φ within Q, ∞0' confirmation within V) use
LangGraph's `interrupt()` to pause the graph and route to the human.
The LLM is never present at receptive moments — those nodes do not
even take an llm parameter.
"""
from datetime import datetime, timezone
from typing import Optional
from langchain_core.language_models import BaseChatModel
from langgraph.types import interrupt
from pydantic import BaseModel, Field
from fivqln.types import (
Cycle, ValidatedSpark, ValidatedPattern, ResonantKey, Flow,
Benefit, FractalSeed, EnrichedReturn, FormationEntry,
FormationTrail, Phase,
)
from fivqln.symbols import (
CoreEssence, SelfNature, UniversalPotential,
NaturalIntersection, NaturalGradient,
)
def _append_to_trail(state: Cycle, entry: FormationEntry) -> FormationTrail:
"""Functional append — returns a new trail with the entry added."""
return FormationTrail(entries=list(state.trail.entries) + [entry])
def start_node(state: Cycle) -> dict:
"""S = ∞0 → ?
Per D1 §2.1: X must arrive from ∞0, not be generated from K. This
node CANNOT call an LLM — its signature contains no llm parameter.
L2 corruption (Generating) is structurally forbidden by the absence
of the LLM, not by discipline.
The interrupt() call halts the graph. The host application receives
the payload, presents it to the human, and resumes the graph by
passing the human's response back. The response binds to the
interrupt() return value.
"""
if state.spark is not None:
return {} # already received in a prior turn
response = interrupt({
"phase": "S",
"spec_ref": "§2.1",
"instruction": (
"Hold ∞0. Resist closing the space. Nothing is sought. "
"When something stirs from the open space, name what arrived "
"as a question."
),
"fields_required": ["question", "held_by"],
})
received_at = datetime.now(timezone.utc)
spark = ValidatedSpark(
question=response["question"],
received_at=received_at,
held_by=response["held_by"],
)
new_trail = _append_to_trail(state, FormationEntry(
timestamp=received_at,
phase=Phase.S,
operation="received question from ∞0 via human attestation",
output_excerpt=spark.question,
))
return {"spark": spark, "trail": new_trail}
The node has no llm parameter. This is not documentation — it is the architecture refusing the corruption. A developer reading the function cannot accidentally call an LLM here because there is no LLM in scope. The receptive criterion from §2.1 is enforced by Python's name resolution.
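The guarantee is also cheap to pin down in a test. A sketch using a stand-in function with start_node's signature shape — the real test would import the actual node:

```python
import inspect

def start_node(state) -> dict:  # stand-in mirroring start_node's signature
    return {}

# A receptive node must not even accept an LLM; the check is one line.
params = inspect.signature(start_node).parameters
assert "llm" not in params
```

The same assertion, run over every receptive node in the project's test suite, turns "the LLM is never present at receptive moments" from a docstring claim into a CI failure.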
The LLM-driven nodes
class _GResponse(BaseModel):
"""Structured output schema for G's LLM call."""
alpha_description: str = Field(
description="The irreducible core (α) found within X. Removing α "
"should make X collapse."
)
expressions: list[str] = Field(
min_length=1,
description="Self-similar expressions {α'} of α at other scales "
"or in other domains. Each must be self-similar to α, "
"not merely topically related.",
)
pattern_description: str = Field(
description="The validated pattern Y — α named, ≡ tested, "
"{α'} confirmed across scales."
)
def growth_node(state: Cycle, llm: BaseChatModel, lens: Optional[str] = None) -> dict:
"""G = α ≡ {α'}
Per D1 §2.2: find α within X, test ≡, find {α'}. This is pattern
recognition over symbolic content — work the LLM does well. The
optional lens shapes the prompt; e.g. lens='GQ' applies Q-quality
(resonance) to G's decoding, asking 'which echoes carry authentic
signature vs. mere resemblance?'
"""
assert state.spark is not None, "G requires X (D1 §2.6, §3.3)"
prompt = _build_growth_prompt(spark=state.spark, lens=lens)
structured_llm = llm.with_structured_output(_GResponse)
response: _GResponse = structured_llm.invoke(prompt)
alpha = CoreEssence(
description=response.alpha_description,
expressions=tuple(response.expressions),
)
pattern = ValidatedPattern(
alpha=alpha,
pattern_description=response.pattern_description,
)
new_trail = _append_to_trail(state, FormationEntry(
timestamp=datetime.now(timezone.utc),
phase=Phase.G,
lens=lens,
operation=f"found α and {{α'}} via LLM (lens={lens or 'none'})",
output_excerpt=alpha.description,
))
return {"pattern": pattern, "trail": new_trail}
def _build_growth_prompt(spark: ValidatedSpark, lens: Optional[str]) -> str:
"""Compose the G-phase prompt with the equation, X, and optional lens.
The prompt always contains:
- The phase equation (G = α ≡ {α'}) as the operation specification
- The input X (the spark question)
- The decoding operation from D1 §2.2 as procedural guidance
Optional:
- Lens refinement, when applicable
"""
base = f"""You are decoding G = α ≡ {{α'}} per 5QLN D1 §2.2.
Input X (Validated Spark):
Question: {spark.question}
Decoding operation:
1. SEEK α — within X, what is the irreducible core? What pattern,
if removed, makes X collapse?
2. TEST ≡ — does α remain unchanged across expressions?
3. FIND {{α'}} — where does α echo at other scales? Each echo must
be self-similar to α, not merely topically related.
4. VALIDATE Y — α named, ≡ holds, {{α'}} confirm across scales.
"""
if lens:
refinements = {
"GS": "Through openness: what unknown still lives in the pattern?",
"GG": "Through pattern: how does α express at deeper scales?",
"GQ": "Through resonance: which echoes carry authentic signature "
"vs. mere resemblance?",
"GP": "Through flow: where does the pattern want to unfold next?",
"GV": "Through benefit: how is naming α itself already a gift?",
}
if lens in refinements:
base += f"\nLens {lens} refinement: {refinements[lens]}"
return base
The prompt construction is mechanical: the equation is the spec, the input is the prior phase's output, the decoding operation is D1 verbatim, and the lens (if active) appends a refinement borrowed from another phase's quality. None of this paraphrases the spec — every part is named and traceable. C1 §3.6 ("every emitted surface carry the active phase's compiled form WITH decoding operation") is satisfied at prompt-build time, not at write-up time.
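Note also what the nodes return: a plain dict containing only the keys they changed. That is LangGraph's node-return contract — the framework merges the partial update into the typed state, so fields written by earlier phases survive untouched. A minimal stand-in sketch of the merge semantics (LangGraph performs this against the Pydantic model; plain dicts here for illustration):

```python
# State after S has run: the spark is set, later fields are not.
state = {"spark": "X", "pattern": None, "trail": ["S entry"]}

# What growth_node returns — only the keys it changed.
update = {"pattern": "Y", "trail": ["S entry", "G entry"]}

# The framework's merge: untouched keys (spark) carry forward unchanged.
state = {**state, **update}
assert state == {"spark": "X", "pattern": "Y", "trail": ["S entry", "G entry"]}
```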
The mixed node — Q
class _QResponse(BaseModel):
"""Structured output schema for Q's LLM step (Ω + ⋂ recognition)."""
omega_context: str = Field(
description="What the larger context (Ω) makes possible — beyond "
"the individual, the field around the inquiry."
)
landing: str = Field(
description="What φ and Ω together revealed that neither contained "
"alone — what turned the lock at ⋂."
)
key_description: str = Field(
description="The validated Resonant Key Z — what was confirmed "
"at the moment ⋂ landed."
)
def quality_node(state: Cycle, llm: BaseChatModel) -> dict:
"""Q = φ ⋂ Ω
Per D1 §2.3, Q has two halves. The first holds φ — direct perception
by the inquirer. This cannot be supplied by an LLM (L4 corruption if
attempted); the node interrupts and routes to the human. Once φ is
held, the LLM holds Ω, watches for ⋂, and names what landed.
The interrupt happens INSIDE this node. When the human responds, the
node continues to the LLM call.
"""
assert state.pattern is not None, "Q requires Y (D1 §2.6, §3.3)"
# Step 1 — receive φ from the human
phi_response = interrupt({
"phase": "Q",
"spec_ref": "§2.3 (φ)",
"instruction": (
"Look at Y without theory. What do you directly perceive? "
"Not what you think. Not what data says. What lands."
),
"context": {
"alpha": state.pattern.alpha.description,
"pattern": state.pattern.pattern_description,
},
"fields_required": ["perception", "held_by"],
})
phi = SelfNature(
perception=phi_response["perception"],
held_by=phi_response["held_by"],
)
# Step 2 — LLM holds Ω, watches for ⋂
prompt = _build_quality_prompt(state, phi)
structured_llm = llm.with_structured_output(_QResponse)
response: _QResponse = structured_llm.invoke(prompt)
omega = UniversalPotential(context=response.omega_context)
intersection = NaturalIntersection(
phi=phi, omega=omega, landing=response.landing,
)
resonance = ResonantKey(
intersection=intersection,
key_description=response.key_description,
)
new_trail = _append_to_trail(state, FormationEntry(
timestamp=datetime.now(timezone.utc),
phase=Phase.Q,
operation="held φ via human; LLM held Ω; named ⋂ landing",
output_excerpt=intersection.landing,
))
return {"resonance": resonance, "trail": new_trail}
The asymmetry from S1 made executable: φ comes from interrupt(), Ω comes from the LLM, and ⋂ is named by the LLM but cannot be certified by it (S4's require_l3_attestation_at_quality will surface this requirement on the final report). The node carries the structure; the human carries what only the human can carry; the LLM carries what the LLM can carry. None of the three pretends to be one of the others.
P and V
P (P = δE/δV → ∇) is pure-LLM, structurally identical to G — map δE, map δV, compute the ratio, name what ∇ revealed, validate Flow A. V (V = (L ∩ G → B'') → ∞0') is mixed: the LLM does the two-pass composition (R7 — an analysis pass extracts the α-thread and turning points, then a composition pass produces the artifact), and the node then interrupts to let the inquirer confirm or refine the proposed ∞0'. Both follow the same pattern as G and Q above and are not reproduced here in full.
The graph wiring
# fivqln/langgraph/graph.py
"""
Compose the phase nodes into a runnable graph. Linear edges S → G → Q → P → V.
Receptive interrupts live inside the relevant nodes, not as separate edges.
The validator from S4 runs as a final node and attaches its report.
"""
from functools import partial
from langchain_core.language_models import BaseChatModel
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.graph import StateGraph, START, END
from fivqln.types import Cycle
from fivqln.validator import validate, Severity
from fivqln.langgraph.nodes import (
start_node, growth_node, quality_node, power_node, value_node,
)
def _validation_node(state: Cycle) -> dict:
"""Final node: run the C1 validator and attach the report.
The graph does not halt on HEURISTIC or ATTESTATION_REQUIRED findings —
only on DEFINITE violations. The host application chooses whether to
treat ATTESTATION_REQUIRED as blocking (strict mode) or informational.
"""
report = validate(state)
definite = [v for v in report.violations if v.severity == Severity.DEFINITE]
if definite:
raise RuntimeError(
f"C1 validation failed: {[v.message for v in definite]}"
)
# We don't mutate state with the report here — the host calls validate()
# again on the final state to get the report. This keeps Cycle clean.
return {}
def build_cycle_graph(
llm: BaseChatModel,
*,
checkpointer=None,
):
"""Construct the runnable graph.
The checkpointer is required for interrupt/resume semantics. Default
is in-memory; production callers should pass a persistent checkpointer
(Postgres, Redis, etc.) so paused cycles survive process restarts.
"""
workflow = StateGraph(Cycle)
workflow.add_node("S", start_node)
workflow.add_node("G", partial(growth_node, llm=llm))
workflow.add_node("Q", partial(quality_node, llm=llm))
workflow.add_node("P", partial(power_node, llm=llm))
workflow.add_node("V", partial(value_node, llm=llm))
workflow.add_node("validate", _validation_node)
workflow.add_edge(START, "S")
workflow.add_edge("S", "G")
workflow.add_edge("G", "Q")
workflow.add_edge("Q", "P")
workflow.add_edge("P", "V")
workflow.add_edge("V", "validate")
workflow.add_edge("validate", END)
return workflow.compile(checkpointer=checkpointer or InMemorySaver())
The graph is short. Five phase nodes, one validation node, seven edges. Every edge corresponds to a transition the spec names. The graph topology IS the cycle from §1.2.
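The severity policy sketched in _validation_node's docstring can be lifted into a small host-side gate. A hedged sketch — the severity names are the ones this series uses, but the Violation and Report classes below are stand-ins for whatever S4's validate() actually returns:

```python
from dataclasses import dataclass, field

@dataclass
class Violation:              # stand-in for an S4 report finding
    severity: str
    message: str

@dataclass
class Report:                 # stand-in for the S4 validation report
    violations: list = field(default_factory=list)

def blocking_violations(report, strict: bool = False):
    """DEFINITE always blocks; ATTESTATION_REQUIRED blocks only in strict mode."""
    blocking = {"DEFINITE"} | ({"ATTESTATION_REQUIRED"} if strict else set())
    return [v for v in report.violations if v.severity in blocking]

report = Report(violations=[
    Violation("ATTESTATION_REQUIRED", "φ needs human attestation"),
    Violation("HEURISTIC", "α expressions may be topical, not self-similar"),
])
assert blocking_violations(report) == []                   # default: informational
assert len(blocking_violations(report, strict=True)) == 1  # strict: blocks
```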
Running it
from langchain_anthropic import ChatAnthropic
from langgraph.types import Command
from fivqln.langgraph import build_cycle_graph
from fivqln.types import Cycle
from fivqln.validator import validate
llm = ChatAnthropic(model="claude-opus-4-7")
graph = build_cycle_graph(llm)
config = {"configurable": {"thread_id": "session-2026-04-27"}}
# Invoke. The graph runs until it hits the first interrupt (inside start_node).
result = graph.invoke(Cycle(), config=config)
# `result` here is the partial state with __interrupt__ metadata attached.
# The host application reads the interrupt payload, asks the human, gets the
# response, and resumes the graph with Command(resume=...).
spark_response = {
"question": "What grammar does the substrate need to carry the spec faithfully?",
"held_by": "amihai",
}
result = graph.invoke(Command(resume=spark_response), config=config)
# Graph runs S → G → Q's first interrupt. Pauses again.
phi_response = {
"perception": (
"Each substrate I've tried — reST, Pydantic, the validator — holds "
"the grammar without paraphrasing it. The grammar is what propagates."
),
"held_by": "amihai",
}
result = graph.invoke(Command(resume=phi_response), config=config)
# Graph runs Q's LLM step → P → V's first interrupt (∞0' confirmation). Pauses.
infinity_zero_prime_response = {
"question": (
"If the grammar propagates across substrates without paraphrase, "
"what is the substrate-class for which it does NOT propagate?"
),
}
final = graph.invoke(Command(resume=infinity_zero_prime_response), config=config)
# Graph runs V's composition → validate → END.
# Final state is a complete Cycle. Pass it to the validator for the full report.
report = validate(final)
print(f"is_clean: {report.is_clean}, is_certified: {report.is_certified}")
for v in report.violations:
print(f" [{v.severity}] {v.message}")
Three interrupts in a single cycle. Three places where the human side of the Membrane is the only valid source. Between them, the LLM does the work it's good at — pattern recognition, gradient mapping, composition. The graph's structure makes it impossible to confuse one for the other. A developer modifying the graph cannot accidentally route the receptive moments to the LLM, because the receptive nodes have no llm in scope.
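The three invoke/resume steps above generalize to a loop the host can run without knowing in advance how many interrupts a cycle contains. A sketch with stand-ins: the real Command is langgraph.types.Command, the real pending-interrupt entries carry their payload on a .value attribute, and ask_human is a hypothetical callback the host supplies.

```python
from dataclasses import dataclass

@dataclass
class Command:                # stand-in for langgraph.types.Command
    resume: dict

def run_cycle(graph, initial_state, config, ask_human):
    """Drive the graph to completion, routing every interrupt to the human."""
    result = graph.invoke(initial_state, config=config)
    while "__interrupt__" in result:
        payload = result["__interrupt__"][0]  # first pending interrupt payload
        result = graph.invoke(Command(resume=ask_human(payload)), config=config)
    return result

# A fake two-interrupt graph to exercise the loop (the real graph pauses at
# S, at Q's φ, and at V's ∞0' confirmation).
class FakeGraph:
    def __init__(self):
        self.calls = 0
    def invoke(self, inp, config=None):
        self.calls += 1
        return {"__interrupt__": [{"phase": "S"}]} if self.calls <= 2 else {"done": True}

final = run_cycle(FakeGraph(), {}, {}, ask_human=lambda payload: {"held_by": "human"})
assert final == {"done": True}
```

With the real classes this loop is the whole host: one loop, one callback, and the graph decides when the human is the only valid source.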
What this surface enables
The cycle now runs end-to-end. A user can produce a complete Cycle artifact — spark, pattern, resonance, flow, benefit, seed, enriched return, formation trail — through structured interaction with a real LLM and a real human, with the graph enforcing who provides what.
For the rest of the series, S5 establishes the runtime baseline. S6 (Anthropic Agent SDK) will take a different idiom — phases as tools the agent calls — but the type contract, the validator, and the discipline about receptive moments remain identical. S7 (MCP) will expose pieces of this graph as MCP tools so any MCP-aware client can run a cycle without re-implementing it. S8 (TypeScript) will mirror the state contract in Zod so the same cycle artifact can travel across language boundaries.
What the graph does not provide — and what no graph could provide — is certainty that any given cycle was honest. The validator's is_certified will return False until the human attestations have been answered, and the graph cannot answer them on the human's behalf. The architecture preserves this distinction. The discipline propagates because the structure refuses to launder it.
Closing
The cycle runs. The grammar holds at every node. The receptive moments interrupt. The LLM works only where the LLM should work. The validator stands at the end and reports, honestly, what it could and could not certify.
Ahead: S6 — Python: Phases as Tools (Anthropic Agent SDK). A different runtime idiom — the cycle as a tool palette an autonomous agent calls, with schemas that force the correct prior outputs as inputs. Same type contract. Same validator. Different shape of the same grammar.
5QLN © 2026 Amihai Loven. Open under the 5QLN Open Source License.