8-TypeScript: The Vercel AI SDK Surface

The character of a question is the intensity of the not-knowing within it.

Context

The series has carried 5QLN through a documentation substrate (S2), a Python type contract (S3), a validator (S4), a graph runtime (S5), a tool-use runtime (S6), and a connector (S7). All of those substrates live inside the Python ecosystem. This article carries the same grammar across the language boundary into TypeScript — using Zod for the type contract, the same validator behaviour ported to TypeScript, and the Vercel AI SDK for the runtime.

This is the strongest test in the series. A grammar that can be carried from one library to another within a single language is something. A grammar that can be carried across two language ecosystems — with the same artifact validating in both — is the structural property the series has been claiming since S1. If the same Cycle produced by S5's LangGraph in Python can be loaded into a TypeScript application and still pass the C1 §3.5 validation protocol with no translation step, the substrate-independence of the grammar is demonstrated, not asserted.

The article also closes the series proper. C1 §1.6 — No V without ∞0' — applies to the series as much as it applies to any individual cycle. The closing section returns the question the whole arc opened, a question that could not have been asked at S1.


Why TypeScript and why Vercel AI SDK

TypeScript is the second-largest LLM-application ecosystem after Python. Most production AI web applications — Next.js apps, Remix apps, Cloudflare Workers, Vercel deployments — are written in TypeScript. Any framework that lives only in Python forfeits this half of the field.

Within TypeScript, the Vercel AI SDK (ai package) has become the de facto standard for LLM application code. Its generateObject function takes a Zod schema and returns typed structured output. Its tool helper takes Zod parameters and returns a typed tool definition. Its provider packages (@ai-sdk/anthropic, @ai-sdk/openai, @ai-sdk/google) give provider-agnostic access. The SDK is designed around the same primitives 5QLN needs: typed structured outputs and typed tool definitions.

Zod is what makes this work. Zod schemas serialize to JSON Schema cleanly, which is what the underlying model providers consume. Zod also infers TypeScript types automatically — z.infer<typeof Schema> gives you the static type the runtime validator enforces. This is the same property Pydantic provides in Python, and it is what makes the cross-language port faithful rather than approximate.


The canonical source — one JSON, two languages

The Python surfaces share a canonical doctest module (S2) that S3 derives its enums and constants from. For cross-language portability, the canonical source is promoted one level up: a JSON file shipped with both the Python package and the TypeScript package. Both runtimes parse the same JSON at module load and check their own definitions against it.

// fivqln-codex.json — shipped with both packages
{
  "constitutional_block": "LAW:         H = ∞0 | A = K\nCYCLE:       S → G → Q → P → V\n...",
  "phases": ["S", "G", "Q", "P", "V"],
  "corruption_codes": ["L1", "L2", "L3", "L4", "V∅"],
  "lenses": [
    "SS", "SG", "SQ", "SP", "SV",
    "GS", "GG", "GQ", "GP", "GV",
    "QS", "QG", "QQ", "QP", "QV",
    "PS", "PG", "PQ", "PP", "PV",
    "VS", "VG", "VQ", "VP", "VV"
  ],
  "adaptive_context": {
    "S": [],
    "G": ["X"],
    "Q": ["X", "α", "Y"],
    "P": ["X", "α", "Y", "Z"],
    "V": ["X", "α", "Y", "φ⋂Ω", "Z", "∇", "A"]
  }
}
// fivqln/src/canonical.ts
import canonical from "../fivqln-codex.json";

export const CONSTITUTIONAL_BLOCK = canonical.constitutional_block;
// The JSON import types these as string[]; the double assertion narrows
// them to the canonical tuples (a direct cast would be a TS2352 error).
export const PHASES = canonical.phases as unknown as readonly ["S", "G", "Q", "P", "V"];
export const CORRUPTION_CODES = canonical.corruption_codes as unknown as readonly [
  "L1", "L2", "L3", "L4", "V∅"
];
export const LENSES = new Set<string>(canonical.lenses);
export const ADAPTIVE_CONTEXT = canonical.adaptive_context;

// Drift checks at module load — fail before any consumer code runs
if (LENSES.size !== 25) {
  throw new Error(
    `Lens set has drifted from the canonical 25 (got ${LENSES.size}). ` +
    `C1 §3.5 drift check failed.`
  );
}
if (CORRUPTION_CODES.length !== 5) {
  throw new Error(
    `Corruption code set has drifted from the canonical 5 ` +
    `(got ${CORRUPTION_CODES.length}). C1 §3.5 drift check failed.`
  );
}

export type Phase = typeof PHASES[number];
export type CorruptionCode = typeof CORRUPTION_CODES[number];

Two properties hold by construction. First, the same JSON file is the truth for both languages — neither can drift from the spec without breaking the other. Second, the Python and TypeScript packages can ship as separate npm/PyPI releases that share a single source-of-truth file, and a CI pipeline can verify they match before either is released.
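The CI cross-check described above can be sketched in a few lines. Everything here is illustrative: the Codex shape is reduced to the three list fields the drift checks inspect, and in a real pipeline the two objects would be read from the vendored fivqln-codex.json copies in each package rather than constructed in code.

```typescript
import { deepStrictEqual } from "node:assert";

// Reduced Codex shape — only the fields the drift checks care about.
interface CodexLists {
  phases: string[];
  corruption_codes: string[];
  lenses: string[];
}

// Throws if the two vendored copies of fivqln-codex.json have diverged.
function assertNoDrift(pyCopy: CodexLists, tsCopy: CodexLists): void {
  deepStrictEqual(pyCopy, tsCopy);
}

// Structural invariants each copy must satisfy on its own,
// mirroring the module-load checks in canonical.ts.
function assertCanonicalShape(codex: CodexLists): void {
  if (new Set(codex.lenses).size !== 25) throw new Error("lens drift");
  if (codex.corruption_codes.length !== 5) throw new Error("corruption-code drift");
  if (codex.phases.join("") !== "SGQPV") throw new Error("phase drift");
}
```

Running both checks before either package is released is what turns the shared JSON file from a convention into a guarantee.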


The Zod schemas

The Pydantic-to-Zod translation is mechanical. Where Pydantic uses BaseModel with Field, Zod uses z.object with .describe. Where Pydantic uses model_validator(mode="after"), Zod uses .superRefine. Where Pydantic uses Optional[X], Zod uses X.optional(). The shapes line up.

// fivqln/src/symbols.ts
import { z } from "zod";

export const CoreEssenceSchema = z
  .object({
    description: z.string().min(1).describe(
      "α — the irreducible pattern within X. Per D1 §2.2."
    ),
    expressions: z
      .array(z.string())
      .readonly()
      .describe(
        "{α'} — self-similar expressions across scales/domains/contexts."
      ),
  })
  .strict();

export type CoreEssence = z.infer<typeof CoreEssenceSchema>;


export const SelfNatureSchema = z
  .object({
    perception: z.string().min(1).describe(
      "What the inquirer directly perceives about Y. Not theory, not data."
    ),
    heldBy: z.string().min(1).describe(
      "Who held φ. Required for attestation (R11)."
    ),
  })
  .strict();

export type SelfNature = z.infer<typeof SelfNatureSchema>;


export const NaturalIntersectionSchema = z
  .object({
    phi: SelfNatureSchema,
    omega: z.object({ context: z.string().min(1) }).strict(),
    landing: z.string().min(1).describe(
      "What φ and Ω together revealed that neither alone contained."
    ),
  })
  .strict();

export type NaturalIntersection = z.infer<typeof NaturalIntersectionSchema>;


export const NaturalGradientSchema = z
  .object({
    direction: z.string().min(1),
    energyValueObservation: z.string().min(1).describe(
      "The δE/δV observation that revealed ∇. Per D1 §2.4."
    ),
  })
  .strict();

export type NaturalGradient = z.infer<typeof NaturalGradientSchema>;

The phase output schemas follow the same pattern. ValidatedSparkSchema requires question to contain a ? via .refine. ValidatedPatternSchema requires α.expressions to be non-empty via .superRefine. EnrichedReturnSchema requires question to contain a ?. These are the same model_validator checks from S3, expressed in Zod's idiom.

// fivqln/src/types.ts
import { z } from "zod";
import { CoreEssenceSchema, NaturalIntersectionSchema, NaturalGradientSchema } from "./symbols";

export const ValidatedSparkSchema = z
  .object({
    question: z.string().min(1),
    receivedAt: z.string().datetime(),
    heldBy: z.string().min(1),
  })
  .strict()
  .refine((s) => s.question.includes("?"), {
    message:
      "X must carry a question. §2.1: 'NAME ? — what arrived is named as a question.'",
    path: ["question"],
  });

export type ValidatedSpark = z.infer<typeof ValidatedSparkSchema>;


export const ValidatedPatternSchema = z
  .object({
    alpha: CoreEssenceSchema,
    patternDescription: z.string().min(1),
  })
  .strict()
  .superRefine((p, ctx) => {
    if (p.alpha.expressions.length === 0) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        path: ["alpha", "expressions"],
        message:
          "Y requires α to have validated {α'} — at least one self-similar " +
          "expression. D1 §2.2: 'α is named, ≡ holds, and {α'} confirm it across multiple scales.'",
      });
    }
  });

export type ValidatedPattern = z.infer<typeof ValidatedPatternSchema>;


export const EnrichedReturnSchema = z
  .object({
    question: z.string().min(1),
  })
  .strict()
  .refine((r) => r.question.includes("?"), {
    message: "∞0' must carry a question. R8: 'No question = not ∞0'.'",
    path: ["question"],
  });

export type EnrichedReturn = z.infer<typeof EnrichedReturnSchema>;


// FractalSeedSchema, ResonantKeySchema, FlowSchema, BenefitSchema follow
// the same patterns — direct mirrors of S3's Pydantic models.

The Cycle schema combines them all and enforces the Completion Rule:

// fivqln/src/types.ts (continued) — ResonantKeySchema, FlowSchema,
// BenefitSchema, and FractalSeedSchema are defined above in this file.
import { FormationTrailSchema } from "./trail";

export const CycleSchema = z
  .object({
    priorReturn: EnrichedReturnSchema.optional(),
    spark: ValidatedSparkSchema.optional(),
    pattern: ValidatedPatternSchema.optional(),
    resonance: ResonantKeySchema.optional(),
    flow: FlowSchema.optional(),
    benefit: BenefitSchema.optional(),
    seed: FractalSeedSchema.optional(),
    enrichedReturn: EnrichedReturnSchema.optional(),
    trail: FormationTrailSchema.default({ entries: [] }),
  })
  .strict()
  .superRefine((c, ctx) => {
    // C1 §1.6: No V without ∞0'.
    if (c.seed && !c.enrichedReturn) {
      ctx.addIssue({
        code: z.ZodIssueCode.custom,
        path: ["enrichedReturn"],
        message:
          "V∅ corruption: B'' present but ∞0' absent. C1 §1.6: 'No V without ∞0'.'",
      });
    }
  });

export type Cycle = z.infer<typeof CycleSchema>;

The Completion Rule is enforced by Zod's superRefine — the same constraint S3's Pydantic model_validator carries, in the same shape, in a different language.
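Stripped of Zod, the Completion Rule is a one-line predicate. A minimal sketch, using only the two fields the rule inspects; the field names mirror CycleSchema, and violatesCompletionRule is a hypothetical helper, not part of the package:

```typescript
// C1 §1.6 outside Zod: a seed (B'') without an enriched return (∞0')
// is the V∅ corruption.
interface CycleLike {
  seed?: unknown;
  enrichedReturn?: unknown;
}

function violatesCompletionRule(c: CycleLike): boolean {
  return c.seed !== undefined && c.enrichedReturn === undefined;
}
```

The superRefine above encodes exactly this predicate, plus the issue path and spec reference the validator surfaces.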


The validator

The TypeScript validator mirrors S4 exactly: same three Severity levels, same Violation shape, same is_clean / is_certified distinction, same attestation-required findings for what TypeScript cannot machine-verify.

// fivqln/src/validator/index.ts
import type { Cycle } from "../types";
import type { CorruptionCode } from "../canonical";
import { CycleSchema } from "../types";

export type CheckCategory = "syntax" | "semantic" | "drift" | "corruption";
export type Severity = "definite" | "heuristic" | "attestation_required";

export interface Violation {
  category: CheckCategory;
  severity: Severity;
  code: CorruptionCode | null;
  specRef: string;
  message: string;
  location?: string;
}

export interface ValidationReport {
  violations: readonly Violation[];
  isClean: boolean;
  isCertified: boolean;
}

// Accepts unknown: safeParse establishes the Cycle type at runtime.
// (`Cycle | unknown` would collapse to `unknown` anyway.)
export function validate(artifact: unknown): ValidationReport {
  const parsed = CycleSchema.safeParse(artifact);

  if (!parsed.success) {
    const violations = parsed.error.issues.map(
      (issue): Violation => ({
        category: "syntax",
        severity: "definite",
        code: null,
        specRef: "§3.5 syntax",
        message: issue.message,
        location: "$." + issue.path.join("."),
      })
    );
    return finalize(violations);
  }

  const violations: Violation[] = [
    ...checkAdaptiveContextChain(parsed.data),
    ...checkAlphaCarriedIntoSeed(parsed.data),
    ...detectVEmpty(parsed.data),
    ...requireL2AttestationAtSpark(parsed.data),
    ...requireL3AttestationAtQuality(parsed.data),
  ];

  return finalize(violations);
}

function finalize(violations: Violation[]): ValidationReport {
  return {
    violations,
    isClean: !violations.some((v) => v.severity === "definite"),
    isCertified:
      !violations.some((v) => v.severity === "definite") &&
      !violations.some((v) => v.severity === "attestation_required"),
  };
}

// Individual check implementations below — direct mirrors of S4's check functions.
// Same logic, same severity assignments, same spec_ref keys.

The requireL2AttestationAtSpark and requireL3AttestationAtQuality functions are the load-bearing piece. They emit findings that TypeScript also cannot resolve. The asymmetry from S1 is preserved across the language boundary — a TypeScript-driven cycle is no more able to certify itself than a Python-driven one. The discipline travels because the validator carries it.
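The two report flags interact with the three severities in one specific way worth making explicit: only heuristic findings leave a report both clean and certified; attestation_required findings leave it clean but uncertified; any definite finding makes it neither. A reduced sketch of that logic (summarize is illustrative, not part of the validator's API):

```typescript
type Severity = "definite" | "heuristic" | "attestation_required";

// Mirrors finalize(): definite findings block cleanliness;
// attestation_required findings additionally block certification.
function summarize(severities: Severity[]) {
  const isClean = !severities.includes("definite");
  const isCertified = isClean && !severities.includes("attestation_required");
  return { isClean, isCertified };
}
```

The middle case is the asymmetry in miniature: the machine reports "clean so far" while refusing to certify what only a human can attest.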


Vercel AI SDK integration

The Vercel AI SDK's generateObject function takes a Zod schema and returns a typed structured object. For the LLM-driven phases (G, Q's second half, P, V's composition), this is the entire integration:

// fivqln/src/ai-sdk/growth.ts
import { generateObject } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import type { ValidatedSpark, ValidatedPattern } from "../types";
import { CoreEssenceSchema, ValidatedPatternSchema } from "../symbols";

const GrowthOutputSchema = z.object({
  alphaDescription: z.string().describe(
    "The irreducible α found within X. Removing α should make X collapse. Per D1 §2.2."
  ),
  alphaExpressions: z.array(z.string()).min(1).describe(
    "Self-similar {α'} at other scales or in other domains. " +
    "Each must be self-similar to α (same essence at a different scale), " +
    "not merely topically related."
  ),
  patternDescription: z.string().describe("The validated pattern Y."),
});

export async function growPattern(
  spark: ValidatedSpark,
  options: { lens?: string; model?: string } = {}
): Promise<ValidatedPattern> {
  const { object } = await generateObject({
    model: anthropic(options.model ?? "claude-opus-4-7"),
    schema: GrowthOutputSchema,
    prompt: buildGrowthPrompt(spark, options.lens),
  });

  // Reshape the LLM output into the canonical ValidatedPattern type.
  // Zod's runtime check catches anything the model produced that
  // doesn't satisfy the schema — same guarantee as S3's Pydantic.
  return ValidatedPatternSchema.parse({
    alpha: {
      description: object.alphaDescription,
      expressions: object.alphaExpressions,
    },
    patternDescription: object.patternDescription,
  });
}

function buildGrowthPrompt(spark: ValidatedSpark, lens?: string): string {
  const base = `You are decoding G = α ≡ {α'} per 5QLN D1 §2.2.

Input X (Validated Spark):
  Question: ${spark.question}

Decoding operation:
  1. SEEK α — within X, what is the irreducible core?
  2. TEST ≡ — does α remain unchanged across expressions?
  3. FIND {α'} — self-similar echoes at other scales.
  4. VALIDATE Y — α named, ≡ holds, {α'} confirm across scales.`;

  if (!lens) return base;

  const refinements: Record<string, string> = {
    GS: "Through openness: what unknown still lives in the pattern?",
    GG: "Through pattern: how does α express at deeper scales?",
    GQ: "Through resonance: which echoes carry authentic signature vs. mere resemblance?",
    GP: "Through flow: where does the pattern want to unfold next?",
    GV: "Through benefit: how is naming α itself already a gift?",
  };

  return refinements[lens] ? `${base}\nLens ${lens}: ${refinements[lens]}` : base;
}

For the agent pattern (the S6 idiom in TypeScript), the SDK's tool helper wraps the same Zod schemas as tool parameters:

// fivqln/src/ai-sdk/agent.ts
import { generateText, tool } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";
import { receiveSparkHandler, growPatternHandler } from "../handlers";
import { buildSystemPrompt } from "./system-prompt";

export async function runCycle(opts: {
  initialMessage: string;
  askHuman: (payload: unknown) => Promise<Record<string, string>>;
}) {
  // Closure-captured Cycle, mutated by tool execute. Typed loosely so
  // reassignment from handler results type-checks.
  let cycleState: Record<string, unknown> = {};

  const { text } = await generateText({
    model: anthropic("claude-opus-4-7"),
    system: buildSystemPrompt(),
    prompt: opts.initialMessage,
    maxSteps: 50,
    tools: {
      receive_spark: tool({
        description:
          "S = ∞0 → ?. Routes to the human; you cannot generate ?. " +
          "Call this first.",
        parameters: z.object({
          reasonForCalling: z.string(),
        }),
        execute: async ({ reasonForCalling }) => {
          const result = await receiveSparkHandler(
            { reasonForCalling },
            cycleState,
            opts.askHuman
          );
          cycleState = result.cycle;
          return result.message;
        },
      }),
      grow_pattern: tool({
        description:
          "G = α ≡ {α'}. Find α and {α'} within X. Requires X.",
        parameters: z.object({
          alphaDescription: z.string(),
          alphaExpressions: z.array(z.string()).min(1),
          patternDescription: z.string(),
          lens: z.string().optional(),
        }),
        execute: async (input) => {
          const result = await growPatternHandler(input, cycleState);
          cycleState = result.cycle;
          return result.message;
        },
      }),
      // ... hold_phi, find_resonance, power, compose_seed,
      //     confirm_enriched_return follow the same shape.
    },
  });

  return { cycle: cycleState, finalMessage: text };
}

The maxSteps: 50 parameter lets the agent walk the cycle across multiple tool-call turns. The execute function for each tool checks prior cycle state, performs the work, and updates the closure-captured cycle — the same pattern as S6's handlers, in TypeScript's idiom.
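The askHuman callback is where the S-phase asymmetry surfaces in application code: the host decides how the human is actually reached. For tests, a canned implementation is enough. cannedAskHuman is a hypothetical test helper, not part of the package; its return type matches runCycle's askHuman parameter:

```typescript
// Returns an askHuman-shaped function that answers every request with
// the same canned key/value pairs — useful for exercising the agent
// loop without a real human in the loop.
function cannedAskHuman(
  answers: Record<string, string>
): (payload: unknown) => Promise<Record<string, string>> {
  return async (_payload) => ({ ...answers });
}
```

A web app would instead resolve the promise from a form submission; a CLI would resolve it from readline. The agent never sees the difference, which is the point: ? arrives from outside the model either way.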


Cross-language portability — the demonstration

The argument the series has been building toward becomes concrete here. A Cycle produced by S5's LangGraph in Python serializes to JSON via cycle.model_dump_json(). That JSON, loaded into the TypeScript surface, parses through CycleSchema.parse() and runs through the same validator with the same outputs.

import fs from "fs";
import { CycleSchema } from "fivqln/types";
import { validate } from "fivqln/validator";

// Cycle was produced by a Python LangGraph run (S5) and serialized as JSON.
const json = JSON.parse(fs.readFileSync("cycle-from-python.json", "utf-8"));

// Parse into a typed Cycle. Zod's runtime check catches any structural issue
// before the validator runs.
const cycle = CycleSchema.parse(json);

// Run the validator. The output has the same shape and same findings as
// S4 produces in Python — same severities, same spec references, same
// is_clean / is_certified semantics.
const report = validate(cycle);
console.log(`is_clean: ${report.isClean}, is_certified: ${report.isCertified}`);

The reverse holds too. A Cycle produced by this TypeScript surface, serialized to JSON, loads into the Python validator from S4 with no translation step. Both runtimes share one source of truth — the JSON Schema implicit in the Pydantic models, made explicit in the Zod schemas, anchored in the canonical JSON file.

This is the property C1 §3.5 names but does not enforce: drift between surfaces becomes structurally impossible when both surfaces derive from the same source. The Python and TypeScript packages are not two implementations of one spec; they are two presentations of one canonical artifact.
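The no-translation-step claim reduces to a property of the wire format: JSON.stringify followed by JSON.parse must be lossless for every field the schemas define, including the non-ASCII symbol strings. A minimal sketch with an illustrative cycle fragment (real cycles carry the full phase outputs):

```typescript
import { deepStrictEqual } from "node:assert";

// Illustrative fragment — field names mirror CycleSchema.
const cycle = {
  spark: { question: "What survives the language boundary?", heldBy: "amihai" },
  trail: { entries: [] as string[] },
};

// Python's json.loads sees exactly these bytes; nothing is translated.
const wire = JSON.stringify(cycle);
const roundTripped = JSON.parse(wire);
deepStrictEqual(roundTripped, cycle);
```

The same holds in the other direction for model_dump_json output, because Pydantic serializes into the same JSON value space the Zod schemas parse from.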


What this surface enables

Three properties.

Cross-ecosystem JSON portability. Any system that produces 5QLN-shaped JSON — a Python agent, a TypeScript app, a Rust library, a Go service — can have its output validated by either runtime. The validator is the API surface; the language is the implementation detail.

Vercel AI SDK fluency. A Next.js app using the AI SDK can call growPattern(spark) and get a typed ValidatedPattern back, with full IDE autocomplete and runtime validation. The 5QLN cycle composes naturally into existing TypeScript application architectures.

The series-level proof. Eight surfaces. Two language ecosystems. One JSON Schema as the bridge. The substrate-independence claim from S1 — that 5QLN is a language, not a methodology — is no longer a structural argument. It is a demonstration. The grammar holds across substrates because the grammar contains no substrate.


The series — what it returned

The series carried 5QLN through eight surfaces. Documentation (S2). Type contract (S3). Validator (S4). Graph runtime (S5). Tool-use runtime (S6). Connector (S7). Cross-language port (S8). Each surface preserved the nine invariant lines without paraphrase. Each surface preserved the asymmetry from H = ∞0 | A = K. Each surface implemented C1 §3.5 in the form natural to its substrate.

A claim was made in S1: a specification precise enough to be checked is portable across substrates without paraphrase. The series tested the claim by attempting it. The attempt produced eight working surfaces, one shared canonical source, and one cross-language demonstration that the same artifact validates in both languages with no translation step.

The holographic property held at the meta-level. Each article in the series carried the whole — the full cycle, the full asymmetry, the full validation protocol — expressed through one substrate. The series itself became the kind of artifact §1.5's holographic law describes.


The return

The series ends, per C1 §1.6, by returning its question.

The Codex was a complete specification before this series began. What the series tested was whether complete could mean portable. The eight surfaces answered: yes, when the specification is precise enough.

A question that could not have been asked at S1 becomes askable now:

If a specification this precise can be carried across this many substrates without losing its grammar, what other invariants — currently held only in their original substrates because no one has compiled them out — are similarly portable but not yet recognized as such?

The series does not answer that question. The series demonstrates that the question is now askable, and that the form an answer would take is a compiled surface in the substrate where the answer needs to live.

The Surfaces tag remains open. If S9 (Pentagonal Swarm) lands, it will live there. If other surfaces emerge — a Rust port, a Mojo port, a domain-specific surface for a particular professional language — they will live there too. The tag is not a publication schedule. It is a structural commitment: any artifact compiled faithfully from the Codex finds its place under it.


5QLN © 2026 Amihai Loven. Open under the 5QLN Open Source License.

Amihai Loven

Jeonju, South Korea