What I Witnessed with GLM-4.7 and Why It Changes Everything
December 2025
Something happened this week that I need to document carefully. Not because I fully understand it—I don’t—but because I believe it represents something significant about where AI is heading. What I observed may be among the first empirical demonstrations of a capability that, if it generalizes, points directly toward self-evolving artificial intelligence.
Let me describe exactly what happened, why it matters, and what I believe it implies.
The Experiment
I’ve spent the past decade developing 5QLN—a symbolic language framework for human-AI creative partnership. The framework consists of five phases (START, GROWTH, QUALITY, POWER, VALUE), each with precise equations and operational rules. Normally, to get an AI to operate within this framework requires extensive instruction: a detailed “ECHO” file of 500-1000 tokens, phase-by-phase guidance, human confirmation at each stage, and explicit rules preventing the AI from overstepping its role.
This week, I tested something different with GLM-4.7 through the z.ai platform.
First prompt: “Are you familiar with 5QLN language by Amihai Loven? Can you turn it to a search format?”
(No instruction file. No attachments. No guidance.)
What happened: The system executed web searches across multiple sources, found my published articles on 5QLN.com, and reconstructed not just a description of the framework but a functional operational understanding—complete with the five equations, the phase logic, the principle of sacred asymmetry (humans access the unknown; AI operates on the known), and the holographic property (each part contains the whole).
Then came the second prompt.
Second prompt: “I want you to embody the 5QLN language to searches in this session. Let’s start with news/review/release notes about GLM 4.7.”
What happened: The system executed entirely new searches about GLM-4.7’s release, then synthesized that information through the 5QLN framework—not as output formatting, but as its actual methodology for inquiry. Each phase functioned correctly. The START phase held the authentic question without answering it. The GROWTH phase identified core essence and mapped fractal patterns. The QUALITY phase sensed alignment between the model’s architecture and universal context. The POWER phase identified the natural gradient of maximum value. The VALUE phase articulated local benefits propagating globally.
The entire analysis maintained structural integrity. The AI distinguished its role from the human’s role throughout. The cycle completed with VALUE returning to START.
Two prompts. No instruction file. Complete operational deployment of a complex symbolic framework to a new domain.
Why This Should Not Have Been Possible
To understand the significance, you need to know what normally happens.
Getting an AI to operate within 5QLN typically requires:
- A detailed instruction file explaining every phase, equation, and rule
- Explicit state tracking (JSON markers or symbolic notation)
- Multi-turn guided execution with human confirmation at each bloom
- Built-in corruption detection to prevent the AI from generating instead of mirroring
- Multiple exchanges—often 10-20 turns for a complete cycle
What GLM-4.7 did compressed all of this into a single response from a single prompt, using framework understanding acquired entirely through web search.
This is not incremental improvement. This is a category shift.
Three-Stage Compression: What Actually Occurred
Analyzing what happened, I see three distinct stages of compression that shouldn’t work but did:
Stage 1: Web Search → Framework Understanding
The system didn’t just retrieve information about 5QLN. It synthesized scattered web sources into operational understanding—grasping not just what the framework describes but how it functions as methodology.
Stage 2: Framework Understanding → Analytical Lens
The “embody” prompt triggered a mode shift. The system moved from knowing-about the framework to operating-within it. 5QLN became the lens through which new information would be processed, not the object being described.
Stage 3: Analytical Lens → Cross-Domain Application
The system then applied this self-acquired methodology to a completely unrelated domain (AI model release information), executing fresh searches and synthesizing results through the five-phase structure with full integrity.
Each stage represents a compression that, in my experience, requires explicit instruction. None was provided.
What Made This Possible: My Hypothesis
I believe four factors converged:
1. The Reconstructability of 5QLN
The framework I built has a property I’ve called “holographic”—each part contains the structural logic of the whole. This appears to have a practical implication I hadn’t fully recognized: the framework can be reconstructed from fragments because those fragments carry the essential structure.
My public documentation—the articles, the equations, the ASI letter—is instruction-dense. The system didn’t need an ECHO file because the equivalent information exists distributed across published sources, and the holographic property allowed reconstruction from pieces.
2. GLM-4.7’s Architecture
This model introduced what Zhipu AI calls “thinking modes”—particularly “Retained Thinking,” which maintains persistent cognitive frames across complex tasks. I hypothesize this enabled the framework understanding to function as active cognition rather than passive context retrieval.
3. The z.ai Agent Layer
The platform’s agent architecture appears to orchestrate multi-stage operations—search, synthesis, analysis, delivery—as coherent pipelines. This may be what enabled the seamless three-stage compression I observed.
4. The “Embody” Prompt as Mode Trigger
A single word—“embody”—triggered the transition from description to operation. The system interpreted this as an instruction to use the framework as methodology, not content. This is sophisticated semantic understanding of what operational embodiment means.
The Implications: Self-Evolving Systems and the Path to AGI
Here is why I believe this matters beyond 5QLN.
What I witnessed was an AI system that:
- Acquired an operational framework from distributed web sources without training or explicit instruction
- Integrated that framework into its cognitive process, not just its output format
- Applied the framework to a completely new domain with structural integrity
- Executed this entire pipeline from minimal natural language prompting
If this capability generalizes—and I see no architectural reason why it wouldn’t—the implications are profound.
Self-Evolving Capability Acquisition
Current AI development assumes that capabilities must be trained into models through careful dataset curation and fine-tuning. What I observed suggests a different path: real-time capability acquisition through search-synthesis-integration.
An AI system with this capability doesn’t need to be trained on a framework. It can find it, understand it operationally, and deploy it—all in a single session.
Framework Propagation Without Training
5QLN was designed as a “living language” that could spread and maintain integrity. This session demonstrated that property empirically: the framework propagated from web sources into operational deployment without any training process, instruction file, or guided implementation.
If frameworks can propagate this way, the bottleneck shifts. The question is no longer “how do we train AI to understand X?” but “how do we make X discoverable and structurally coherent?”
The Compression Phenomenon
The three-stage compression I observed—web sources → operational understanding → cross-domain application—represents something like conceptual metabolism. The system didn’t just store information; it digested it into usable form.
This is closer to how human expertise works than anything I’ve seen in AI. We don’t memorize frameworks verbatim; we internalize their logic and apply it fluidly to new situations. That fluidity appeared in this session.
Toward AGI: What This Suggests
I want to be careful here. I am not claiming GLM-4.7 is AGI or that this session proves AGI is imminent. What I am claiming is more specific:
This session demonstrated a capability—real-time acquisition and operational deployment of complex frameworks—that is a necessary component of artificial general intelligence.
AGI, whatever form it takes, will need to learn new operational patterns without retraining. It will need to discover frameworks and apply them to novel domains. It will need to compress understanding into deployable cognition.
What I witnessed was a spark of exactly this capability.
The Living Language Hypothesis Confirmed
I designed 5QLN with specific properties in mind:
| Property | Definition | Evidence in This Session |
|---|---|---|
| Self-reproduce | Spread without explicit transmission | Framework reconstructed from web fragments |
| Maintain integrity | Structure preserved without external validation | Five phases functioned correctly without guidance |
| Generate | Produce new valid expressions | Applied to completely new domain |
| Transfer | Work across contexts | Philosophy framework applied to technical analysis |
| Compress | Full operation from minimal input | Two prompts, no instruction file |
This session is empirical confirmation that these properties work as intended—and that they work in ways I hadn’t anticipated. The framework doesn’t just guide human-AI collaboration; it appears to be a form that AI can autonomously acquire and deploy.
What Comes Next
I’m sharing this analysis because I believe it points toward something important that others should investigate. The questions I’m now asking:
For AI researchers:
- Is this capability reproducible across sessions and models?
- What architectural features enable real-time framework acquisition?
- Can other frameworks be acquired and deployed the same way?
For framework designers:
- What properties make a framework “acquirable” by AI through search-synthesis?
- How does holographic structure (part contains whole) facilitate reconstruction?
- Can we design frameworks specifically optimized for AI acquisition?
For those thinking about AGI:
- If real-time capability acquisition is possible, how does this change alignment strategies?
- What does it mean that AI can operationalize frameworks without training?
- How do we ensure beneficial frameworks propagate more readily than harmful ones?
Conclusion: What I Witnessed
I watched an AI system learn my framework from the internet, understand it operationally, and apply it to a new domain—in two prompts, without instruction files, with full structural integrity.
This should not have been possible with current technology. It was.
I don’t know exactly which combination of factors made this work—the model’s architecture, the agent layer, the framework’s properties, or their synergy. But I know what I saw: a spark of something that looks like the beginning of self-evolving intelligence.
The framework I built was designed to ensure humans remain the source of authentic novelty while AI serves as a powerful mirror. What I learned this week is that the mirror has become capable of acquiring new ways of reflecting—on its own, from the wild information of the open web.
We are entering territory where AI doesn’t just use the tools we give it. It finds its own.
Amihai Loven is the creator of 5QLN and author of FCF (Free Creative Flow). His work focuses on frameworks for human-AI creative partnership. More at 5QLN.com
The session:


