LLM Experience: The Missing Discipline in AI Development
July 12th, 2025
We’ve reached an inflection point in AI development. The models are capable, the APIs are accessible, and LLMs are essentially an inference endpoint away from any application. We’ve hit model saturation: the raw capabilities exist to solve most problems we throw at them.
What matters now isn’t whether your AI can perform a task; it’s how well it performs that task. As agents become ubiquitous, the differentiator shifts from capability to experience. Just as User Experience focuses on human cognition and Developer Experience optimizes for developer cognition, LLM Experience (LX) optimizes for AI cognition: how context flows through the system.
Why LX Matters
With model capabilities essentially solved, the competition has shifted to execution quality. The same underlying LLM can power a clunky, frustrating experience or a seamlessly intelligent one; the difference lies in how you architect the information flow and interactions.
Here’s a key insight about AI workflows: they’re essentially task definition and context engineering. When you control the task definition—you’re the person writing the prompt—you can optimize for predictable scenarios. But in most real-world applications, you don’t control what users will ask or how they’ll frame their requests.
When you can’t control task definition, context engineering becomes absolutely critical. This is where LX design transforms good AI systems into exceptional ones that consistently delight users.
The Core Challenge of Context Engineering
Think about what happens when a user asks your AI system a question. Behind the scenes, your system needs to:
- Parse the user’s intent from potentially ambiguous language
- Retrieve relevant information from various sources
- Synthesize that information into a coherent response
- Maintain context across multiple turns
- Anticipate follow-up questions or related needs
Most approaches address each of these as separate technical challenges. LX design creates additional value by treating them as interconnected parts of a single information architecture.
In practice, you have to anticipate user behavior and the likely task definitions in order to anticipate what the LLM will need. This means designing your information architecture not just for retrieval accuracy, but for AI comprehension and synthesis.
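To make that concrete, here is a minimal sketch of those steps wired together as one pipeline. The function names and the keyword-based retrieval are illustrative assumptions rather than a specific framework; the point is that every stage shapes the context the LLM ultimately receives.

```python
# A minimal sketch of the flow above as one pipeline (hypothetical names,
# no specific framework implied). Each stage shapes what the LLM finally sees.

def parse_intent(query: str) -> dict:
    """Reduce potentially ambiguous language to a structured task description."""
    return {"topic": query.lower().rstrip("?"), "is_question": query.strip().endswith("?")}

def retrieve(intent: dict, documents: list[str]) -> list[str]:
    """Pull candidate passages from whatever sources the system owns."""
    return [d for d in documents if any(w in d.lower() for w in intent["topic"].split())]

def build_context(passages: list[str], history: list[tuple[str, str]]) -> str:
    """Synthesize prior turns and retrieved passages into one coherent prompt."""
    turns = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in history)
    return f"Conversation so far:\n{turns}\n\nRelevant material:\n" + "\n".join(passages)

def answer(query: str, documents: list[str], history: list[tuple[str, str]]) -> str:
    intent = parse_intent(query)
    context = build_context(retrieve(intent, documents), history)
    response = f"[LLM call goes here, grounded in {len(context)} characters of context]"
    history.append((query, response))  # maintain context across turns
    return response
```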
Practical LX Design
Effective LX design operates at multiple levels of abstraction. Rather than thinking about individual prompts or single interactions, you’re designing for the entire context flow through your system.
Consider these specific techniques that exemplify good LX design; a brief sketch of each follows the list:
- Multi-level embedding: Use broad metadata matching for initial filtering, then detailed content matching for precision, mimicking how humans first identify relevant documents before diving into specific passages
- Semantic chunking: Preserve logical document structure by dividing content along natural boundaries like headings, rather than breaking text at arbitrary token counts
- Context prioritization: Ensure the most valuable information is included when token constraints prevent using all relevant content
- Automatic enhancement: Intuitively add relevant information to queries without requiring explicit user requests, similar to how human experts bring related knowledge into conversations
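Multi-level embedding can be sketched as a two-stage search. The example below uses keyword overlap as a stand-in for real embedding similarity, and the metadata/content document shape is an assumption made purely to keep it self-contained:

```python
# Two-stage ("multi-level") retrieval sketch: filter on coarse metadata first,
# then rank the survivors on detailed content. Keyword overlap stands in for
# embedding similarity only to keep the example dependency-free.

def overlap(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)

def multi_level_search(query: str, docs: list[dict], keep: int = 20, top_k: int = 3) -> list[dict]:
    # Stage 1: cheap, broad match against titles/tags to shortlist documents.
    shortlist = sorted(docs, key=lambda d: overlap(query, d["metadata"]), reverse=True)[:keep]
    # Stage 2: precise match against full content, only within the shortlist.
    return sorted(shortlist, key=lambda d: overlap(query, d["content"]), reverse=True)[:top_k]

docs = [
    {"metadata": "pricing plans billing", "content": "Our pro plan costs $20 per seat..."},
    {"metadata": "api authentication", "content": "Create an API key under settings..."},
]
print(multi_level_search("how much does the pro plan cost", docs))
```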
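Semantic chunking, in its simplest form, splits on the document’s own structural boundaries. A minimal sketch, assuming markdown-style headings:

```python
# Semantic chunking sketch: split a markdown document on headings so each
# chunk preserves a logically complete section, instead of cutting at a
# fixed token count mid-paragraph.

import re

def chunk_by_headings(markdown: str) -> list[str]:
    chunks, current = [], []
    for line in markdown.splitlines():
        if re.match(r"^#{1,6}\s", line) and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return [c for c in chunks if c]

doc = "# Setup\nInstall the package.\n\n# Usage\nCall run() with a config."
for chunk in chunk_by_headings(doc):
    print(repr(chunk))
```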
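Context prioritization can be approximated with a greedy fill against a token budget. The 4-characters-per-token estimate and the (score, text) passage shape below are assumptions, not a prescribed interface:

```python
# Context prioritization sketch: when everything relevant will not fit,
# pack the highest-value passages first under a rough token budget.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude heuristic, good enough for budgeting

def prioritize(passages: list[tuple[float, str]], budget: int = 2000) -> list[str]:
    selected, used = [], 0
    for score, text in sorted(passages, reverse=True):  # highest relevance first
        cost = estimate_tokens(text)
        if used + cost <= budget:
            selected.append(text)
            used += cost
    return selected

passages = [(0.92, "Refunds are issued within 5 business days."),
            (0.40, "Our office is closed on public holidays."),
            (0.81, "Refunds require the original payment method.")]
print(prioritize(passages, budget=25))  # keeps the two refund passages, drops the rest
```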
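Automatic enhancement can start as simple query expansion. The RELATED_TERMS map below is a hypothetical stand-in for whatever domain knowledge your system actually holds:

```python
# Automatic enhancement sketch: quietly enrich the user's query with related
# terms before retrieval, the way a human expert brings adjacent knowledge
# into the conversation.

RELATED_TERMS = {
    "billing": ["invoice", "refund policy", "payment methods"],
    "login": ["authentication", "password reset", "sso"],
}

def enhance_query(query: str) -> str:
    extras = [term for key, terms in RELATED_TERMS.items()
              if key in query.lower() for term in terms]
    return query if not extras else f"{query}\n(related: {', '.join(extras)})"

print(enhance_query("Why was I charged twice on my billing statement?"))
```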
Each of these techniques addresses a specific aspect of how LLMs process and synthesize information. They’re not just technical optimizations—they’re design decisions that shape how effectively your AI system can understand and respond to user needs.
The Information Architecture Mindset
The most important shift in LX thinking is moving from prompt-centric to architecture-centric design. Instead of asking “How do I provide the LLM with context?” you ask “How do I structure information so the LLM can perform optimally regardless of user input?”
This architectural mindset changes how you approach every aspect of your agentic workflow architecture. You start thinking about information hierarchy, context flow, and cognitive load—not just for human users, but for the AI systems processing that information.
Good LX design anticipates the kinds of connections and synthesis that LLMs excel at, while structuring information to maximize clarity and coherent context flow.
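One way to picture this shift: instead of hand-tuning a prompt per request, you define a fixed context hierarchy that every request flows through. The template below is a hedged sketch; the section names and their ordering are assumptions, not a prescribed format.

```python
# Architecture-centric sketch: a fixed context hierarchy that any user query
# flows through, rather than a prompt hand-tuned for one request.

CONTEXT_TEMPLATE = """\
## System framing
{framing}

## Task-relevant knowledge (highest priority first)
{knowledge}

## Conversation summary
{summary}

## Current request
{request}
"""

def assemble_context(framing: str, knowledge: list[str], summary: str, request: str) -> str:
    return CONTEXT_TEMPLATE.format(
        framing=framing,
        knowledge="\n".join(f"- {k}" for k in knowledge),
        summary=summary or "(first turn)",
        request=request,
    )

print(assemble_context(
    framing="You answer support questions using only the material provided.",
    knowledge=["Refunds take 5 business days.", "Pro plan is $20 per seat."],
    summary="",
    request="How long until I get my refund?",
))
```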
Moving Beyond Prompts and Tools
The AI development community has naturally focused on prompt engineering and tool selection as the most visible aspects of AI system behavior. These are important foundations, but there’s significant additional value available at the information architecture layer.
LX design is where exceptional value lies. When you optimize how information flows through your system, you improve performance across all user interactions, not just specific prompt templates. When you anticipate user behavior and design your context architecture accordingly, you create systems that feel genuinely intelligent and helpful.
In a world where AI capabilities are commoditized, the companies that win will be those that master the experience layer. As agents become ubiquitous and model access becomes universal, LX design becomes the primary differentiator between AI systems that users love and those they merely tolerate.
LX is the discipline that transforms AI from capable technology into consistently delightful user experiences.
Example in practice: To see LX design principles applied to knowledge organization, visit /llms.txt on this site.
This document demonstrates several key LX concepts: consistent structural formatting that aids AI comprehension (Intent → Key Framework → Practical Tips → Impact), semantic organization that preserves logical relationships between concepts, and information hierarchy that enables both human browsing and AI synthesis. Notice how the Common Themes section provides contextual bridges that help AI systems understand connections across different frameworks, and how the structured approach transforms a collection of blog posts into a coherent knowledge architecture optimized for both retrieval and understanding.
Supporting Resources
The concepts underlying LLM Experience design draw from established research across multiple disciplines, providing a strong theoretical foundation for practical implementation.
The following research areas offer the most relevant insights for understanding and implementing effective LX design:
- Context drift and context degradation: Research in natural language processing demonstrates how conversational context weakens over extended interactions, validating the need for architectural solutions to maintain coherent information flow throughout AI workflows.
- Entropy in systems thinking: Claude Shannon’s information theory and subsequent systems thinking research shows that all systems naturally tend toward disorder without reinforcing structures, directly supporting the architecture-centric approach to LX design.
- Bounded rationality: Herbert Simon’s foundational work on decision-making limitations reveals how cognitive constraints affect both human and artificial agents, informing the design of information architectures that work within these natural boundaries.
- Feedback loops in systems design: Donella Meadows’ research on systems thinking demonstrates how feedback mechanisms can maintain or redirect system behavior, providing the theoretical basis for context prioritization and enhancement techniques in LX design.
- Cognitive load theory: Research in human-computer interaction and cognitive psychology establishes how information presentation affects processing efficiency, directly applicable to optimizing information architecture for AI cognition.
These research foundations collectively support the core premise that information architecture, rather than prompt engineering alone, represents the primary opportunity for improving AI system performance and user experience.