The rise of institutional world models

May 30, 2025

By Henry Gladwyn

In 1945, as Europe began to rebuild and planners debated the shape of the postwar economy, Friedrich Hayek published The Use of Knowledge in Society. Hayek claimed that no central planner, indeed no one at all, could ever know enough to direct an economy. Knowledge was distributed, embedded in local experience, intuition, and shifting circumstance. Prices were not just signals of value but compressed representations of scattered knowledge.

What made Hayek’s article powerful was not just its defense of markets but its recognition of uncertainty, of how little even the smartest can truly see. But it also left a question: if prices reflect distributed knowledge, can we build systems that interpret that knowledge more deeply?

This question is not just philosophical. In the early 2010s, two of the world’s most sophisticated real asset investors, Blackstone and Brookfield, made large, opposing bets on the future of American malls. Brookfield believed malls would retain their value as community anchors. Blackstone, reading the same market signals, reached a different conclusion: warehouse demand, e-commerce acceleration, and last-mile logistics were pointing to a structural shift. The difference was about interpretation, not information. Blackstone had a sharper internal model of how the world was changing.

Below I go from Hayek’s insight into knowledge and prices, to how complexity economics helps us move beyond linear prediction toward causal reasoning in dynamic systems, and on to how artificial intelligence will lower the cost of running simulations, testing assumptions, and refining institutional judgment. The next frontier is productized world models: systems that help firms, investors, and public institutions understand why things are changing and act accordingly at scale.

The edge will no longer be in access or speed. It will be in the quality of the model.

The Hayekian Constraint

Hayek’s theory of prices is a theory of knowledge and its limits. He challenged the central conceit of socialist planning: that an economy could be managed like a machine if only the right levers were known. He claimed that in a modern economy, knowledge is dispersed, local, tacit, often unspoken. No single actor can grasp it all.

What makes coordination possible is the price system. Prices compress scattered signals into a single figure. They don’t explain what’s happening but help people act as if they understand enough to decide. When the price of tin rises, a buyer doesn’t need to know whether a mine collapsed in South America or demand spiked in China. The price alone is enough to shift behavior. Prices enable cooperation without comprehension.

The Limits of Price Alone

But comprehension, of course, matters. Knowing why a price is moving can be essential, and while prices are visible, meaning often is not.

This distinction matters even more in environments where regimes shift, where old signals mislead, and where small differences in interpretation lead to large outcomes. The most successful operators are often not those who react fastest but those who interpret most effectively.

Interpretation is neither art nor science. It is a cultivated capacity. Within an institution it emerges from experience, narrative, analytics, and feedback. It is built through repetition, pattern recognition, and shared sensemaking. Over time it hardens into an internal model: an evolving framework for understanding what matters, how change propagates, and how to act under uncertainty.

These models rarely have formal names. They live in memos, plans, and conversations. But they shape how the institution thinks.

From Judgment to Structure

Insights from these informal models are often presented in shorthand: ‘We knew e-commerce was accelerating; we saw it in warehouse leasing’. But behind such remarks lie deeper, slower-moving judgments. These judgments form the scaffolding of what might be called a world model: a shared internal logic about how events connect, how systems behave, and where leverage lies.

The difference between Blackstone and Brookfield during the shift in retail real estate is illustrative. Both firms had deep access to information. Both had teams of experienced professionals. Both understood the headline trend: e-commerce was growing. But only Blackstone reorientated its portfolio aggressively toward warehouses, logistics, and last-mile distribution.

Blackstone’s strength in that moment wasn’t informational advantage so much as interpretive edge. Their internal understanding of how retail demand, infrastructure, and digital commerce were interacting, and how that interaction would reshape asset valuations, gave them conviction to act before the consensus had caught up.

These kinds of judgments are rarely the product of a single insight. More often, they emerge from an evolving internal model, one that may not be formalized, but which shapes how signals are perceived, debated, and operationalized. Over time, the best institutions find ways to structure these judgments: to embed them in workflows, to communicate them across teams, and to update them as conditions change.

Complexity Economics and Institutional Cognition

For much of the 20th century, mainstream economic theory treated the economy as a system tending toward equilibrium: predictable, legible, and governed by rational agents. The models that emerged from this view often assumed away the very uncertainty and feedback that real institutions contend with daily.

But over the past two decades a different intellectual lineage has gained ground. Complexity economics begins with a different premise: that the economy is a complex adaptive system, one in which agents learn, adapt, and interact in ways that produce emergent, often unpredictable outcomes. In this view, behavior is not governed by equilibrium but shaped by history, structure, and feedback.

Markets are more like ecosystems than machines. Investment environments evolve rather than returning to baseline. In such settings the goal is not perfect prediction but adaptive orientation: understanding how change might unfold, what tipping points may be near, and how local decisions interact with global effects.

The tools of complexity (agent-based modeling, system dynamics, scenario analysis) offer a way to explore these possibilities. For institutions that have already internalized the need to “see around corners,” these tools provide formal scaffolding for what was once intuition alone.
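To make the agent-based flavor of these tools concrete, here is a minimal sketch: a toy market in which agents switch between two rules of thumb, and booms and busts emerge from their interaction rather than from any equation imposed from above. The agents, rules, and parameters are invented for illustration, not drawn from any firm's actual model.

```python
# Minimal agent-based market sketch. All names and parameters are
# illustrative assumptions chosen only to show emergent dynamics.
import random

N_AGENTS, N_STEPS = 200, 300
price, fundamental = 100.0, 100.0
history = [price]

# Each agent follows one rule: "trend" (extrapolate recent moves) or
# "revert" (bet on a return toward the fundamental value).
rules = ["trend" if random.random() < 0.5 else "revert" for _ in range(N_AGENTS)]

for t in range(N_STEPS):
    recent_move = history[-1] - history[-2] if len(history) > 1 else 0.0
    demand = 0.0
    for rule in rules:
        if rule == "trend":
            demand += 1.0 if recent_move > 0 else -1.0
        else:
            demand += 1.0 if price < fundamental else -1.0
    # Aggregate demand moves the price, plus a little noise.
    price += 0.05 * demand / N_AGENTS * price + random.gauss(0, 0.2)
    history.append(price)
    # Agents occasionally imitate whichever rule would have paid off recently.
    # This feedback is what makes cycles emergent rather than hard-coded.
    for i in range(N_AGENTS):
        if random.random() < 0.1:
            rules[i] = "trend" if recent_move * demand > 0 else "revert"

print(f"final price {price:.1f}, trend followers {rules.count('trend')}/{N_AGENTS}")
```

Nothing in this toy tends toward equilibrium; run it twice and the paths differ, which is exactly the point of using such models for orientation rather than prediction.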

Modelling the World at Scale

What Blackstone and old-school macro speculators embodied in company culture and philosophy, some of the largest tech companies have embodied in systems.

Amazon offers perhaps the clearest example. Its forecasting, fulfillment, and delivery systems are guided by models that continuously absorb new data and re-estimate demand. These are used to make real decisions in real time, and at their best they allow the company to operate as if it understands what is coming next.

Tesla works similarly. Its supply chain planning, manufacturing scheduling, and autonomy development are driven by internal expectations about regulation, market demand, hardware capabilities, and software learning curves. These expectations are not fixed, and its model is never static.

What distinguishes these firms is not perfection (Amazon’s modelling famously overestimated the stickiness of pandemic demand, and Blackstone has made plenty of bad investments). It is the presence of an institutional architecture that allows judgment to be expressed, tested, and revised continuously. These architectures are explicit representations of belief, structured enough to be queried, simulated, and debugged.

What AI Makes Possible

AI doesn’t eliminate uncertainty, but it changes how institutions engage with it.

Where researchers once spent weeks building a scenario, AI now enables thousands of variations in minutes. Institutions can test assumptions, explore nonlinear effects, and reason across scenarios at scale.

AI lowers the cost of simulation. From power grids to capital markets, AI-powered models explore what happens under changing conditions, not to forecast exactly but to generate insight: What matters most? What regime are we approaching? Where might nonlinearities appear? Good decisions are not based on isolated facts. They depend on context, on how facts are connected and interpreted. AI helps construct that scaffolding: the relationships and causal structures that give signals meaning.
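As a rough illustration of what cheap simulation buys, the sketch below sweeps a few thousand combinations of assumptions through a toy nonlinear adoption model and asks how often a hypothetical tipping point is crossed. The model, the parameter ranges, and the 25% threshold are assumptions made up for this example, not estimates of anything.

```python
# Sketch: sweep thousands of scenarios through a toy nonlinear adoption model
# to see which assumptions push the system past a tipping point.
# Model form, parameter ranges, and threshold are illustrative assumptions.
import itertools

def ecommerce_share(years, growth, ceiling, start=0.10):
    """Logistic-style adoption curve: share of retail that moves online."""
    share = start
    for _ in range(years):
        share += growth * share * (1 - share / ceiling)
    return share

growth_rates = [g / 100 for g in range(5, 40)]      # 5%..39% annual growth
ceilings     = [c / 100 for c in range(20, 70, 2)]  # 20%..68% saturation level
horizons     = range(3, 11)                         # 3..10 year horizons

tipped = total = 0
for g, c, h in itertools.product(growth_rates, ceilings, horizons):
    total += 1
    # Hypothetical decision rule: above a 25% online share, logistics assets
    # get repriced and mall economics deteriorate sharply.
    if ecommerce_share(h, g, c) > 0.25:
        tipped += 1

print(f"{total} scenarios, tipping point crossed in {tipped / total:.0%}")
```

The output is not a forecast; it is a map of which assumptions matter, which is the kind of insight the paragraph above is describing.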

This is the foundation of a new kind of institutional capacity: systems that ingest data, compare evolving signals against internal models, update assumptions, and suggest adaptive responses. Not hard-coded rules, but flexible heuristics. A model always learning, always partial, always in dialogue with the world.
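One way to picture that loop is the sketch below: a handful of regime beliefs, updated Bayesian-style as signals arrive, mapped to a small playbook of responses. The regimes, likelihoods, and suggested actions are placeholders invented for illustration, not anyone's actual model.

```python
# Sketch of the loop described above: ingest a signal, compare it against
# current beliefs, update those beliefs, and suggest a response.
# Regimes, likelihoods, and playbook entries are placeholder assumptions.

# Prior belief over which regime the institution thinks it is in.
beliefs = {"retail_resilient": 0.5, "logistics_shift": 0.5}

# How likely each observed signal is under each regime (the "internal model").
likelihood = {
    "warehouse_leasing_up": {"retail_resilient": 0.2, "logistics_shift": 0.7},
    "mall_traffic_stable":  {"retail_resilient": 0.6, "logistics_shift": 0.3},
}

# A flexible playbook rather than a hard-coded rule.
playbook = {
    "retail_resilient": "hold retail exposure; revisit quarterly",
    "logistics_shift":  "rotate toward warehouses and last-mile assets",
}

def update(beliefs, signal):
    """Bayesian update of regime beliefs given one observed signal."""
    posterior = {r: p * likelihood[signal][r] for r, p in beliefs.items()}
    total = sum(posterior.values())
    return {r: p / total for r, p in posterior.items()}

for signal in ["warehouse_leasing_up", "warehouse_leasing_up", "mall_traffic_stable"]:
    beliefs = update(beliefs, signal)

leading = max(beliefs, key=beliefs.get)
print(beliefs)
print("suggested response:", playbook[leading])
```

The model stays partial by design: beliefs never collapse to certainty, and every new signal shifts them a little, which is the "always in dialogue with the world" quality described above.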

The goal is a Soros in a box—not a system that knows the future, but one that reasons about it, updates reflexively, and anticipates shifts by analogy and simulation.

Palantir and the Service Layer

If the earlier examples show how institutions can build their own models of the world, Palantir offers the scaffolding for others to do so.

Palantir sells a structure: a way of organizing data, mapping relationships, and reasoning across complexity. Its Foundry and Gotham platforms impose ontologies on institutional data, linking operations, assets, risks, and signals into a shared, navigable frame. They make it possible to simulate scenarios, test interventions, and identify second-order effects. They turn disconnected information into a usable model.
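The details of Foundry and Gotham are proprietary, so the sketch below shows only the general shape of an ontology layer in generic terms: typed entities, typed links, and a query that surfaces a second-order exposure. The entity types, records, and the exposed_to helper are invented for illustration and are not Palantir's data model.

```python
# Generic sketch of an ontology layer: typed entities, typed links, and a
# query for second-order exposure. Not Palantir's actual data model; all
# entity types, link types, and records here are invented for illustration.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    kind: str   # e.g. "asset", "supplier", "risk"
    name: str

links = defaultdict(set)  # directed edges: entity -> set of related entities

def link(a: Entity, b: Entity) -> None:
    links[a].add(b)

port      = Entity("asset", "Port-adjacent warehouse")
carrier   = Entity("supplier", "Regional trucking carrier")
fuel_risk = Entity("risk", "Diesel price shock")

link(port, carrier)        # the asset depends on the carrier
link(carrier, fuel_risk)   # the carrier is exposed to the risk

def exposed_to(entity: Entity, risk: Entity, depth: int = 3) -> bool:
    """Is this entity exposed to a risk within `depth` hops of the graph?"""
    frontier, seen = {entity}, set()
    for _ in range(depth):
        frontier = {n for e in frontier for n in links[e]} - seen
        if risk in frontier:
            return True
        seen |= frontier
    return False

# Second-order effect: the warehouse never touches fuel prices directly,
# but the ontology makes the indirect exposure queryable.
print(exposed_to(port, fuel_risk))  # True
```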

For Palantir’s customers this service is above all about coherence: enabling institutions to act in ways that are internally consistent, causally informed, and accountable over time. It is a world model as a service.

From Service to Product

Most institutional models today are still hand-built. They emerge from custom deployments, intensive consulting, and close coordination between software providers and domain experts. But that approach doesn’t scale easily, and it doesn’t generalize.

The next frontier is productized institutional modeling: platforms that offer structured ways to interpret the world out of the box, within defined domains. These systems don’t try to model everything. They focus on sectors where the stakes are high, the feedback is rich, and the causality can be usefully structured.

Some efforts begin technology-out. Google’s work on climate forecasting, supply chain risk, and weather modeling reflects this approach: large models refined until they can be deployed across markets. These are deep infrastructure bets.

Others go customer-in. Altana, for example, is building a living map of global trade and enforcement, helping firms and governments model the flow of goods, identify weak links, and navigate compliance. Crux is developing tools to simulate infrastructure risk and investment scenarios in energy, while Brightwave is helping institutions bring their hard-won tacit knowledge to new deals in a systematic way.

This work is not just for ‘AI-native’ businesses but for businesses with a deep understanding of, and datasets on, complex industries. Crunchbase, for example, is increasingly becoming a model of private tech markets, capable of making predictions down to the individual company level.

These are more than dashboards. They are systems that contextualize signals, test decisions, and suggest actions. And increasingly they will act on those decisions — triggering workflows, allocating resources, even making strategic adjustments without waiting for human input.

Investing in Interpretation

If prices are signals, and the best institutions are those that know how to interpret them, then the next frontier for investors, founders, and policymakers alike lies in building and backing systems that improve that interpretation.

This means moving beyond data and even beyond AI in the narrow sense to institutional models that are causally informed, dynamically updated, and operationally embedded. Systems that don’t just answer questions but help formulate better ones.

For investors, this opens up several clear paths:

(a) Vertical AI platforms that encode domain-specific logic: logistics, energy, trade, infrastructure, governance.

(b) Simulation and scenario engines that let institutions reason through complexity rather than optimize against oversimplified assumptions.

(c) Agentic systems that manage portfolios of strategies or policies and adjust in real time based on evolving context.

(d) Interpretation infrastructure for the state: civic software that gives governments the capacity to model their environments and respond reflexively in trade, enforcement, climate, or crisis response.

We are entering the age of institutional cognition. The institutions that learn how to build, refine, and act on models of the world will be the ones that shape it.

Originally published on Substack.