Confusing this with a spontaneous market discovery risks preserving in socialism what was never the true engine of technical change in capitalism. Benanav hopes that a multicriteria approach—the continuous reweighting of efficiency, ecology, care, and free time—will generate the kind of dynamic responsiveness that older forms of socialism lacked.
The technologies of capitalism should not be considered merely as tools that socialism could use better. Capitalism does not eliminate this plurality; it refunctionalizes it, orienting development down the singular path of valorization. This kind of socialism would treat AI as something plastic enough to accommodate uses, values, and social forms that only emerge once it is deployed.
Benanav offers socialism as an answer to a question that capitalism never formulates: how should we democratically balance competing values? Repressed possibilities do not disappear; they persist as latent potentials, available to be rediscovered under different social conditions. Applied to AI, this means that the task is not just to regulate or redistribute technologies whose basic form is taken for granted, but to explore the trajectories that capitalist development foreclosed.
In his scheme, values originate outside of production—in democratic deliberation or the Free Sector—and are then applied to technology via Investment Councils and control bodies. Capitalist innovation is intertwined with state power, imperial hierarchies, and legal engineering. The bet of an AI-ready socialism would be that the generative functions that neoliberals assign to the market—experimentation, discovery, the capacity to make worlds from ideas—can now pass through another medium.
More democracy in the workplace, more participatory technology assessment, more inclusive governance councils—all of this assumes we already know what we value and only need broader input on trade-offs. But structurally, it still assumes a one-way flow: the Demos and the Free Sector generate priorities, and then Investment Councils and economic institutions implement them. These rely on a "Data Matrix," an open system of statistics and modeling, governed democratically, that tracks flows, maps ecological and social limits, and makes trade-offs visible: if we decarbonize at such a rate, build so much housing, and reduce the work week by such an amount, this is what happens.
A socialism that only redistributes the fruits of capitalist technologies will always be chasing a world made elsewhere. Firms cannot hoard surpluses or decide the long-term direction of the economy; they compete on performance according to democratically chosen metrics, not returns to private shareholders. But if socialism is to be more than capitalism with friendlier dashboards—if it is truly a project to collectively remake material life, not just redistribute its outcomes—it has to answer a harder question: can it offer a better way of coexisting with this technology than capitalism does?
Writing in the 1980s, he observed that late capitalism had already de-differentiated the spheres: high and low culture mix, and commercial logic saturates everything, from exhibitions to molecular gastronomy. They were ridiculed not because diversity is a bad goal, but because they appeared as static parameters to be met, rather than transformations emerging from changed social practices.
Take the railroads, nuclear power plants, or language models: if capitalism uses them badly, socialism promises to finally orient them toward the common good. What drives the cross-contamination between domains, the invention of new desires and capacities, and the fusion of imagination and matter? What could language models become if they were not designed around imperatives of corporate monetization and risk management?
But AI exposes a circularity that no democratic procedure can resolve: the values with which we would govern these systems are themselves formed through our encounters with these ever-changing systems. It never answers the question that capitalism does formulate: where does creativity come from, beyond assembly halls and concert halls?
Ecological sustainability, work quality, free time, and care are treated as distinct goods that cannot be crushed into a single index. Even when they recognize that needs are historically shaped, they forget that capacities are too. AI matters less because it is the most important technology or a guaranteed route to emancipation or disaster than because it exposes flaws in socialist thinking that were easier to ignore when the paradigm was the steam engine or the assembly line.
Under those conditions, a socialism that treats technology as a finished script and politics as the art of directing it will always be too late. An AI-ready socialism cannot retreat to a clean division of labor in which politics decides and technology provides. A network of artists and archivists could build specialized models for endangered languages and regional cultures, tailored to materials that their communities actually value.
The idea is not that these examples are the answer, but that an AI-ready socialism would institutionalize the capacity to try out such arrangements, inhabit them and modify or abandon them, and do so at scale, with real resources. And its promise is clear: the market is the vehicle through which human capabilities are expanded, as consumers discover new tastes and entrepreneurs build new worlds.
If socialism wants to respond to capitalism on its own terms, it needs a rival vehicle for world-making, not just the democratized administration of an economy whose creativity happens elsewhere. What they shared, he argued, was the conviction that politics is simply "the care and feeding of the economic apparatus"; they disagreed only on which apparatus.
Institutions would not just balance criteria; they would leave room for recalcitrant projects that still don't fit into any recognized metric and perhaps never will. The unresolved question, then, is not whether socialism can socialize AI while keeping its underlying machinery intact. But this misreads the sources of capitalism's power. Beneath that is a familiar Weberian image of modernity as a set of differentiated spheres—the economy here, science there, politics elsewhere—touched up with a bit of Habermas, who adds that we can coordinate them through communicative discourse.
Socialists rarely questioned this image. With AI, these separations are especially difficult to defend. What is missing is an account of how values emerge from within production and design themselves; of how, around a technology like AI, the distinction between a "functional economy" and "free creativity" becomes so porous it breaks.
Gillian Rose, whose early work investigated how post-Kantian thought sundered Hegel's "ethical life" into lifeless dualisms—values versus facts, norms versus institutions—later called this terrain "the broken medium": the zone where means and ends, morality and legality, are worked out in concrete contexts, not applied from the outside. Treating technology as a purely instrumental sphere that politics directs from outside is not naive; it blinds us to where power resides today.
At this point, a reasonable concern arises: wouldn't anything else just imply chaos? Benanav, for all his sophistication, works within this mold: the Demos and Investment Councils set criteria; firms and Technical Associations implement them; technologies are instruments. AI doesn't quite fit this scheme. You can't slot it into a single sphere and manage it from the outside.
This technology is at once tool, medium, cultural form, epistemic instrument, and site of value formation, as Raymond Williams once described television, but with much less stability. Those older machines could at least be described, albeit incorrectly, as relatively stable tools whose uses were largely fixed at the moment of design. This technology, by contrast, is constantly changing before our eyes. "Generative" is not just a marketing word; it names a real instability.
For socialists, this instability poses a specific problem. Even the dominant definition of AI—closed, general-purpose models in distant data centers, accessed via chat—condenses a series of capitalist decisions about scale, ownership, opacity, and user dependence. Such a system not only responds to existing social relations; it crystallizes them and presents them back as common sense.
This is where AI matters. Jameson spent decades mapping that de-differentiation in culture—film, literature, architecture—but left the economy strangely untouched. We ask, "With what criteria should we shape this?" while the thing itself is shaping the beings who must answer the question. This phenomenology matters: it makes it harder to postpone "the question of technology" (to use Heidegger's phrase in a register he would not have recognized).