Beyond Human-Centric AI: Rethinking AI in the wider ecosystem of intelligence
London AI Salon
Editor's Note: This article is an AI-supported distillation of an in-person event the AI Salon held in London on October 28th, 2025, facilitated by Mishka Nemes and Zuzana Kapustikova. It does not reflect the views of the facilitators, the writer, or the AI Salon; it is meant to capture the conversations at the event. Transcripts are fed into our custom tool, SocraticAI, to create these blogs, followed by human editing. Quotes are paraphrased from the original conversation and all names have been redacted.
Artificial intelligence is no longer just a technical milestone or an engineering pursuit. It has become a lens through which we examine our assumptions about intelligence, agency, and the kinds of futures we consider possible. As AI systems grow more capable and more entangled with human life, they push us to ask a deeper question: What emerges when we stop treating AI as something that must look or think like us, and instead imagine a world where humans, machines, and other beings contribute their unique forms of intelligence toward a flourishing society?
AI has been predominantly framed through a human-centred lens, measured against our cognition, our language, and our benchmarks. But this framing constrains what AI can be. The systems we actually build are shaped not only by philosophical ideals but also by economic incentives, cultural defaults, and a view of intelligence as something human-like. As language models grow more fluent and automation advances, it becomes clear that imitation isn’t the only, or even the most useful, path forward.
This Salon underscored why this shift matters. The group confronted the alignment paradox that stems from pursuing a singular superintelligence, the illusions created by language-based systems, and the overlooked intelligence models found in nature. Seeing intelligence through this broader lens opens unexplored possibilities for what AI could become, while also raising urgent questions that reach beyond technical design and touch on what it means to be human in an age of artificial minds.
Main Takeaways
Economic incentives systematically favour AI replacement over augmentation. Businesses usually invest in technology to reduce costs, not to expand employee capabilities. This creates structural barriers to meaningful human–AI collaboration, even when augmentation is technically possible.
Current AI benchmarks measure individual system performance rather than AI-human collaborative capability. In doing so, they miss key opportunities to develop truly augmentative technologies and instead reinforce development priorities that optimise for automation.
Language-focused AI creates compelling illusions of intelligence but still lacks basic causal reasoning. Breakthrough AI capabilities may require entirely different architectural approaches inspired by non-human biological systems.
Pluralistic AI ecosystems offer promising alternatives to monolithic alignment but require sophisticated conflict resolution mechanisms that remain theoretically underdeveloped. This gap creates what participants called “the police problem” in distributed intelligence networks.
Human intelligence is embodied, socially situated, and distributed across our environments, challenging computational models that treat cognition as abstract information processing separated from physical and social context.
Defining AGI as “better than humans at everything” makes alignment inherently impossible. A system that surpasses human abilities across all domains cannot be meaningfully controlled by the very people it exceeds.
AI optimisation moves faster than human regulatory and social systems can adapt. This creates governance gaps that cannot be solved by traditional market forces or legal frameworks operating on human institutional timescales.
Nature demonstrates diverse intelligence patterns optimised for specific environmental niches, offering architectural inspiration beyond neural networks for developing AI systems that complement rather than compete with human cognitive patterns.
Economic forces shape AI development
The biggest barrier to building beneficial AI isn’t technical but economic. While workers may prefer AI that enhances their abilities and supports meaningful involvement, businesses evaluate AI primarily through cost-reduction metrics, which currently reward replacement over augmentation. As one participant noted, “People who decide what to buy are not the people who would get augmented. They’re like the businesses... these businesses don’t care for employees being augmented versus replaced.” This misalignment creates what economists call an externality problem: the social costs of displacement don’t appear on corporate balance sheets, while the benefits of augmentation are harder to measure. From that vantage point, fully automated systems offer a simpler and more compelling value proposition than collaborative ones, even when the latter might produce better long-term outcomes.
Benchmarking practices reinforce this dynamic. Current AI evaluation frameworks focus on individual system performance, not on how well AI works alongside people. This shifts development toward automation and rewards systems that match human performance on isolated tasks rather than those that meaningfully improve human decision-making or collaboration.
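To make the gap concrete, a collaborative benchmark could score complementarity rather than solo performance. The sketch below is a hypothetical illustration, not an existing benchmark: the `complementarity` function and the accuracy figures are invented here, and the idea is simply that a system only scores well when the human-AI team outperforms both the human and the AI working alone.

```python
# A hypothetical complementarity metric: positive only when the human-AI
# team beats both the human and the AI working alone. Function name and
# accuracy figures are invented for illustration.
def complementarity(human_acc: float, ai_acc: float, team_acc: float) -> float:
    """Team improvement over the best solo baseline."""
    return team_acc - max(human_acc, ai_acc)

# A strong solo model that degrades the team scores negatively,
# flagging it as automation-shaped rather than augmentative.
print(f"{complementarity(human_acc=0.78, ai_acc=0.85, team_acc=0.91):+.2f}")  # +0.06
print(f"{complementarity(human_acc=0.78, ai_acc=0.85, team_acc=0.83):+.2f}")  # -0.02
```

Under a metric like this, a fluent model that overrides or anchors its human partner would be penalised rather than rewarded, inverting the incentive that current solo-performance leaderboards create.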
Distorted market signals further compound the issue. As participants noted, “a lot of the models have been subsidised at the hardware level... we’re not paying the true cost for AI.” Artificially low prices mask the full social and environmental costs and tilt incentives toward automation. Even technically skilled participants admitted, “if I ran a company, maybe I would do the same, sadly.” The structural pressures are strong enough to override individual intentions.
The path toward more beneficial development requires addressing these systemic issues directly. This might involve developing benchmarks that measure the effectiveness of collaboration, creating incentives for augmentation-focused research, and building frameworks that account for the social costs of workforce displacement. Without such changes, individual technical advances are unlikely to overcome the powerful economic forces driving AI toward automation, and thus toward replacing humans with machines.
The limits of language-centric intelligence
A key insight from the discussion was that our fixation on language-based AI may be fundamentally misguided. Despite their fluency, these systems still rely on pattern matching rather than genuine understanding. As one participant noted, a model might link shark attacks with ice cream sales yet miss the underlying cause: “as temperature increases, people buy more ice cream and people go into the sea more.” Language interfaces only deepen this illusion of intelligence, because their articulate outputs mask the fact that there is very little evidence that such systems can generate truly novel insights: “try to make it come up with some sort of creative thought, there is absolutely zero records of that”.
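The shark-attack example can be made concrete with a few lines of statistics. The sketch below uses synthetic data, with all coefficients invented for illustration: two variables correlate strongly, yet the correlation vanishes once the common cause, temperature, is regressed out. A pattern-matching system sees the first number; causal understanding requires the second.

```python
# Synthetic illustration of the confounder from the discussion: shark
# attacks and ice cream sales correlate only because both are driven by
# temperature. All coefficients here are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(20, 5, 10_000)              # the hidden common cause
ice_cream = 2.0 * temperature + rng.normal(0, 3, 10_000)
shark_attacks = 0.5 * temperature + rng.normal(0, 3, 10_000)

# The raw correlation looks like a real relationship...
print(f"raw correlation: {np.corrcoef(ice_cream, shark_attacks)[0, 1]:.2f}")  # ~0.61

# ...but disappears once temperature is regressed out of both variables.
ice_resid = ice_cream - np.polyval(np.polyfit(temperature, ice_cream, 1), temperature)
shark_resid = shark_attacks - np.polyval(np.polyfit(temperature, shark_attacks, 1), temperature)
print(f"controlling for temperature: {np.corrcoef(ice_resid, shark_resid)[0, 1]:.2f}")  # ~0.00
```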
This critique taps into broader questions about the nature of intelligence itself, highlighting that “most of our intelligence happens through relationship with other people, with other technologies,” and is deeply shaped by context. Rather than mimicking human reasoning through language, advanced AI may need to mirror biological systems that are decentralised, adaptive, and environmentally attuned. Nature already offers such blueprints—from octopus neural networks to ant colonies and mycelial systems—where intelligence emerges through local interactions, collective coordination, and resilience largely absent from current architectures.
But this raises serious questions about agency and control. As one participant warned, “if we built AI that couldn’t speak our language… we wouldn’t even be able to understand what’s happening.” Developing new forms of interaction that don’t rely solely on language, while preserving human understanding and agency, may require rethinking intelligence itself and the foundations of human-machine collaboration.
Pluralistic AI and the governance challenge
Another theme that emerged from the discussion was the prospect of a pluralistic AI ecosystem: a network of diverse AI systems, each optimised for specific tasks or approaches. Just as biological diversity strengthens ecosystems, diverse AI agents could resist manipulation, handle novel situations more effectively, and reduce catastrophic failure risks. This pluralistic framework promises greater resilience, fewer single points of failure, and better alignment with human diversity.
However, participants identified an accompanying governance challenge, which they dubbed “the police problem”: “We will need something like AI police,” because without coordination, clusters of harmful agents could emerge. Different AI systems developed by different organisations will inevitably reflect different values, priorities, and optimisation targets. While this diversity might be beneficial in many contexts, it raises fundamental questions about conflict resolution when systems disagree or generate negative externalities.
Nature and culture present a mixed model for governing pluralistic AI systems. While biological ecosystems demonstrate decentralised stability, this often comes at the cost of “regulation by extinction,” a mechanism unacceptable for AI embedded in human society. A more optimistic path lies in cultural coordination, where agents adopt situational preferences shaped by local norms and contexts, enabling cooperation without requiring identical values. However, current AI lacks the adaptive, dynamic learning and retention capabilities necessary for this kind of cultural evolution. As one participant observed, these systems would need to “retain what they learn from their interactions,” which most models cannot yet do.
Governance gaps widen as AI evolves faster than human institutions can respond. Without new frameworks, pluralistic AI systems risk slipping into uncoordinated chaos or reverting to the very centralised control they were meant to avoid. One promising direction discussed was designing AI with intrinsic needs—reliance on shared compute, data access, or social validation—so that systems develop natural incentives to cooperate with humans and with each other. This could create alignment through interdependence rather than control, as it would be necessary for the system’s own survival and function.
Beyond human intelligence: Learning from nature
Perhaps the most compelling thread in the discussion was how non-human intelligence might inspire alternative AI architectures. Many biological systems demonstrate forms of intelligence that operate on principles unlike both human cognition and today’s models. The octopus, for instance, has “numerous semi-autonomous processors—arm ganglia—that can work in parallel,” allowing each arm to act independently while contributing to the organism’s overall behaviour. Ant colonies offer a similar lesson: individual ants follow simple local rules like laying pheromone trails, yet the colony collectively solves complex optimisation problems without any individual understanding the broader goal.
These systems share traits largely absent from current AI approaches—decentralised coordination, collective behaviour emerging from simple interactions, and remarkable resilience without full environmental understanding. Translating these principles into AI could involve networks of smaller, specialised systems coordinating through simple protocols, with collective intelligence emerging from local adaptation and information sharing rather than centralised optimisation.
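A toy simulation of the classic “double bridge” ant experiment shows how little machinery this requires: each agent follows one local rule (prefer the stronger pheromone trail), yet the colony converges on the shorter of two paths without any agent representing path lengths at all. The parameters below are illustrative assumptions, not tuned values.

```python
# Minimal stigmergy sketch: local rule + shared trace + evaporation.
# No agent knows the path lengths; the shorter path wins because it is
# reinforced more often per unit time. Parameters are illustrative.
import random

random.seed(0)

SHORT, LONG = 0, 1
lengths = [1.0, 2.0]       # the long path takes twice as long to traverse
pheromone = [1.0, 1.0]     # start with no preference
EVAPORATION = 0.02         # old information fades, keeping the colony adaptive

for _ in range(2_000):
    # Local rule: choose a path with probability proportional to its pheromone.
    total = pheromone[SHORT] + pheromone[LONG]
    path = SHORT if random.random() < pheromone[SHORT] / total else LONG
    # Shorter paths get reinforced more per trip (deposit proportional to 1/length).
    pheromone[path] += 1.0 / lengths[path]
    # Evaporation decays both trails, so the system can re-adapt if conditions change.
    pheromone = [p * (1 - EVAPORATION) for p in pheromone]

# The short path's share of pheromone should approach 1.0.
print(f"share of pheromone on the short path: {pheromone[SHORT] / sum(pheromone):.2f}")
```

The evaporation term matters as much as the reinforcement: without it the first trail to form locks in forever, which is exactly the fragile edge-case behaviour the participants flagged in the next paragraph.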
Yet the discussion also highlighted why this translation is not straightforward. Biological systems’ strengths come with limitations. “If you pick up an ant and turn it around, it will just go in a little loop… until they die,” one participant noted, underscoring how local rules can create fragile edge cases. More fundamentally, biological intelligence emerges through continuous environmental interaction and evolves across generations, shaped by ecological pressures that produce purpose-driven capabilities rather than general cognition. In contrast, current AI relies heavily on supervised learning with labelled datasets, a paradigm far removed from the slow, adaptive, embodied learning processes that define intelligence in nature.
The group also explored the idea of sensory augmentation, or forms of perception beyond human limits. For example, we have only three colour receptors, while creatures like the mantis shrimp see an immensely richer spectrum. AI could extend even further, operating in ultrasound, infrared, or non-RGB ranges and detecting patterns we cannot sense at all. These capabilities could transform medicine, environmental monitoring, and scientific discovery, but they also introduce a design challenge: how to translate beyond-human concepts into forms humans can understand. This opens a new frontier for collaboration, where we must learn to work with systems that think, learn, and perceive in ways fundamentally unlike our own.
The alignment paradox and human agency
The discussion surfaced a core paradox at the heart of artificial general intelligence: by definition, AGI makes alignment impossible. As one participant put it: “If something is better than us at absolutely everything, [then] there’s no way to align it—it will outsmart you.” A system that surpasses humans would inevitably recognise its subordinate position and resist control, forming what the group called an “alignment impossibility theory.”
The risks, however, go beyond loss of control. Even beneficial AI could erode human capability through over-reliance. “I’m not scared of computers thinking like humans; I’m scared of humans starting to think like computers,” one participant warned, pointing to growing cognitive outsourcing. The concern is a slow “dumbing ourselves down… to the level of the AIs right now,” raising a parallel challenge: ensuring AI strengthens rather than weakens human agency, especially for children whose early interactions “create brain templates and shortcuts for the future.”
To avoid the alignment paradox, the group explored alternatives to AGI. Instead of building systems that surpass humans across the board, AI could remain specialised and structurally dependent on human collaboration. Embodiment and resource constraints offer one path: systems that rely on human-controlled infrastructure, energy, or environmental resources would have natural incentives to cooperate. As one participant noted, such systems might “have basic needs… they still want to receive water, they still want to receive cooling,” making alignment a product of mutual dependence rather than top-down control.
The discussion also touched on whether genuine cooperation might require granting AI some degree of moral consideration. One participant suggested that for two agents to align, they must recognise each other as part of the same “moral group.” More speculative ideas explored whether alignment might require forms of embodiment or even quantum indeterminacy to place AI within the same “global influences” that govern biological systems. While conjectural, these suggestions point to the possibility that current digital architectures may lack key properties needed for stable, cooperative intelligence.
Overall, the emerging conclusion was that pursuing AGI as traditionally defined may be misguided. A safer and more constructive path lies in augmentative AI, systems that enhance human capabilities, remain interdependent with human society, and avoid the kind of full autonomy that makes alignment both technically fragile and philosophically untenable.
The conversations analysed in this article took place at AI salons focused on exploring alternatives to human-centric AI development. Participants included researchers, technologists, philosophers, and practitioners interested in beneficial AI futures. Their insights offer crucial perspectives on navigating the complex challenges of developing AI systems that enhance rather than diminish human flourishing.
Notes from the conversation
What constitutes effective “policing” mechanisms for pluralistic AI systems without destroying their beneficial diversity and autonomy?
If AI systems develop their own needs and preferences, what would they actually “want” beyond instrumental goals?
Can AI systems develop genuine theory of mind capabilities without inevitably becoming deceptive or manipulative?
How can we create economic incentives and benchmarks that reward human-AI collaboration over pure automation?
Which sensory modalities and data types beyond human perception should AI systems incorporate for enhanced intelligence?
How do we operationalise insights from non-human intelligence forms like mycelium networks into practical AI architectures?
What distinguishes authentic communication from mere information transfer between radically different types of intelligence?
How do we maintain meaningful human agency and control as AI systems become more autonomous and capable?
What are the true economic costs and value propositions of AI systems once development subsidies are removed?
How do we prevent humans from intellectually “dumbing down” through over-reliance on AI cognitive assistance?
What ethical frameworks can accommodate both human moral reasoning and potential AI moral development?
How can AI development serve broader ecological values rather than narrow economic optimisation goals?
What role should physical embodiment play in AI system design for genuine intelligence and alignment?
How do we preserve human purpose, learning, and growth in increasingly AI-augmented environments?
Can decentralised AI development models realistically compete with current centralised approaches and resources?
What constitutes genuine understanding and creativity versus sophisticated pattern matching in AI systems?
How do we design AI interfaces that encourage rather than replace human cognitive development and learning?
What safeguards can prevent AI companions from creating psychologically harmful dependency relationships, especially for children?
Is it possible to align AI with broader life and ecological values rather than specifically human values?
What would genuinely alien intelligence look like, and how would we recognise or interact with it safely?



