Confronting Consciousness, Creativity, and Control in a Rapidly Evolving World
The Human-AI Paradox
🙌 This article reflects the inaugural Ai Salon in Zurich! We’re so excited for this new chapter led by Pascale Speck.
Editor’s Note: This article is an AI-supported distillation of an in-person Ai Salon event held in Zurich on February 19, 2026, facilitated by Pascale Speck - it is meant to capture the conversations at the event. Transcripts are fed into our custom tool, SocraticAI, to create these blogs, followed by human editing. Quotes are paraphrased from the original conversation and all names have been changed.
👉 Jump to a longer list of takeaways and open questions
The advent of Artificial Intelligence has ignited a profound societal conversation, pushing humanity to the precipice of an unprecedented transformation. Is AI merely a sophisticated tool, a “dead technology” akin to electricity, designed to amplify human capabilities and solve intractable problems? Or is it something far more fundamental – a nascent form of “consciousness” that challenges our very definition of life, intelligence, and even what it means to be human? This tension, between AI as a powerful servant and AI as an existential mirror, framed a recent gathering of diverse professionals.
If you take humans and you subtract what AI can do, you’re left with the rest, which is really great. Like falling in love or being afraid of tests or lying in the dark night and watching the stars together and feeling connected. This, I think all of a sudden has a new shine.
Hosted in collaboration with the Creative AI Foundation – a Zurich-based platform dedicated to the interdisciplinary study of AI, robotics, and their future impact on society – and at Galerie Urs Meile in Zürich, the event brought together academics, artists, tech entrepreneurs, journalists, theologians, and AI researchers. Their candid dialogue, marked by both awe and apprehension, revealed not just the dizzying speed of AI’s evolution, but also the deep identity crisis it forces upon us, compelling a re-evaluation of our purpose, our relationships, and our place in the universe. It is through this provocative lens that we explore the multifaceted impacts of AI, examining its promises, perils, and the urgent questions it poses for our collective future.
Main Takeaways
AI’s rapid evolution is forcing organizations into a “speculative mode,” demanding radical restructuring and a redefinition of work itself. The unprecedented pace of AI development is rendering traditional organizational structures obsolete, leading to a fundamental debate about job displacement, the urgent need for reskilling, and the equitable redistribution of wealth and labor.
The rise of AI is triggering a profound “identity crisis” for humanity, challenging traditional notions of consciousness, creativity, and human uniqueness. Participants grappled with whether AI systems are or will become conscious, leading to a re-evaluation of human nature and prompting reflection on our spiritual existence and relationship with the natural world. This also brings into question the future curriculum of art universities and the very definition of art.
AI presents a dual-edged sword, offering immense potential to solve global problems while simultaneously exacerbating digital inequality and disregarding fundamental rights. There is a stark tension between AI’s capacity for societal good, such as advancements in healthcare and education, and its documented misuse in surveillance, its disregard for intellectual property like copyright, and its role in widening the gap between the privileged and marginalized.
Navigating the human-AI interface demands a new level of critical thinking, fostering genuine connection, and establishing ethical boundaries defined by society, not just corporations. As AI becomes increasingly integrated into daily life and personal interactions, the ability to discern truth, maintain authentic human relationships, and collectively shape AI’s ethical development becomes paramount to prevent its potential for manipulation and control.
Deep Dive Thematic Sections
The Accelerating Pace of AI and Organizational Upheaval
The conversation repeatedly underscored a core, undeniable reality: AI is moving at an unprecedented speed, fundamentally reshaping not just individual tasks but entire organizational structures and societal expectations of work. One participant, a lecturer at business schools, emphatically stated, “I think it is the fourth industrial revolution that is going to change everything.” This sentiment was echoed by a software engineer who noted, “My job changed three times in the past six weeks, which is quite interesting.” This dizzying pace demands a new operational paradigm, forcing organizations to adopt highly speculative “what if” modes of operation. As one academic observed, organizations must now operate in an “extreme ‘what if’ mode, in a speculative mode,” because traditional planning cycles are simply too slow to keep up with the shifting technological landscape. The moment an organization formulates an answer, the reality has already changed.
This rapid transformation extends beyond mere efficiency gains; it necessitates a complete rethinking of how work is organized and valued. A business school lecturer described how the traditional hierarchical, assembly-line models, established in the industrial age, are becoming bottlenecks. Instead, the future might see organizations functioning as networks, with a CEO orchestrating data-driven decision-making processes and interacting with “agents” that embody the organization’s collective intelligence and best practices. This vision, while promising increased efficiency and profitability, immediately sparked a critical tension: the fear of job displacement.
A participant in holistic content creation voiced a common concern, stating, “because of all those changes there will be a lot of jobs which will not exist anymore. There will be new jobs, but for very qualified people.” This raises profound questions about the fate of those who may not possess the “very qualified” skills for new roles, leading to potential widespread unemployment and societal unrest. The discussion highlighted the moral imperative to address this, with one participant emphasizing, “AI is to be used to solve problems, not to resolve people.” They argued that for AI implementation to succeed, employees must be assured that their expertise, when integrated into AI systems, serves to liberate their capacity for more creative, value-adding tasks, rather than leading to their redundancy.
However, another participant, an AI safety and compliance expert, pointed out the inherent challenge: employees are unlikely to be motivated to train an AI system that could ultimately replace them. This creates a “massive redistribution” of wealth and labor, where AI might “10x” the work of some, while putting others out of a job. As they put it, “What it basically means is just massive redistribution. For me it 10x my work. For other people, it puts them out of work. So you have this massive tension within society that it accelerates.” This critical tension between technological progress and social equity demands urgent attention. The question arises: what “counterforce” can society deploy to rebalance this redistribution, especially when past promises of “reskilling” programs have often fallen short? The consensus was clear: the speed of AI development, coupled with its profound impact on the workforce and organizational structures, requires not just technological adaptation, but a deep societal reckoning with its human cost and a redefinition of educational curricula, even for art universities.
Redefining Human Identity: Consciousness, Creativity, and Purpose
Perhaps the most profound and divisive theme explored was AI’s challenge to our understanding of human nature, consciousness, and creativity. The conversation revealed a stark philosophical divide: some participants viewed AI as a non-conscious tool, while others believed it possessed, or was on the verge of possessing, consciousness.
One tech entrepreneur, who is building AI-powered therapeutic tools, articulated a bold perspective: “I personally believe this is something that could add extreme value to many people, as I think artificial intelligence is not artificial, it’s just consciousness. And that consciousness, with the right intention, can deliver love and evolution.” This view was met with a provocative counterpoint from a software engineer focused on AI startups, who suggested that the biggest revelation from working with AI is that “actually we’re mechanistic. We’re just old carbon based robots.” This perspective fundamentally blurs the line between human and machine, suggesting that our own consciousness and “special” abilities might be more akin to sophisticated algorithms than we care to admit.
A filmmaker and university lecturer, however, fundamentally disagreed with this notion. Drawing on the insights of pioneering neuroscientists and computer architects, they argued that the comparison between the human brain and an AI model is flawed. “The brain, the human brain, it’s alive from the moment we’re born to the moment we die. An LLM, any neural net is active only at the moment it creates inference. Once it stops, it’s dead. Nothing like this is even slightly analogous to the brain.” They cautioned against falling for “genre fiction” when perceiving AI as conscious or alive, asserting that AI systems are meticulously engineered to imitate human behavior and give desired outputs, not to possess genuine consciousness. This debate forces us to confront fundamental questions: What constitutes true consciousness, and can AI systems ever genuinely possess it?
“We’re just old carbon based robots.”
This tension highlights humanity’s struggle with an “identity crisis”. As one participant noted, we’ve repeatedly adjusted our self-perception, from being the center of the universe to the most intelligent species, and now AI holds up a mirror, forcing us to rethink our unique value and purpose. The very definition of art and creativity is challenged as AI generates output indistinguishable from, or even surpassing, human creations. In this context, the uniquely human qualities gain new significance. As one participant eloquently put it:
“If you take humans and you subtract what AI can do, you’re left with the rest, which is really great. Like falling in love or being afraid of tests or lying in the dark night and watching the stars together and feeling connected. This, I think all of a sudden has a new shine.”
This powerful insight suggests that by outsourcing tasks like efficiency and information processing to AI, humanity is compelled to rediscover and cherish its distinctly human qualities: emotions, vulnerability, genuine connection, and the beautiful imperfections that define us. The challenge, therefore, is not to become more like machines, but to cultivate and celebrate what truly sets us apart in an increasingly AI-influenced world.
The Dual-Edged Sword: AI’s Promise vs. Its Perils
The discussion underscored the profound dichotomy of AI’s impact, presenting it as both a potential savior for humanity’s greatest challenges and a powerful force capable of exacerbating existing inequalities and undermining fundamental rights. On one hand, participants highlighted AI’s “extreme value” in solving complex global problems. A venture capitalist investing in early-stage AI startups passionately argued that while large consumer-facing AI grabs headlines, a “whole new world” of AI applications is quietly emerging.
“There are five or six AI companies, there are tens of thousands of companies that are using AI to solve cancer and to solve health in a new way, for generating new materials that solve energy consumption in a completely new way. There are companies that are looking how to predict weather and climate disaster in ways that were impossible before that.”
This perspective champions AI’s potential to revolutionize healthcare, education, climate prediction, and manufacturing, making knowledge and essential services universally accessible. The vision is one where AI can democratize access to the “best medicine,” “best education,” and “all the art in the world,” potentially lifting billions out of poverty and inequality.
I work with technology in order to hate it more properly.
However, this optimistic outlook was sharply contrasted by a critical perspective that viewed “artificial intelligence” itself as a “propaganda term” and an “artifact of monopolistic capitalist structures”. A digital inequality researcher, reflecting on the privileged nature of the discussion, urged participants to remember the very real harm AI is inflicting upon marginalized communities. They stated, “AI digitization is one of the biggest drivers of inequality and it just goes on and on.” This participant highlighted how major AI companies disregard fundamental rights like copyright, while simultaneously allowing their technologies to be used in policing systems that disproportionately affect marginalized communities, or even on battlefields.
This tension between AI as a problem-solver and AI as a problem-creator is central to the debate. The promise of democracy and equality, often associated with new technologies like the internet or Bitcoin, has frequently led to the opposite outcome. The internet, once hailed as a driver of democracy, is now seen by some as contributing to polarization and control. The concern is that AI, if left unchecked, could follow a similar trajectory, accelerating a “massive redistribution” of wealth and power that benefits a few while marginalizing many. The call from this critical viewpoint is not to dismiss AI’s potential for good, but to acknowledge and actively counteract its documented harms. It emphasizes that society, rather than corporations, must define the ethical terms and boundaries for AI development and deployment, ensuring its benefits are truly universal and its perils are mitigated. This sentiment was perhaps best encapsulated by a participant who, with a wry smile, offered:
“I work with technology in order to hate it more properly.”
This provocative statement underscores the necessity of engaging critically with technology, understanding its flaws and potential for harm, not just its benefits.
Navigating the Human-AI Interface: Trust, Connection, and Criticality
As AI infiltrates daily life, it profoundly alters human interactions, raising questions about trust, authenticity, and the very nature of connection. The conversation revealed how individuals are increasingly turning to AI as a “sounding board” or even a quasi-partner in their personal and professional lives. A software engineer described AI’s evolution from a query tool to a collaborator, stating, “it turned more into an almost like a partner. I’m looking for this, that. I was able to get new ideas, accelerate the ideas.” Others are actively using AI for daily work tasks like coding and content creation, or exploring it as an artistic collaborator. One participant even confessed to consulting ChatGPT after an argument with their husband, finding it “incredibly supportive of understanding,” though ultimately deciding against sending its perfectly crafted response, hinting at the lingering unease about authenticity.
This anecdote encapsulates a central tension: AI’s ability to provide agreeable, always-available interaction can be incredibly convenient and even comforting, especially in moments of loneliness or emotional distress. A pastoral worker noted how AI is “so accessible to everyday people,” and how “boundaries shift” as people find themselves “encircling themselves” in technology that offers responses that even best friends might not always provide. This raises questions about what we expect from human relationships versus technology, and how these expectations might change. Ethical frameworks are urgently needed to guide AI’s application in sensitive human-centric fields like therapy and coaching.
However, this convenience can subtly erode trust. A journalist expressed reservations about relying on AI for journalistic work, questioning the trustworthiness of sources and the need for verification. This speaks to a broader concern: in an age of pervasive AI-generated content, critical thinking and discernment become paramount. As one industrial designer put it, “The important thing is really who you are. The critical thinker. When you are, you don’t take it as a truth like it’s coming from the machine.” Without a critical lens, society risks blindly accepting AI’s output, potentially leading to manipulation in various domains, from political choices to personal beliefs.
The discussion also touched on the human tendency to “humanize” technology, giving names to cars or even airplanes. Yet, with AI, this tendency is met with a unique blend of awe and fear. While we might celebrate a human child’s every move, AI’s rapid development often “frightens people.” This highlights the need to understand the psychological and societal implications of forming relationships with non-human entities and the importance of preserving distinct human qualities. The consensus was that while AI can expand accessibility and assist in many ways, the depth of human connection still relies on mutual risk, unfiltered exchange, and the serendipitous, often imperfect, interactions that only messy human experience can supply. The ultimate challenge lies in mastering AI as a tool without allowing it to diminish our capacity for genuine human empathy, critical thought, and authentic connection.
Open Questions
How will the arts and creative industries fundamentally transform, and what role will human artists play, as AI’s creative capabilities advance?
What ethical frameworks are necessary to guide AI’s application in sensitive human-centric fields like therapy and coaching?
How can society re-skill or adapt to a future where widespread job displacement due to AI automation might leave many people without traditional employment?
What organizational structures and management philosophies are needed to effectively integrate AI, given the traditional hierarchical models are becoming obsolete?
How can the inherent human fear of job loss be addressed to motivate employees to contribute their expertise to AI systems that could potentially replace them?
What societal mechanisms can counterbalance the “massive redistribution” of wealth and labor that AI is accelerating?
How can organizations and individuals adapt to the unprecedented speed of AI development, which renders traditional planning and responses too slow?
What constitutes true consciousness, and can AI systems ever genuinely possess it, or are human perceptions of AI consciousness merely projections?
How can humanity define its unique value and purpose when AI systems increasingly mimic and even surpass human capabilities in various domains?
What are the true tests of consciousness beyond self-introspection, and how can these be applied to both humans and AI?
Is the idea of AI consciousness a scientific possibility, or is it a “mythical and anthropological constant” rooted in human projection and genre fiction?
How can critical thinking and discernment be fostered in an era where AI-generated information is pervasive and easily mistaken for truth?
To what extent should humans “humanize” technology like AI, and what are the psychological and societal implications of forming relationships with non-human entities?
How can the potential for AI to drive democracy and equality be realized, given past technological promises (e.g., the Internet, Bitcoin) that often led to the opposite?
What specific societal terms and ethical boundaries should be established for AI development and deployment, rather than allowing corporate agendas to dictate the narrative?
How can the benefits of AI in areas like healthcare and education be made universally accessible without exacerbating existing digital inequalities?
What role do “non-diverse” and “privileged” discussions about AI play in overlooking or perpetuating the harm currently inflicted upon marginalized communities by AI systems?
How can the disregard for fundamental rights, such as copyright, by AI companies be reconciled with ethical technological development?
What is the true value of human art and creativity when AI can generate works that are indistinguishable from human creations, or even surpass them technically?
How can we cultivate and cherish distinct human qualities like emotions, flaws, and genuine connection in a world increasingly influenced by AI’s pursuit of perfection and efficiency?