HumanX AI Salon - Governance & Control
Editor's Note: Our second AI Salon at HumanX! This blog article is an AI-supported distillation of an in-person event held at HumanX 2025 in Las Vegas on March 11, 2025, facilitated by
, , and - it is meant to capture the conversations at the event. Quotes are paraphrased from the original conversation and all names have been changed.
👉 Jump to a longer list of takeaways and open questions
🎧 AI-generated podcast derived from this blog using NotebookLM
At the Crossroads: AI Governance in a World of Competing Interests
Introduction
As artificial intelligence advances at an unprecedented pace, a fundamental question looms large: how do we effectively govern a technology that is transforming our world faster than we can adapt our social, economic, and political structures? At a recent gathering of AI professionals, entrepreneurs, researchers, and ethicists, this question sparked a wide-ranging conversation about governance, control, and the future of AI.
The discussion revealed profound tensions between competing frameworks of governance—between those who see governance merely as risk mitigation and those who view it as a comprehensive decision-making system; between aligning AI with values versus objectives; between technical solutions and social frameworks. What became clear is that we stand at a critical juncture where the decisions made today about AI governance will shape not just the technology itself, but the very fabric of human society for generations to come.
Main Takeaways
Governance extends beyond risk mitigation and should be understood as a comprehensive framework for decision-making that weighs both benefits and risks
A fundamental tension exists between values and objectives in AI systems, with optimization for narrow objectives potentially undermining broader human values
AI development is concentrated among homogeneous groups with limited consideration of diverse perspectives, creating significant blind spots in how systems are designed and deployed
International governance faces major challenges due to competing national interests and different value systems, with rights-based and risk-based approaches in fundamental conflict
Market incentives currently reward rapid development over responsible governance, with metrics focused on competition and dominance rather than societal benefit
Social media's negative societal impacts offer a concerning precedent for AI, which is more complex and potentially more consequential
Business models that capitalize on user "intimacy" and personal data may pose particular governance challenges requiring specific attention
Defining Governance: Beyond Risk Mitigation
The conversation began with participants grappling with the very definition of AI governance. Rather than the narrow corporate understanding of governance as risk mitigation and compliance, several participants advocated for a broader conceptualization that encompasses the entire decision-making framework around AI.
"I've been thinking about governance like whether governance is just control," one participant with a background in AI governance research reflected. "Governance is so often associated in the enterprise with risk and compliance. Basically the idea that there's the innovation arm, the things pushing forward the benefits, and then governance is the thing that mitigates the risks... But that can't be the whole of governance because someone, something needs to make decisions which are whether the benefits outweigh the risks." The conclusion was that the reality is that the “governance function” within an enterprise is often just a subcomponent of the full governance structure. This broader understanding of governance was echoed by others who emphasized that meaningful governance must ultimately "impact the development and deployment of AI systems" rather than being mere "window dressing."
However, it wasn't clear what governance should drive towards in general. A critical tension emerged between governance based on values versus objectives. One participant with expertise in anthropology pushed back on an "objective"-based definition of governance: "objectives of organizations change and they change rapidly. Values tend to be a little bit more static. And I'm wondering if governance is about reflecting our values, not our objectives."
This distinction raises profound questions about whose values shape AI development. As one participant pointedly asked: "If you talk about values, do you mean Mark Zuckerberg's or Elon Musk's values? Even within the U.S., there's a bit of monopoly of five companies," each with different approaches and value systems. There was some worry that a focus on "values" really means "my values," which some saw as indefensible, or at least impractical, in a pluralistic world. Those skeptical of the values framing argued that governance aimed at ensuring AI systems meet their stated "objectives" is more tractable, and a necessary, if not sufficient, condition for AI systems serving our higher goals as a society.
These definitions ultimately matter: they highlight that governance is not a neutral technical process but a deeply value-laden enterprise that reflects power dynamics in society. This understanding set the stage for considering the various tensions and challenges in AI governance that followed.
The Tension Between Profit and Safety
The conversation repeatedly circled back to a fundamental tension at the heart of AI development: the drive for competitive advantage and profit often conflicts with the imperative for careful, responsible development. While many companies employ ethics teams and publicly commit to responsible AI, the reality on the ground tells a different story.
"I asked the lead AI product designer for a frontier model company what they're working on in AI, and she said immediately: 'to dominate,'" shared one participant. This sentiment was echoed by another who observed, "It's a pissing match... It's not like they needed more money. It was just this competition between the big powers."
This competitive dynamic creates a scenario where those who might prefer to move more cautiously feel compelled to keep pace. As one founder confided to a participant: "I'm building this, but I'm in that train too and I know where this is going. I don't know what else to do. This is happening if I do it or not."
The parallels to earlier technological revolutions were not lost on participants. Several pointed to social media as a cautionary tale, noting how profit-focused optimization for engagement led to significant societal harm that wasn't anticipated or addressed until it was deeply entrenched.
"If we could go back to 2006 and ban any business model that optimizes for engagement or attention, what a different world we would live in," one participant reflected. The parallel they raised would be "if in this moment, we were to ban any business model that capitalizes on intimacy" with AI.
One participant shared a particularly striking conversation with a founder developing attention-capturing technology: "I use the analogy of a rape drug because it kind of bypasses your critical thinking skills... you're playing a game and feeding auditorily these advertisements." When challenged about the ethics, the founder acknowledged the problem but felt powerless to change course: "This is the system of the capitalist system and this train is, we know where this is going, this is going to destruction... I'm building this, but I'm in that train too."
Here lies the governance challenge: how can we create systems that allow for innovation and competitive markets while ensuring responsible development? Some participants suggested that a third-party certification system, similar to B Corps or humanely-raised product labels, could provide market-based incentives for responsible AI. Others were less optimistic that voluntary measures would be sufficient without regulatory backing.
The Challenge of International Governance
The conversation highlighted the particular complexity of developing meaningful international governance for AI. While technological development is global, governance approaches and underlying values vary dramatically between regions and countries.
"There are stark differences in the Global North and Global South in how they perceive the fundamental values that they're trying to protect, and those are actually not in sync," noted one participant with experience in international governance. They explained the distinction between "risk-based approaches" common in Western countries and "rights-based approaches" more prevalent elsewhere, suggesting these differing philosophical foundations make international consensus difficult.
The geopolitical dimensions are impossible to ignore. One participant mentioned seeing "the Chinese ambassador urging the United States to like, partner with them on international governance" while American leadership was "saying exactly the opposite." This creates a complex international landscape where competitive dynamics between nations further complicate governance efforts.
The question emerges: in the absence of international agreements, what governance approaches are most likely to succeed? Some participants argued that we need to focus on corporate and national governance in the near term, while others maintained that without international frameworks, competitive pressures would inevitably push development toward less-regulated environments.
"Unfortunately, until we have international governance, I really struggle to see where any kind of governance is going to be truly effective," a participant reflected. "Because as we just saw, China's also creating very powerful models, and anything we do in the US they could also do, maybe with less safety."
Control and Responsibility in an AI-Driven World
As AI systems become more autonomous and complex, the question of control and responsibility becomes increasingly urgent. Participants wrestled with what meaningful human oversight looks like in a world where AI systems might make hundreds of interconnected decisions per minute.
Who's going to be responsible when things go wrong? You create AI and it does something in the world that was not intentional... are we going to start holding AI responsible for it?
"AI is going to make decisions, and it's not just one agent, it's several agents that will work together and make hundreds of decisions probably in a minute. How do you do traceability?" asked one participant with experience in regulated industries. The same participant noted that in medical device development and cybersecurity, clear lines of responsibility are required, yet AI systems make this increasingly difficult.
This leads to a fundamental governance question: "Who's going to be responsible when things go wrong? You create AI and it does something in the world that was not intentional... are we going to start holding AI responsible for it? The creators?"
The conversation revealed a spectrum of concerns regarding control, from near-term misuse to long-term existential risks. One participant who works with frontier AI companies described feeling caught "between two worlds" - one focused on immediate harms from misuse, particularly by state actors and criminals, and another concerned with potential loss of human control as systems become more advanced.
"We're now not talking in years, we're talking in days until things happen," warned one participant, expressing concern about self-replicating AI systems potentially leading to an intelligence explosion beyond human control. "It's the scenario of a fast takeoff" where AI systems could self-replicate until we reach "an artificial superintelligence explosion" that leads to "loss of human control."
Others focused on more immediate governance needs, arguing that businesses require practical frameworks for managing AI systems that make consequential decisions. "Businesses need to have governance on AI decisions... How do you trace that? Who is the person in the organization that's responsible?"
The discussion highlighted how current governance approaches may be inadequate for the pace and nature of AI advancement. Traditional regulatory models designed for slower-moving technologies may not be sufficient when "the ball is still spinning. We don't know exactly where it's going to land."
Need for Diverse Perspectives
A recurring theme throughout the conversation was how the homogeneous nature of AI development creates significant blind spots. Participants expressed concern that both technical development and governance discussions lack sufficient diversity of perspectives.
"One of the big flaws in AI development so far is how lopsided it's been. AI development is a white man's game... a wealthy men's game, and it's a California game," observed one participant, highlighting how the concentration of development among similar demographic groups limits consideration of diverse impacts.
This homogeneity extends beyond demographics to disciplinary backgrounds. Participants noted a disconnect between technical expertise and humanities perspectives that could help contextualize AI's social impacts. As one participant with a background in both areas explained: "There is this hubris that comes from 'I went to the best schools, I'm so smart'... not being open to diversity of understanding this human problem."
The participant elaborated on how this manifests in Silicon Valley: "I think we get lost. What we think in San Francisco matters... but what they care about elsewhere is different. The world they want to live in is different." This participant advocated for creating infrastructure to bring diverse perspectives to those building and governing AI systems.
Several participants argued that historical examples demonstrate the importance of diverse viewpoints when navigating technological transitions. "When unemployment rates got really high in Germany, we got the Nazi regime coming into power because of that crisis. But the US had the Great Depression and took a different route because there were different philosophical ideas there," noted one participant, suggesting that having a diversity of ideas available during moments of crisis shapes how societies respond.
The conversation also touched on how AI might affect employment patterns differently than previous technological revolutions. "This time the difference is going to be it's coming for the jobs of the doctors and lawyers and people who... have power position. They are more privileged. It's not going for the blue collar first, it's going for the top of the strata." This observation led to the hope that because the disruption may hit more privileged groups first, there might be a more powerful response than if it had started "from the bottom of the society."
The Potential for Pattern Breaks and Alternative Models
Several participants discussed the idea that meaningful governance change might require what one called a "pattern break"—a crisis significant enough to shift incentives but not so catastrophic as to be irreversible.
"My hope is something goes wrong in the near future, painful enough but not catastrophic enough to break the pattern so that everybody's like 'oh shit' and we need to do something differently," one participant reflected. "And by that, when the pattern breaks, there's going to be a period of plasticity and whatever ideas and models and approaches are available to pull from easily that's already there is going to take root and determine the trajectory from that point on."
This perspective informed discussions about alternative governance models that could be developed now, even if they aren't immediately adopted at scale. Several participants described projects they had worked on to create third-party certification systems for AI, similar to "B Corps" or "certified humane" product labels.
"Can we do something similar that is consumer facing, that people find value in?" asked one participant who had worked on a "certified humane AI" initiative. Another described an "AI Assurance Agency" project that would serve as "a system of checks and balances on the frontier AI companies."
While participants acknowledged the challenges in making such certification systems effective without regulatory backing, they suggested these approaches could help create market incentives for responsible AI development: "It would be pretty cool to see an organization's governance policy that actually incorporated... what you were doing with the residual human capital that you were creating by applying these tools."
Conclusion
The conversation revealed both profound concerns and cautious hopes about the future of AI governance. While participants acknowledged the significant challenges posed by competitive pressures, international tensions, and the rapid pace of technological change, many also expressed belief that meaningful governance remains possible.
As one participant noted, drawing historical parallels: "We see how disruptive technologies like the printing press created new armies, new causes, new evils, and new champions as well... We have less time and more concentration of powers in this technological development than we've ever seen before, but we have more enlightened people than we've ever had as well."
The path forward likely involves a combination of approaches: technical tools for risk management, corporate governance frameworks, national regulations, international coordination where possible, and perhaps most importantly, a broadening of who participates in AI development and governance discussions.
We have less time and more concentration of powers in this technological development than we've ever seen before, but we have more enlightened people than we've ever had as well.
What's clear is that the question of AI governance is not merely technical but deeply social and political. It involves fundamental questions about values, responsibility, and the kind of future we want to build. As AI continues to transform our world at an accelerating pace, finding answers to these questions becomes not just important but essential.
Notes from the Conversation
Governance is often viewed as risk mitigation rather than as a comprehensive decision-making framework that balances benefits and risks
There's a significant disconnect between AI safety conversations and profit-driven development
Different regions have fundamentally different approaches to AI governance (rights-based vs. risk-based approaches)
The concentration of AI development among homogeneous groups (described as "a white man's game" and "a California game") limits consideration of diverse perspectives
Current market incentives don't sufficiently reward responsible AI development and governance
International governance faces major challenges due to competing national interests and different value systems
The question of who bears responsibility when AI makes harmful decisions remains largely unresolved
There's growing concern about the risk of "loss of control" as AI systems become more advanced and potentially self-replicating
The pace of AI advancement is significantly outstripping governance mechanisms
Third-party certification (similar to B Corps) could be a potential model for responsible AI development
Business models that capitalize on "intimacy" with AI may pose particular risks
The unemployment impact from AI might hit white-collar jobs first, unlike previous technological revolutions
There's a philosophical tension between optimizing for values versus optimizing for objectives in AI systems
Social media's negative impacts provide a concerning precedent, as AI is more complex with potentially greater unintended consequences
Some participants see a potential crisis or "pattern break" as possibly necessary to create meaningful change in governance
Technical skills and humanities perspectives need better integration in AI development and governance
Current AI systems may be "numbing" us to potential risks by providing convenient services without transparency
The governance discussion often focuses on either existential risks or immediate harms, with difficulty bridging these perspectives
There's tension between the desire for transparency and the competitive pressures of AI development
Different stakeholders (founders, corporations, users, governments) have vastly different understandings of what "good governance" means
Open Questions
How can governance keep pace with the exponential advancement of AI technology?
What would meaningful international governance look like given geopolitical tensions and competing values?
Who should have authority to govern global AI development?
How can smaller companies and diverse perspectives be included in AI governance discussions?
What metrics should be used to evaluate "good" AI governance?
How can we incentivize companies to invest in governance when market pressures favor speed and functionality?
What is the appropriate role of regulation versus self-governance in the AI industry?
How do we address the potential impact of AI on employment without stifling innovation?
What responsibility do AI developers have for unintended consequences of their technology?
How can we ensure AI governance reflects diverse global values rather than just Western or Silicon Valley perspectives?
What happens when AI control moves further from direct human oversight (with agents and autonomous systems)?
How do we build governance that can adapt to rapidly changing capabilities?
Would consumers and businesses actually value third-party certification for AI ethics and safety?
Should we ban or restrict certain AI business models that might be particularly harmful?
What level of transparency should be required in AI development and deployment?
How do we determine who is liable when an AI system causes harm?
Can AI governance be effective if it doesn't involve a more diverse range of developers?
How might AI governance change under different political environments?
What will happen when AI systems make hundreds of decisions per minute - how can we trace responsibility?
Is it possible to create governance structures that balance innovation with appropriate safeguards?