Editor's Note: This blog article is an AI-supported distillation of an in-person event held in San Francisco on 2024-06-23, facilitated by an Ai Salon participant. It does not reflect the views of the facilitators, writer, or the Ai Salon - it is meant to capture the conversations at the event. Quotes are paraphrased from the original conversation.
A recent essay by Leopold Aschenbrenner has reignited discussions about AGI timelines, projecting that AGI could be achieved as soon as 2027. This bold claim has elicited a range of responses from experts in the field, highlighting the complex and often contentious nature of AI progress and its implications for society. The Ai Salon recently held a discussion inspired by these ideas.
Top line takeaways
The definition and measurement of artificial general intelligence (AGI) remain contentious and poorly defined.
There is significant debate over the feasibility and timeline of achieving AGI, with some experts projecting rapid progress while others urge caution.
The geopolitical implications of AI development, particularly the perceived race between the US and China, are shaping both the narrative and policy discussions around AI.
👉 Jump to a longer list of takeaways and open questions
The Elusive Definition of Intelligence
One of the central tensions in the AGI debate is the very definition of intelligence itself. As one participant in our discussion pointed out,
I don't think you can write a paper like that and skip the definition of what you mean by intelligence. And I think that he has reduced intelligence to mathematics, which is a closed system, and he equates intelligence with skills.
This critique gets at the heart of a fundamental issue in AI research: the tendency to define intelligence narrowly in terms of specific benchmarks or tasks. While current AI systems excel at certain cognitive tasks, they still lack the general problem-solving abilities and contextual understanding that characterize human intelligence.
Another participant highlighted the multifaceted nature of human intelligence:
We study something called multiple intelligences in humans, and there are seven to eight different intelligences. And, you know, some of it is kind of aesthetic, some of it is based on your... How you work with your hands. Some of it is art related, language related, numeric.
This broader conception of intelligence raises questions about whether current AI approaches, focused primarily on processing and generating text or solving well-defined problems, can truly lead to AGI. It also suggests that our measures of AI progress may be too narrow, failing to capture important aspects of intelligence such as emotional understanding, creativity, and adaptability (see our writeup on a previous salon on Common Sense for more on this theme).
The Feasibility and Timeline of AGI
The essay's prediction of AGI by 2027 was met with skepticism by many in our discussion. While acknowledging the impressive progress in AI capabilities, participants questioned whether simply scaling up current approaches would lead to AGI.
To begin, the essay's ambitious projections for data and computational requirements drew significant skepticism. Participants questioned the feasibility of building the massive infrastructure needed for "trillion-dollar compute clusters" within the suggested timeframe, especially given regulatory and environmental constraints in democracies. As one participant noted:
We're going to say, we will declare we will build hundreds of gigawatts of power in a relatively compressed time frame. That was just entirely... fantasy to assume it could be built. In our democracy, there are places where you could imagine things being built much more quickly and where they are built much more quickly today.
This highlights the practical challenges of scaling AI capabilities to the levels proposed in the essay. Leopold does address counterarguments here, however. For instance, he argues that the Chinese government has added the equivalent of the entire U.S. energy capacity over the last 10 years. Fundamentally, his argument rests on how consequential AI would be for global conflict, which gives nations a strong incentive to marshal unprecedented resources. This wasn't particularly convincing to the participants, however, and the point that democracies in particular would find it difficult to move that quickly went unchallenged.
Additionally, concerns were raised about potential data scarcity, with some research suggesting high-quality training data could run out within four to five years if current trends continue. Leopold discusses how research into synthetic data or reinforcement learning and self-play could ultimately create the necessary data. Still, solving the data bottleneck remains an open research problem and is therefore difficult to fully predict. These issues underscore the complex interplay between technological advancement, resource availability, and societal constraints in the pursuit of AGI.
Separate from scaling, Leopold discusses the importance of “unhobbling” - the general point that we will be able to develop supportive scaffolding around the models (like programming environments, internet search, agentic loops, etc.) that will unlock gains beyond mere scaling. Many have written about this (including yours truly) and believe that even if the capabilities of foundation models were frozen today, we would still see incredible gains in productivity and performance over the next decade due to these kinds of advancements. A minimal sketch of what one such scaffold might look like is shown below.
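To make “unhobbling” concrete, here is a toy sketch of an agentic loop wrapped around a frozen model. The `call_model` function and the `run_python` tool are hypothetical placeholders rather than any specific vendor's API; the point is only that the extra capability comes from the scaffold (tool use, iteration, feedback), not from changing the underlying model.

```python
# Toy sketch of "unhobbling": the base model is treated as a frozen
# text-in/text-out function, and extra capability comes from the scaffold
# around it (tools, iteration, feedback). `call_model` is a hypothetical
# stand-in for whatever LLM API you use.

def call_model(prompt: str) -> str:
    """Placeholder for a call to a frozen foundation model."""
    raise NotImplementedError("Wire this up to your model API of choice.")

def run_python(code: str) -> str:
    """Toy 'tool': execute model-written code and return its output or error."""
    import io, contextlib
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # NOTE: sandbox this in any real setting
        return buffer.getvalue()
    except Exception as exc:
        return f"ERROR: {exc}"

def agent_loop(task: str, max_steps: int = 5) -> str:
    """Iterate: ask the model for code, run it, feed the result back, repeat."""
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        code = call_model(history + "\nWrite Python code for the next step.")
        result = run_python(code)
        history += f"\n--- attempt ---\n{code}\n--- output ---\n{result}\n"
        if "ERROR" not in result:
            return result  # crude success check; real scaffolds verify more carefully
    return history
```

Even this toy version hints at why some participants saw unhobbling as harder than it looks: deciding when the loop has actually succeeded, sandboxing the tools, and recovering from bad intermediate steps are exactly the kinds of unanticipated issues raised below.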
How fast will these advancements arrive, however? Leopold treats unhobbling as a fairly easy set of features to develop, especially within the digital world necessary for AI research (the physical world and robotics are another story). Some participants did not share this intuition, however, and see “unhobbling” as a tricky problem with many unanticipated issues. As one noted:
I don't dispute that... you could continue scaling the models and they're just going to improve. I think the kind of formula he lays out, which is based on more compute, that's plausible. More data is probably also plausible for the next 5-10 years. But these kind of unhobbling gains, I think that's, you know, that is harder to predict.
This is the heart of the AGI debate. While some believe scaling is all you need (and they have a decent amount of evidence over the last few years to bolster their claims!), others insist that unhobbling will be a significant challenge. Some participants went further, claiming that certain cognitive domains may require algorithmic advances beyond transformers (see the ARC challenge as an example of this kind of argument and a corresponding set of tasks):
There's still a lot of evidence that some of these AI systems, despite scaling, struggle on some of those tasks. So I think I agree with the overall message [that scaling will bring greater reasoning capability], but I think the essay treats it as a foregone conclusion.
Geopolitical Implications and the AI Race
The essay's framing of AI development as a race between the US and China sparked significant discussion and debate among the participants. This geopolitical narrative, while influential in shaping policy discussions, was met with considerable skepticism and criticism.
One participant, who had co-authored a paper on the topic, argued against exaggerating China's AI capabilities. Their research, which examined 26 Chinese large language models (LLMs), concluded that concerns about model weight leaks significantly advancing China's AGI efforts were likely overblown. However, some Chinese models, notably Qwen, are topping leaderboards (though Hugging Face only assesses openly available models, so it doesn't evaluate models like GPT-4o or Claude 3.5). This highlights the importance of grounding geopolitical narratives in empirical evidence rather than speculation or fear.
The motivations behind promoting the "China threat" narrative in AI development were also scrutinized. Some participants suggested that this framing is sometimes used strategically by organizations to argue against certain forms of AI regulation, potentially prioritizing rapid development over safety and ethical considerations. While that didn't seem to be the tenor of Leopold's essay, it is possible that the information informing his perspective was influenced by these goals. The political context in the US was also seen as a potential factor shaping the essay's tone. Some speculated that the focus on national security and the China threat might be anticipating a potential change in US administration, crafting arguments that would resonate with a different political landscape.
The geopolitical framing was also noted to have real-world consequences for individuals. Reports of increased scrutiny and background checks for Chinese nationals working at AI companies underscore the human impact of these narratives. Several participants emphasized the need for a more global perspective on AI development. The essay's focus on the US and China was seen as potentially overlooking important developments and perspectives from other parts of the world, which could lead to different trajectories of AI progress.
Overall, while acknowledging the geopolitical dimensions of AI development, many participants were skeptical of the "AI race" framing. They emphasized the need for a more nuanced, globally-informed approach to AI progress and governance, one that balances innovation with ethical considerations and doesn't rush development at the expense of safety and careful deliberation.
Aside: How directly AI may support war was a topic in a previous Ai Salon discussion on National Power & War. Unsurprisingly, in that discussion participants outlined many areas where AI could impact war, both directly in combat and through strategic decision making and surveillance.
Automating AI research
At the heart of Leopold's argument is the claim that AI research is one of the easier jobs to automate and that automating it will be the most consequential step for solving many of the problems discussed, like data limits and unhobbling. The argument is that scaling (and some unhobbling) will allow the creation of automated researchers at the level of good human researchers by 2027, and this will create the flywheel for self-improvement. Robotics and other advancements can then follow.
In keeping with the general skeptical tenor of the conversation, participants weren't as convinced here. They likened science to art and argued that AI may not be able to do creative research:
Part of the work of any scientist is thinking creatively about new ways of doing things. You can make the same claim about art as well. Like, what made Picasso special was that he invented a completely new way of representing something. It doesn't seem to me like these models could do that because their understanding of the world is fundamentally based on what they have seen.
This topic wasn't discussed in great detail, so the conversation never fully grappled with one of the more consequential parts of Leopold's argument. We assume that AI research is being automated as fast as possible within large AI companies, so we will just have to wait and see!
Notes from the conversation
The essay lacks clear definitions for key concepts like AGI and intelligence, relying on narrow conceptions focused on skills and benchmarks.
There's debate over whether current AI progress will continue linearly or face unpredictable breakthroughs and obstacles.
The essay's timeline predictions (e.g. AGI by 2027) are seen as overconfident by many participants.
Concerns were raised about AI systems' inherent biases based on training data and potential for discrimination.
The feasibility of trillion-dollar compute clusters and massive energy requirements was questioned.
There's uncertainty around whether AI can truly be creative or just recombine existing ideas in novel ways.
The economic and social impacts of AI replacing jobs are a major concern, especially the transition period.
Geopolitical competition, especially between the US and China, is shaping the AI race narrative.
The essay was critiqued for not adequately addressing downstream societal implications of advanced AI.
Alignment of AI with human values is complex, raising questions of whose values and how to implement them.
Current AI capabilities are seen as impressive but still limited compared to general human intelligence.
There are doubts about whether simply scaling up current approaches will lead to AGI.
The potential for AI to exacerbate existing inequalities and power imbalances was noted.
Participants stressed the importance of interdisciplinary perspectives in shaping AI development and policy.
The role of regulation in AI development was debated, weighing innovation against safety concerns.
Questions were raised about data scarcity and whether synthetic data could address this limitation.
The essay's focus on the US and China was seen as potentially overlooking important global perspectives.
Concerns about AI safety and existential risk were mentioned but not deeply explored.
The group discussed whether current benchmarks and tests adequately capture intelligence and capabilities.
There was recognition of both the potential benefits and risks of advanced AI systems.
Questions
How do we define AGI in a way that captures the full complexity of human intelligence?
Can we predict AI progress accurately, or are we prone to both hype and unfounded fears?
How do we balance the potential benefits of AI advancement with the risks and societal disruptions?
Can AI systems truly be aligned with human values, and if so, whose values should take precedence?
How can we address the inherent biases in AI systems trained on human-generated data?
What are the long-term implications of AI potentially replacing large segments of the workforce?
How do we ensure global equity in AI development and prevent it from exacerbating existing inequalities?
Can current AI approaches lead to true creativity and innovation, or are they fundamentally limited?
How do we balance the need for AI regulation with fostering innovation and technological progress?
What are the geopolitical implications of the AI race, and how can we prevent it from escalating conflicts?
How do we ensure democratic oversight and public input in the development of transformative AI systems?
Can we develop AI systems that are truly interpretable and accountable for their decisions?
How do we address the potential concentration of power in the hands of those controlling advanced AI?
What are the ethical implications of creating AI systems that may surpass human intelligence?
How do we prepare society for the potential rapid changes brought about by advanced AI?
Can we develop robust methods for testing and verifying the safety of increasingly complex AI systems?
How do we balance open collaboration in AI research with concerns about security and misuse?
What are the philosophical implications of AI systems that can match or exceed human cognitive abilities?
How do we ensure that AI development considers long-term consequences beyond immediate applications?
Can we create a global governance framework for AI that is both effective and widely accepted?