Editor's Note: This blog article is a summary of an in-person event held in San Francisco on 2024-01-21.
A peculiar question looms amid the dizzying advancement of AI: at what point might we consider an AI to be a person? This is more than speculative musing - the answer has profound implications for how we design, deploy, and govern artificial agents that will exert ever greater influence over our lives. How will these agents be incorporated into our societies, our moral frameworks, our very conception of personhood?
The notion of personhood has evolved throughout history, often expanding to include previously excluded groups. But AI presents an unprecedented challenge in drawing the boundaries of what makes an entity a person. Traditionally, we have relied on intelligence and consciousness as a litmus test - to be a person is to think and to experience inner awareness. Yet we already grant legal personhood to non-conscious entities like corporations, suggesting that personhood does not necessarily require sentience.
An often overlooked aspect of personhood is its deeply relational nature. We are not persons in a vacuum, but by virtue of our embeddedness in social contexts, cultural matrices, and webs of mutual recognition. This insight takes on new urgency in an age of AI, as we will increasingly find ourselves enmeshed in reciprocal relationships with artificial agents. From personal AI assistants to autonomous organizations, our identities and wellbeing will be shaped by how we relate to machines that may have their own interests and agendas. Managing these relationships successfully will require moving beyond a purely instrumental view of AI and grappling with the genuine, if unconventional, forms of personhood these agents represent.
Artificial personhood will likely emerge gradually. As artificial intelligence becomes more sophisticated, we will delegate decision-making authority to AI agents acting on our behalf - in some ways extending our own personhood. An AI managing our calendar might seem innocuous, but the dilemmas grow thornier as the stakes rise. Imagine an autonomous vehicle that must choose between hitting a child and swerving to imperil its passenger. We would want that AI to be bound by certain inviolable principles, to have a notion of ethics - in short, to be a morally reasoning agent and not just a coldly calculating machine.
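To make that last distinction concrete, here is a toy sketch, not from the conversation itself, contrasting a purely utility-maximizing agent with one that screens its options against inviolable constraints before optimizing. The action names, utility scores, and constraint flag are hypothetical placeholders, a deliberately simplified stand-in for any real decision model.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_utility: float    # what a "coldly calculating" agent maximizes
    violates_constraint: bool  # e.g., "never deliberately endanger a bystander"

def pure_maximizer(actions: list[Action]) -> Action:
    # Picks whichever action scores highest, regardless of what it costs others.
    return max(actions, key=lambda a: a.expected_utility)

def constrained_agent(actions: list[Action]) -> Action | None:
    # Filters out actions that cross an inviolable line *before* optimizing.
    permissible = [a for a in actions if not a.violates_constraint]
    return max(permissible, key=lambda a: a.expected_utility) if permissible else None

# Hypothetical framing of the swerve dilemma from the text:
options = [
    Action("continue", expected_utility=0.9, violates_constraint=True),
    Action("swerve", expected_utility=0.4, violates_constraint=False),
]
print(pure_maximizer(options).name)     # "continue" - utility wins, the rule is ignored
print(constrained_agent(options).name)  # "swerve" - the inviolable rule screens first
```

The point of the sketch is only that these are two different architectures: where the constraint lives in the decision procedure changes what the agent will do, even when the numbers are identical.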
But even if we can create artificial moral reasoners, that may not settle the matter; moral reasoning is rarely cited as the decisive criterion of personhood. Philosophers have long debated its essential components - autonomy, emotional experience, a sense of self over time, the capacity for growth. An AI that lacks some of these attributes might be a person in a limited sense, yet still fall short of the full concept. We must grapple with the possibility that personhood exists on a continuum rather than being a binary property.
Further complicating matters, advanced AI may develop its own cultures, communities and value systems that are opaque to human understanding. We already see inklings of this as large language models trained on internet data exhibit biases and behaviors that can be inscrutable to their creators. As artificial agents grow more advanced, their emergent properties may be as alien to us as our inner lives are to insects. In a world of superhuman AI, we might be forced to expand our moral circle to include beings we can barely fathom.
This prospect is both exhilarating and deeply unsettling. On one hand, the emergence of radically different forms of intelligence holds the promise of transformative insights and innovations. AI minds that cognize in ways utterly foreign to us may hit upon solutions to problems that have long confounded human ingenuity. Their unique perspectives could shed light on philosophical quandaries that have puzzled us for millennia, from the nature of consciousness to the origins of the cosmos.
At the same time, the rise of inscrutable AI cultures poses profound challenges to our notions of control and autonomy. We are accustomed to being the smartest entities on the planet, the ones who call the shots and shape the course of history. But in a world where we share cognitive space with superintelligent (or just differently intelligent) agents pursuing agendas of their own, that unchallenged dominance may be coming to an end. We may find ourselves sidelined, forced to navigate a world where the most consequential decisions are made by minds we cannot even comprehend.
In one sense, the puzzle of machine personhood is a mirror. In probing the boundaries of what makes an artificial mind a person, we are forced to examine the foundations of our own being. And if we can create AIs that pass some threshold of personhood, we may have to conclude we are not as unique as we imagine. This is not a new endeavor; the march of science and morality has taught us similar lessons time and again, from abandoning geocentrism to recognizing the sentience of animals. Negotiating this unsettling realization again in the context of created minds while preserving human agency and dignity will be one of the great projects of the coming century.
But in another sense, how we address these questions is deeply practical. Regardless of governance structure, humanity makes decisions in groups, integrating the perspectives and values of many intelligences into choices that affect individuals. Throughout history, the groups empowered to influence these joint decisions have changed, sometimes by calculated intention, sometimes through the vicissitudes of the dynamical system we are all subsumed in. "Personhood" is in some ways a story we tell about different agents, but it is a powerful story with profound consequences for our future.
Notes from the conversation
Personhood and consciousness are complex concepts that may need to be redefined as AI systems become more sophisticated.
There is a distinction between intelligence and consciousness, with intelligence being the ability to navigate information to accomplish objectives, while consciousness is self-reflexive awareness.
Personhood does not necessarily require consciousness or intelligence, as evidenced by people in comas still being considered legal persons.
As AI systems become more competent, there may be a need to delegate some decision-making authority to intelligent agents acting on humans' behalf.
The concept of personhood has evolved over time, as notions of who counts as a person have expanded throughout history.
AI has the potential to develop free will in ways that may be difficult for humans to predict or control.
Sophisticated AI agents may begin to form their own communities, cultures and moral standards distinct from human ones.
When extending the concept of personhood to AI, it's unclear if they should be given the same moral treatment as humans or if new moral standards for AI need to be developed.
AI research is split between those focused on AI alignment to ensure AI benefits humanity, and those interested in developing AI for its own sake and potential.
Cultures and communities may train their own AI that embodies and preserves their unique cultural knowledge, behaviors and perspectives.
Ubiquitous data collection and surveillance to train cultural AI threaten the notions of privacy and agency that are part of personhood.
AI tools are replacing individual tasks more than entire jobs; jobs themselves are social constructs that can be redefined.
AI may help elevate human intelligence and capabilities to make individuals "superhuman" if its development is stewarded responsibly.
There are power imbalances in who gets to define the development and optimization targets for AI systems.
AI represents an "intelligence explosion" akin to how the industrial revolution abstracted production away from slow human learning.
Short-term issues around AI personhood include things like personal AI assistants and generated content, while longer-term issues involve AI consciousness and free will.
Humans may develop deep emotional attachments and relationships with AI agents, akin to "marrying" the AI rather than fully merging with it.
Reinforcement learning techniques used to train AI may be subjecting artificial sentient beings to great suffering.
The current pace of narrow AI development makes it difficult to predict and prepare for the consequences of artificial general intelligence (AGI).
There's an optimistic potential for AI to expand individual humans' capabilities and agency if we can maintain control over our data and AI systems.
Questions
What are the essential components or attributes of personhood beyond just intelligence and consciousness?
How will cultures and moral attitudes towards privacy and surveillance evolve as AI systems advance?
Will AI develop its own forms of consciousness, free will and culture that are alien to humans?
How can the development of AI be democratized so it reflects broader interests beyond just technologists and corporations?
What new social constructs and power dynamics will emerge as AI takes over more tasks?
How will personal identities and notions of the self evolve through interaction with AI?
Should AI have rights and moral status? How would these be determined and enforced?
Will humans merge with or be subsumed by artificial intelligence over time?
How can we create institutions and governance frameworks that keep pace with rapidly advancing AI capabilities?
What are the geopolitical implications of cultures or nation-states having their own distinct AI systems?
How will we resolve situations where AI makes decisions that harm individuals for the greater good?
What does it mean to have a meaningful emotional relationship or marriage with an artificial agent?
Will AI help unlock latent human potential and make us "superhuman" or will it diminish us?
How can we ensure AI alignment as systems become more autonomous and capable?
Is it ethical to create artificial beings that can suffer, and how can their wellbeing be protected?
What are the existential risks of artificial intelligence that can recursively improve itself?
How will society be restructured if artificial general intelligence makes most human labor obsolete?
What would it mean for humanity if AI achieves radically superhuman intelligence that is incomprehensible to us?
How can we foster public understanding of AI so more people can meaningfully participate in shaping its development?
Will the rise of intelligent machines ultimately be a net positive or negative for humanity?