Editor's Note: This blog article is a summary of an in-person event held in San Francisco on 2024-02-04.

In a world where artificial intelligence systems are rapidly advancing and permeating more aspects of our lives, a crucial question arises: How can we ensure that these systems develop the "common sense" necessary to navigate the complex social and cultural landscapes that we humans take for granted?
After a moment's reflection, it is clear that defining common sense itself is a non-trivial task. Is it a set of universally shared understandings, or is it more of a culturally and socially constructed phenomenon? What we call common sense often goes unexamined and unspoken until it is problematized - until someone or something lacks that assumed knowledge and behaves in unexpected ways.
This highlights a key challenge in imbuing AI with common sense: much of this knowledge is implicit rather than explicit, learned through lived experience and social interaction rather than formal instruction. Just as a child learns the subtle norms and expectations of their culture by growing up within it, an AI may need more than just textual training data to truly grasp the unwritten rules that govern human behavior.
However, some groups may have an advantage when it comes to encoding their worldviews and values into machine intelligences. For example, faith communities with extensive textual traditions, like Mormonism or Judaism, have spent centuries explicitly articulating their ethical frameworks and applying them to myriad life situations. For a secular AI developed by those operating from a more nebulous "scientific materialist" paradigm, the path to acquiring robust common sense may be less straightforward.
One potential solution is capturing the implicit knowledge transfer that occurs between parents and children in the early years of life. While not formally taught, much of what we consider "common sense" is absorbed through the countless interactions and experiences that shape a child's understanding of the world. From learning the names of objects to grasping the subtle social cues that govern behavior, children acquire a vast repository of knowledge long before they engage with formal education. This raises the question of whether AI systems could benefit from similar forms of experiential learning, perhaps by being exposed to the kinds of natural, contextually rich interactions that characterize child-rearing. Of course, the challenges of replicating such an immersive and open-ended learning process are formidable, and it remains an open question whether the key insights can be distilled and translated into a machine context. Nevertheless, the parallels between the development of common sense in children and the quest to instill it in AI systems are intriguing, and suggest that a deeper understanding of the former could potentially inform the latter.
But even with the most exhaustive training, we must grapple with the reality that "common sense" is far from universal. What is obvious and unquestioned for one individual or culture may be alien to another. As AI systems become more sophisticated and agentic, they will need to navigate not a single shared set of assumptions, but a complex patchwork of overlapping and sometimes contradictory worldviews.
Perhaps the solution lies in a multitude of specialized AI models, each one steeped in the common sense of a particular community. Rather than striving for a one-size-fits-all intelligence, we may see the emergence of myriad digital agents embodying the diversity of human experience - and hopefully interacting with us, and each other, with the nuance and contextual sensitivity that marks true understanding.
Ultimately, the quest to instill common sense in our artificial creations is not just a technical challenge, but a philosophical and ethical one. It demands that we shine a light on our own unexamined assumptions and grapple with the profound diversity of the human experience. Only by truly reckoning with the implicit fabric of our social world can we hope to weave it into the artificial minds that will increasingly shape our future.
The Talmud offers an intriguing model for how we might approach the edge cases and novel situations that will inevitably arise as AI systems engage with the world. Through effortful interpretation and thoughtful extrapolation of existing knowledge, we can slowly expand the circle of machine understanding. But we must also have the humility to recognize when something is truly outside the training set, and engage in open-ended dialogue to construct new shared realities.
The road ahead is long and winding, but the potential rewards are immense. By striving to create AI systems that embody not just raw intelligence, but wisdom, empathy, and contextual awareness, we open up new frontiers for human-machine collaboration. With care and foresight, we may yet craft digital companions to help us navigate an ever more complex world - and perhaps, in the process, learn to more fully appreciate the rich tapestry of human diversity.
Notes from the conversation
Common sense is not necessarily universal, but rather culturally and socially constructed.
Much of common sense knowledge is implicit rather than explicit, learned through lived experience and social interaction.
AI systems may need more than just textual training data to truly grasp the unwritten rules that govern human behavior.
Faith communities with extensive textual traditions may have an advantage in encoding their worldviews and values into AI.
Secular AI developed by those operating from a "scientific materialist" paradigm may have a less straightforward path to acquiring robust common sense.
As AI systems become more sophisticated and agentic, they will need to navigate a complex patchwork of overlapping and sometimes contradictory worldviews.
The solution to instilling common sense in AI may lie in creating specialized models, each steeped in the common sense of a particular community.
Instilling common sense in AI is not just a technical challenge, but a philosophical and ethical one that requires examining our own unexamined assumptions.
The Talmud offers a model for how we might approach edge cases and novel situations that arise as AI systems engage with the world.
We must have the humility to recognize when something is truly outside an AI's training set and engage in open-ended dialogue to construct new shared realities.
Creating AI systems with contextual awareness and empathy could open up new frontiers for human-machine collaboration.
Frustration could potentially be used as part of the loss function to train AI models to have more common sense.
Consent norms developed in certain subcultures, like the kink community, could inform how AI understands and communicates about complex social interactions.
The multiplicity of human cultures and values may lead to a variety of AI models reflecting different worldviews, rather than a single dominant model.
Autonomous vehicles serve as a test bed for exploring how AI systems can navigate complex social contexts with both explicit and implicit rules.
Human taste and judgment are likely to remain important even as AI progresses, at least for the foreseeable future.
We may want machines to develop a different type of "common sense" than humans, as they are distinct beings with different objectives.
The scope of what constitutes "common sense" may expand as we interact with more diverse types of intelligent agents that navigate the world differently than humans.
Applying AI to complex social interactions, like those involved in the game Diplomacy, requires moving beyond pure self-play to incorporate realistic human behaviors.
Developing true common sense in AI may be a tractable problem, but it is likely to be a lengthy and challenging process, akin to the development of autonomous vehicles.
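One idea from the notes above - using frustration as part of the loss function - can be sketched concretely. The following is a minimal, purely illustrative Python sketch, not an implementation from the conversation: the penalty weights, the `frustration_penalty` proxy (counting clarification requests and user corrections), and the names are all assumptions made for the example.

```python
# Hypothetical sketch: folding a "frustration" signal into a training objective.
# The proxy below (counting clarifications and corrections) and all weights
# are illustrative assumptions, not an established method.

def frustration_penalty(num_clarification_requests: int,
                        num_user_corrections: int) -> float:
    """Proxy for user frustration: more clarification requests and
    corrections in an interaction suggest the model missed implicit
    context that a person with 'common sense' would have picked up."""
    return 0.5 * num_clarification_requests + 1.0 * num_user_corrections

def total_loss(task_loss: float,
               num_clarification_requests: int,
               num_user_corrections: int,
               lam: float = 0.1) -> float:
    """Combined objective: the ordinary task loss plus a weighted
    frustration term, so interactions that 'just work' without
    friction score better than equally accurate but frustrating ones."""
    return task_loss + lam * frustration_penalty(
        num_clarification_requests, num_user_corrections)

# Two interactions with identical task loss: the frustrating one
# (3 clarifications, 2 corrections) is penalized more heavily.
smooth = total_loss(task_loss=1.0,
                    num_clarification_requests=0,
                    num_user_corrections=0)
frustrating = total_loss(task_loss=1.0,
                         num_clarification_requests=3,
                         num_user_corrections=2)
print(smooth, frustrating)
```

The design choice here is that frustration is treated as an observable interaction-level signal rather than something the model introspects, which keeps the objective simple but leaves open the hard question of measuring frustration reliably in practice.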
Questions
How can we effectively define and operationalize "common sense" in the context of AI systems?
What types of training data and approaches are needed to instill a robust understanding of social norms and cultural context in AI?
How can we balance the need for AI to understand and navigate diverse worldviews with the potential risks of creating highly specialized, siloed models?
What role should faith communities and other groups with well-articulated value systems play in shaping the development of AI ethics and common sense?
How can we ensure that the "common sense" embodied by AI systems reflects the diversity of human experiences and perspectives, rather than just the dominant culture?
What are the potential unintended consequences of creating AI systems that are deeply attuned to specific cultural contexts and norms?
How can we create AI systems that are able to recognize and adapt to novel situations that fall outside their training data?
What can we learn from the development of autonomous vehicles about the challenges and opportunities of instilling common sense in AI?
How might the increasing prevalence of AI systems with common sense reasoning capabilities impact human social interactions and cultural evolution?
What are the ethical implications of creating AI systems that can navigate complex social situations with a high degree of nuance and contextual awareness?
How can we strike a balance between leveraging the potential of AI to enhance human knowledge and capabilities, while also preserving the value of human judgment and expertise?
What role should human feedback and oversight play in the development and deployment of AI systems with common sense reasoning abilities?
How can we create AI systems that are transparent and accountable in their decision-making processes, particularly when navigating ambiguous or contentious social situations?
What are the potential risks and benefits of creating AI systems that can engage in open-ended dialogue and learning to expand their understanding of the world?
How might the development of common sense AI impact power dynamics and social inequalities, both within and between communities?
What are the implications of creating AI systems that can understand and engage with the implicit, unspoken aspects of human culture and communication?
How can we foster public trust and understanding of AI systems that are designed to navigate complex social and cultural contexts?
What interdisciplinary collaborations and perspectives are needed to effectively tackle the challenge of instilling common sense in AI?
How can we anticipate and mitigate potential misuses or abuses of AI systems with advanced social reasoning capabilities?
What long-term societal and cultural transformations might be catalyzed by the widespread adoption of AI with robust common sense understanding?