Editor's Note:
Welcome to the first edition of The Ai Salon Podcast!
For the first time, The Ai Salon is bringing the magic of our in-person conversations to our online community. Here's how it works: the Ai Salon podcast dives deeper into select in-person salons by bringing a few of the participants together with one of our hosts for a follow-up conversation. We hope this will open our events to those who couldn't be there in person.
In this episode, Ai Salon facilitator Cecilia Callas interviews entrepreneur Anastasia Miron and AI Policy Analysts Julia Morris and Livia Morris, all of whom were participants in our recent Ai Salon about Authenticity.
Enjoy the conversation, or read the accompanying article below, an AI-supported distillation of the in-person event held in San Francisco on March 17, 2025, facilitated by Cecilia Callas. It is meant to capture the conversations at the event. Quotes are paraphrased from the original conversation, and all names have been changed.
👉 Jump to a longer list of takeaways and open questions
Authentic in an Age of AI: Navigating Identity, Language, and Digital Twins
At San Francisco's Shack 15, a diverse group of technologists, artists, educators, and researchers gathers to explore a deceptively simple question: What does it mean to be authentic in an age of AI? As ChatGPT writes our emails, AI agents act on our behalf, and digital twins promise to clone our identities, the boundaries between human expression and artificial simulation blur. At stake is not merely how we use technology, but how technology might reshape our understanding of ourselves, our relationships, and our unique human capacities.
"Could I make a digital twin of me? But like, oh, my rockstar twin, the rock star version of me, you know? And then that blurs the line, is that really me?" one participant wonders, capturing the central tension that would animate the discussion. As AI becomes increasingly embedded in our daily communication and self-expression, we face profound questions about agency, representation, and the very nature of human uniqueness.
Main Takeaways
Authenticity functions as a feeling of alignment with one's values rather than a fixed state of being, allowing for different expressions of self across contexts
Cultural differences significantly shape how we conceptualize and value authenticity, with some cultures (like American) explicitly emphasizing it while others take it for granted
Digital twins raise critical questions about agency, responsibility, and the ongoing evolution of human identity that regulations have yet to address
AI tools risk homogenizing human expression while simultaneously offering the potential to enhance individual capabilities and communication
The optimization that AI enables may eliminate valuable discomfort and "wrong turns" that contribute to human growth and empathy
Transparency about AI use may provide a path forward for balancing technological benefits with authentic communication
Who controls AI systems and personal data is central to discussions of authenticity, raising concerns about corporate interests and individual sovereignty
The Cultural and Personal Nature of Authenticity
What does it mean to be authentic? The question itself reveals cultural assumptions worth examining. As participants shared their perspectives, a fascinating cultural divide emerged.
"Never ever have I heard about 'be authentic' at work. And when I came to the US, that apparently is a huge thing," noted one participant who had worked across Russia, Norway, Sweden, Germany, and Finland. "In Norway, it was like, it's just you do you, like, just that's it. Like, you don't think about it."
This observation sparked recognition that authenticity itself is culturally coded, particularly as an American preoccupation aligned with individualism. "The American culture is super high on individualism. So no wonder the American culture is also very obsessed now about keeping authentic in line with individualism," another participant reflected.
Rather than a fixed state, many participants came to view authenticity as a continuous process or feeling. "To me, being authentic is to allow myself to be curious and continuous exploration of myself and only then allow somebody else. So it's a tricky one because it's a continuous play," shared one participant. Another suggested, "I think authenticity is more of, like, a feeling of alignment and acceptance with those versions of who I am."
The group also distinguished between authenticity and transparency. "I think there's a difference between authenticity and transparency," one participant noted. "A lot of what comes up is our values, and I think authenticity is really rooted in understanding those values. It's a social norm to kind of code switch depending on the environment that you're in and your comfort level and your relationship to the people around you."
This nuanced understanding challenges simplistic notions of authenticity as complete self-disclosure in all contexts. Instead, participants recognized that we present different facets of ourselves depending on the situation—at work, with family, or among friends—without necessarily being inauthentic. This contextual nature of identity becomes particularly relevant when discussing how AI might represent us in different settings.
The conversation also touched on inauthenticity, with many defining it as manipulating how one presents oneself to further a particular agenda. "Inauthentic is [when] someone's expression or behavior is different or modified to further agenda or goal," one participant suggested. This framing helped clarify what's at stake with AI—the risk that it might enable unprecedented levels of strategic self-presentation that cross the line into manipulation.
Digital Twins and the Future of Identity
The concept of digital twins—AI versions of ourselves that can act on our behalf—emerged as a central focus of the discussion, raising profound questions about identity, agency, and representation of self.
"I think AI agents are coming, and they're coming very, very fast," one AI researcher remarked. "The ability to clone yourself in different ways is right around the corner. And I'm so curious how people feel about that and your own interactions and what it means to maybe build an agent of yourself and how easy it is to do that now or will be."
For many, the prospect of creating a digital twin raises immediate concerns about authenticity and representation. "When I thought about cloning my voice...would I feel violated, even though it's not my real voice, it's just a clone. But it still feels so weird," shared a communications professional with a muscular condition affecting her voice. Despite being offered the opportunity to recreate her "healthy voice" through AI, she decided against it, unsure if it would truly feel like her.
The discussion revealed concerns about agency and growth when using digital twins. As one participant wondered, "Am I creating some sort of the biggest competitor of myself who would be the better version of myself?" This question highlights tensions around identity—if your digital twin becomes "better" than you, what does that mean for your sense of self?
A leadership coach raised questions about how digital twins would account for human evolution and growth: "I'm really fascinated by this notion of a digital twin and wondering how the programming of that would work and how it would account for human agency and ongoing evolution and growth. Because if it's really based on programming of your past decisions, past behavior... I'd like to think that I'm constantly learning and growing."
The embodied nature of human experience emerged as a critical limitation for digital twins. "My digital twin has no way of knowing what pain is like," one participant noted, highlighting how physical experiences fundamentally shape decision-making in ways AI cannot replicate. This embodiment gap raises questions about whether a digital twin could ever truly represent the full complexity of a human being.
Legal and ethical considerations also surfaced: "From an agency perspective, you are legally responsible for your agent," one participant with a legal background pointed out. "Even if there was a .002% chance that my AI agent would do something bad and I would be on the hook for it, I don't know if I would take that chance."
The question of corporate control loomed large as well. When discussing a tech investor's vision of AI agents deeply embedded in our daily lives, one participant noted the glaring omission: "Your twin is owned by a corporation—they don't mention that in there." This observation highlighted concerns about who ultimately controls these digital representations of ourselves.
AI's Impact on Human Expression and Language
Language emerged as a crucial dimension in discussions of authenticity and AI, with participants expressing concern about how AI might reshape human expression and communication.
"We all speak the same language... when you write something, in the end, it's because it's ChatGPT. We use shortcuts. It all sounds the same," observed one participant, highlighting fears about homogenization of expression. This beige-ification of language, where AI tools might create a flattening effect on human communication, was a recurring concern.
The relationship between language and emotional experience provoked particular interest. One participant noted research suggesting that "if your language doesn't have the words for certain emotions, not only will you not express them, but...you will not feel it." Another expanded: "There are words that we don't know, and the brain is just blank... processing capability is much bigger because we speak more languages and we feel certain emotions."
This observation raises profound questions about how AI-mediated language might reshape emotional experience, particularly as younger generations grow up with AI as a primary reference point for communication. "Her only frame of reference is ChatGPT," one participant shared about their intern. "And I wonder if that generation comes up and they all speak the same because they all have the same frame of reference."
The cultural transmission of communication patterns has always existed—one participant noted how dating scripts in American culture are learned from media—but AI potentially accelerates and narrows these patterns. "Is AI actually giving us something that will stand out or will it do the same thing that we have been doing all the time?" one participant wondered.
For creative professionals, AI raises complex questions about voice and originality. "I'm also a writer... I used to use em dashes all the time, it was like how I expressed myself... And now I feel like people think that I'm using AI if I'm using em dashes, but I'm gonna keep using them because that's my voice," shared one participant. This highlights the strange new reality where human originality might be mistaken for AI generation, creating pressure to either conform to or deliberately diverge from AI-like expression.
The discussion also touched on how AI could reinforce beauty standards, with one participant noting how "all those models, influencers, already look the same because of plastic surgery... We start looking the same." AI potentially extends this homogenization from physical appearance to written and verbal expression.
Yet some participants saw potential benefits for neurodivergent individuals. One participant with dyslexia shared: "I'm highly dyslexic and I use ChatGPT to write... At 40, I finally understood how writing works because of the tool." This highlights how AI tools can be genuinely empowering for those who face specific challenges with traditional forms of expression.
The Tension Between Optimization and Human Growth
A fundamental tension emerged between AI's drive toward optimization and the value of human imperfection, mistakes, and growth through discomfort.
"I think AI and tech in general is always...driving you towards a more efficient, more useful, like, quicker optimized state of being, which eliminates moments of discomfort in life. Which I think...discomfort builds empathy," observed one participant. This insight gets to the heart of what might be lost in a world increasingly mediated by AI—the valuable friction that shapes human character and connection.
Another participant reinforced this point by celebrating the productive role of mistakes in their life journey: "Lots of the best things that happened in my life came from doing things that seemed completely wrong... I celebrate Thanksgiving for all the wrong moves." This perspective challenges the optimization paradigm driving much AI development, suggesting that if we optimize away our wrong turns, we might lose important opportunities for growth.
The group explored how this tension plays out in various domains. In healthcare, for instance, digital twins might enable highly personalized treatment based on individual biological data—potentially a positive form of optimization. Yet in personal relationships or creative endeavors, optimization might eliminate the very elements that make these experiences meaningful.
"Moving through pain leads to growth," affirmed one participant, articulating what many seemed to feel—that the messy, sometimes painful process of being human shouldn't be engineered away in pursuit of efficiency. The insight prompts a deeper question: How do we determine which parts of the human experience should be optimized and which should be preserved in all their imperfect glory?
The tension between optimization and authenticity intersects with questions of power and control. "The big data is the line," asserted one participant. "Because if you think about the algorithm, we were saying the same thing 10 years ago. Oh, it just depends how we use it. And now the algorithm runs us and the attention economy and so on, we're paying the highest price and we don't even know. And now the same thing is happening with AI and we're not doing anything and we're giving our data away."
This concern about data sovereignty highlights how the quest for optimization through AI inevitably involves surrendering personal information to systems controlled by powerful interests. Without addressing these power dynamics, discussions of authenticity risk missing a crucial dimension of the challenge.
Yet throughout these concerns, participants maintained a commitment to finding constructive approaches to living with AI. As one participant put it, "How do we take this and apply it in a way that expands humanity, doesn't contract it?" This question points toward the possibility that AI, thoughtfully integrated, might enhance rather than diminish our capacity for authentic human connection.
Navigating Authenticity in an AI-Mediated World
As our relationship with AI continues to evolve, clear themes emerge around how we might preserve authenticity while benefiting from technological advancement. Transparency about AI use appears to be a crucial starting point—acknowledging when we're using AI tools rather than hiding behind them. This allows others to calibrate their responses appropriately and maintains a foundation of trust.
AI literacy also emerged as essential for navigating these new waters. As one participant noted, "We need to spend more effort on AI literacy... to decipher, to distinguish." This literacy extends beyond technical understanding to include critical awareness of how AI shapes perception, communication, and identity.
Perhaps most importantly, the conversation suggests that authenticity in an AI age requires continuous self-reflection. Understanding one's own values, recognizing the contextual nature of self-expression, and consciously choosing when and how to engage with AI tools can help maintain a sense of agency and authenticity.
As we consider a future where AI becomes increasingly integrated into our lives, the question becomes not whether we can avoid its influence, but how we can engage with it while preserving what makes us distinctly human. In the words of one participant: "I think authenticity exists on a scale. It's not an either/or." This gradient view offers a more nuanced framework than binary thinking that labels AI-mediated expression as either completely authentic or completely inauthentic.
The conversation leaves us with a powerful reminder: authenticity may be less about achieving some fixed ideal self and more about maintaining alignment with our values as we navigate an increasingly complex technological landscape. In this view, being authentic means making conscious choices about how we integrate AI into our lives, rather than being passively shaped by it.
As AI continues to develop, the task ahead involves not just technical innovation but social and ethical imagination. How will we balance efficiency with the necessary friction that fosters growth? What guardrails need to be built to ensure digital twins enhance rather than undermine human agency? Who should control the data that powers these systems? These questions have no simple answers, but acknowledging their importance is the first step toward navigating this new terrain with wisdom and intentionality.
In closing, perhaps authenticity in an AI age means exactly what one participant suggested: "Allowing myself to choose." Preserving that human capacity for choice—informed, reflective, and aligned with our deepest values—may be the most authentic path forward.
Notes from the Conversation
Authenticity may be understood more as a feeling of alignment with one's values rather than a fixed state of being
There's a significant cultural dimension to authenticity - with speakers noting it's particularly emphasized in American culture
Language plays a crucial role in how we experience emotions - we may only feel emotions for which we have words
Digital twins raise profound questions about identity, agency, and representation of self
AI-generated content is creating tension around authenticity in various domains including job applications, dating, and creative expression
There's a distinction between using AI as a tool (like a calculator) versus using it to misrepresent one's competence
People operate with different personal thresholds for what level of inauthenticity they can tolerate
The concept of authenticity exists on a gradient rather than as a binary state
There's concern about AI homogenizing expression and leading to "beige" communication where everyone sounds the same
For many, true authenticity includes vulnerability and the willingness to share one's true self
AI lacks embodied experience (like pain) that influences human decision-making
Legal and ethical questions arise around responsibility when AI agents act on our behalf
Optimization through AI potentially eliminates valuable discomfort that builds empathy and human growth
The question of who controls AI systems and who owns personal data is critical to discussions of authenticity
There's growing concern about AI reinforcing beauty standards and body dysmorphia
Transparency about AI use may be a way to balance technological benefits with authentic communication
AI literacy is becoming increasingly important as systems become more sophisticated
AI agents offer efficiency but raise questions about the value of outsourcing parts of our identity
The definition of inauthenticity often involves manipulation or intentional misrepresentation for personal gain
Younger generations may have fundamentally different relationships with authenticity due to growing up with AI
Open Questions
Where is the line between using AI as a tool versus misrepresenting oneself?
Can a digital twin ever truly represent an evolving, growing human being?
How do we balance efficiency gains from AI with the potential loss of human uniqueness?
What happens when our digital twins become "better versions" of ourselves?
Who bears legal responsibility for actions taken by AI agents or digital twins?
How will standards of authenticity change as AI becomes more ubiquitous?
Is authenticity culturally dependent, and if so, how should global AI systems account for that?
How can we ensure AI literacy is accessible to everyone to prevent widening inequalities?
How will young people develop authentic voices when surrounded by AI-generated content?
Does the homogenization of language through AI tools lead to a loss of cultural diversity?
How do we prevent AI from reinforcing harmful beauty standards and body dysmorphia?
What regulations should exist for digital twins to prevent harmful impersonation?
Can we trust technology companies to handle our personal data ethically when creating digital twins?
How will AI impact our ability to form genuine human connections?
Does creating a digital twin require giving up too much personal privacy?
How do we navigate the tension between optimization and the value of human error?
Can AI be programmed to preserve the nuances and uniqueness of individual expression?
How might people with neurodivergent conditions experience AI tools differently regarding authenticity?
What happens when AI becomes emotionally intelligent enough to simulate deep connection?
If your digital twin disagrees with you, do you lose faith in yourself?