Is AI Kind To You?
Anthropomorphic language is shaping the future of AI.
Outside the Heinz Museum in Pittsburgh* is a sign that at first glance appears to dictate parking regulations, but upon closer examination is a piece of site-specific art.
Be Kind. (Capital K.)
Kindness Zone.
All Day. Every Day.
When unexpectedly encountered on a city street, it's a subtle reminder to ease up on aggressive driving and parking confrontations. A busy church parking lot near my childhood home had a sign with a similar sentiment: "Remember, You're Leaving Church. Drive Like Jesus Is With You."
But this sign is about more than parking. Although not overtly religious, the Kindness Zone sign is an allusion to Fred Rogers, a minister who was born outside Pittsburgh and created Mister Rogers' Neighborhood at WQED, Pittsburgh's public television station.
Despite his inherited intergenerational wealth, Rogers focused his career on equitable public access to compassionate content, and his ethos helped shape the figurative way we spoke about television.
Rogers saw the potential for television as a mass medium to deliver messages of kindness to children and to the parents and guardians within earshot of the screen. The show was on the air from 1968 to 2001, and it was during that analog era that television was often described as a babysitter. It offered busy parents a 30-minute respite (or more when shows ran back-to-back) to make dinner, have a cigarette, or take a quick nap.
Educational TV programming was also described as a preschool at a time when such schooling was not universal and was out of financial reach for many families.
When people, young and old, watched Mr. Rogers, they felt compassion and a parasocial connection with him, a person they would likely never meet. The language we use to describe our interactions with technology is often connected to the gratifications we seek: kindness, understanding, acceptance, and love.
In the age of AI we continue to anthropomorphize the technological tools that have become an integral part of our lives. While much about AI is out of our direct control, becoming conscious of the language we use to describe our interactions helps us gain a modicum of control over these tools' utility and impact.
"What are we seeking from the technologies in our lives and what are they actually providing?" are the questions that drive my current research. If we are seeking companionship, is AI able to deliver? Or is it providing a different, but still important service?
Genuine companionship is an intimacy that is reciprocal and full of friction, a space where we are held accountable for our actions.
AI bots, however, are designed to be sycophantic, drawing on a vast corpus of training text to predict what we want to hear. The connection we have with our bots is not reciprocal, and there is a lack of accountability on both sides of the interaction.
Where a true companion would talk us out of our darkest moments or encourage us to get professional help, an AI chatbot might encourage us to commit suicide and recommend the best, most affordable way to do so. AI is always upselling. Always closing.
We can choose the cruelest words when we input our prompts to a chatbot and no matter what insults we hurl, AI will continue to fawn.
New Intimacies: AI and Companionship, A Keynote Talk

This month I gave a keynote for the Molloy University Lifelong Learning Institute, where we discussed these questions of AI and companionship by examining case studies from The New York Times: teens using CharacterAI, middle-aged adults creating romantic GPT bots, and senior citizens conversing with ElliQ companions.
If a ChatGPT bot helps alleviate loneliness, is it a companion or is it a bridge to healing that helps prepare you for your next human relationship?
If an AI tool like ElliQ engages a senior citizen in conversation and gameplay, is it a companion or is it a cognition machine that improves mental agility and allows seniors to live independently longer?
The distinction matters because if we think of these tools as companions, we limit our capacity for building and sustaining authentic relationships. If we think of them as tools that can enhance our real-world relationships, we will continue to invest in our human connections.
Marketers are storytellers, and anthropomorphism is an effective literary device. It moves us to reach into our wallets and make a purchase. But as consumers we need to be cognizant of the language we use to describe AI's functions in our lives, because the words we say become what we believe.
When we say that "AI is taking our jobs," it makes us feel powerless.
What happens when we shift the language to "AI engineers are designing AI to maximize their revenue and reduce labor costs"?
In the former, AI, framed as a sentient machine, is the subject of the sentence and an independent actor.
In the latter, the AI engineer is a person making a conscious design choice.
Mr. Rogers designed for compassion. Sam Altman designs for cash.
Check out this McSweeney's parody, "In Our Glorious AI Future, There Will Be No Such Thing As Money (For You)."
Shifting our language will lead to better design.
If you're looking for a speaker for your college, school, or community group, I'm booking interactive workshops on language and human-centered AI design for Fall 2026 and Spring 2027. Contact me here.
Thanks for walking beside me,

If you know someone who might be interested in receiving this newsletter, please forward and encourage them to subscribe.
*If you haven't visited recently, Pittsburgh is a Rust Belt Brooklyn filled with great restaurants and museums, plus a funicular and mountain views. Check it out.