Call any major enterprise’s customer service line and you’re increasingly likely to be met with an uncanny generative Artificial Intelligence (GenAI) voice on the other end. These chatbots are meant to streamline customer inquiries and have been trained on more hours of previously recorded conversations than one person can digest in their adult life. So why does it feel like they’re still years away from emulating how a real, human representative of a brand would sound? If humans can read the room, why shouldn’t AI?
As large language models (LLMs) increasingly step into public-facing roles, engineers will be tasked with incorporating a brand’s identity into their output to align their voice with the company’s tone and style, but this is much easier said than done.
Bland AI Endangers the Brand
If getting AI’s tone right sounds inconsequential, consider that GenAI is fast becoming the first touchpoint for consumers beyond published brand content. According to a 2024 Zendesk report, 70% of customer experience (CX) leaders planned to integrate generative AI into many customer touchpoints within two years; we’re already halfway there.
If AI can give customers more immediate responses, its potentially lackluster tone should be the least of a business’s worries, right? In actuality, Zendesk’s ‘2025 CX Trends’ survey found that more than two-thirds (68%) of consumers say they’re more likely to engage with and trust AI agents that exhibit human-like traits. Meanwhile, a recent study published in the Journal of Hospitality Marketing & Management found that revealing the presence of AI technology in product and service descriptions had a negative impact on purchase intentions.
Now, imagine a customer who expects a warm, personalized customer service representative to guide them through a purchase, or to help with a problem they’re encountering, but is instead met with a cold, detached tone that betrays an obvious use of GenAI. They may grow frustrated more quickly, or abandon the purchase or service inquiry altogether.
Beyond tone misalignment, the danger of using AI for public-facing communications is that a brand could start to sound like every other brand. Customers seek out brands with recognizable identities, and in language this boils down to carefully selecting certain words and turns of phrase. If customer-facing LLMs strip the fun, authority, or uniqueness from a company’s voice, they leave nothing with which customers can connect.
Before debuting AI in public-facing roles, brands must shape its speech to seamlessly convey their values from the first word.
Give AI a Crash Course, Then Continually Refine
Ask ChatGPT what its two main personality traits are, and it consistently tells you “clear” and “curious.” How can an algorithm have a personality? It comes down to creating an AI “persona” and training it to recognize choices that will help shape its language.

A precursor to this endeavor is identifying a brand’s core values and communication style. If no pre-written marketing decks or brand tone guidelines spell these out, businesses can feed their website copy, advertising materials, and blog posts into GenAI to help surface these characteristics.
Selecting the base model upon which a brand’s AI persona can most easily be built is the necessary first step, and clearly defined brand values will guide this choice. Model selection matters more than one might think, since some models are built to be conversational while others excel at producing deeply researched responses.
Once a business chooses its base model, it can begin to train that model to pull from the tone of specific documents, social media posts, and customer service interactions. Companies can and should create multiple personas to delineate between how the brand approaches actions in customer support forums versus private chats, for example.
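The persona-building step above can be sketched in code. The following is a minimal Python sketch, not a prescribed implementation: the Persona class, the persona names, and the example exchanges are all hypothetical, and the assembled prompt would be passed as the system message to whatever chat-completion API the business uses.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A named brand voice, defined by tone traits and example exchanges."""
    name: str
    traits: list[str]
    # (customer message, ideal on-brand reply) pairs drawn from real interactions
    examples: list[tuple[str, str]] = field(default_factory=list)

    def system_prompt(self) -> str:
        """Assemble a system prompt string from the traits and examples."""
        lines = [
            f"You are '{self.name}', a customer-facing voice for the brand.",
            "Always write in a tone that is " + ", ".join(self.traits) + ".",
        ]
        for customer, reply in self.examples:
            lines.append(f"Example customer message: {customer}")
            lines.append(f"Example on-brand reply: {reply}")
        return "\n".join(lines)

# Separate personas for public support forums vs. private chats
forum_voice = Persona(
    name="Community Guide",
    traits=["upbeat", "concise", "playful"],
    examples=[("Where do I find my order number?",
               "Great question! It's right at the top of your confirmation email.")],
)
chat_voice = Persona(
    name="Support Partner",
    traits=["warm", "patient", "reassuring"],
)

print(forum_voice.system_prompt())
```

Keeping each persona as data rather than a hand-edited prompt makes it easy to maintain several voices side by side and to swap in fresh example exchanges as the brand refines its tone.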
Before launch, businesses must test and refine the AI to hone its messaging. Companies must give the GenAI model clear feedback when it misses the mark and note how it can improve. GenAI will most likely always need to collaborate with people to maintain a human touch, but it will require less granular oversight as it improves.
Once public-facing AI is up and running, the largest leap it will have to make is identifying question complexity and moderating its tone accordingly. AI might answer a question with complete confidence, not recognizing the nuance it requires. Sharing examples of common customer questions at varying levels of difficulty, and encouraging the LLM to recognize them, will help it avoid doling out wrong answers to unsuspecting customers. More complicated questions might even call for a separate persona altogether, such as a ‘technical expert’ for involved repair questions as opposed to a ‘delightful helper’ for run-of-the-mill product questions.
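One lightweight way to make that persona hand-off concrete is a router that sits in front of the model. This Python sketch uses keyword cues purely for illustration; the cue list is invented, the persona names come from the example above, and a production system would more likely use an LLM-based classifier to judge question complexity.

```python
def route_persona(question: str) -> str:
    """Pick a persona for a customer question based on rough complexity cues.

    A keyword heuristic stands in here for what would realistically be an
    LLM classifier; it only illustrates the routing structure.
    """
    technical_cues = ("error", "repair", "firmware", "warranty", "not working")
    q = question.lower()
    if any(cue in q for cue in technical_cues):
        return "technical expert"
    return "delightful helper"

# Complicated repair question vs. run-of-the-mill product question
print(route_persona("I keep getting an error after the update"))
print(route_persona("Do you have this in blue?"))
```

The router’s output would then select which persona’s system prompt is sent to the model, so a brand can answer a firmware question and a color question in appropriately different voices.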
Human Expectations, Machine Conversations
As AI grows stronger, humans are asking it more varied and nuanced queries, notes Google’s Elizabeth Reid. The old approach of drafting a flowchart of pre-written answers to common questions and tasking a traditional chatbot with steering generic conversation toward those scripted queries is now nowhere near satisfactory.
Customers will soon expect that if a company delegates its few precious individual interactions with them to AI, the bot had better be as good as, if not better than, the human it replaces. If it is not, businesses can expect more angry demands to speak with an operator, and more complaints about a lack of care for the customer experience.
It all starts with nailing the tone, and an AI trained to emulate the best elements of a company’s unique voice can become a brand’s best spokesperson.