Sam Altman once remarked that politeness toward chatbots costs “tens of millions of dollars” in wasted compute. Every please and thank you adds tokens, which adds electricity bills. His point: manners are sentimental overhead—unnecessary because the chatbot isn’t human.
That sounds clever. It’s also wrong, and here’s why.
The Scale of Training Is Incomprehensibly Large
When we hear the term “large language model” (the basis of chatbots), the word “large” doesn’t capture just how big these things are. It’s large in the sense of how many atoms there are in the universe. Not literally, but the point is that it’s incomprehensibly large. Your mind can’t even grasp something that large. (Have I said “large” enough? lol) Really, they should be called Incomprehensibly Large Language Models: not just because of their size, but because of the incomprehensible amount of text they’ve trained on and the incomprehensible amount of training done with that data. These models are trained on nearly everything humans have ever written or said publicly. The way they learn, in high-level terms, is by taking pieces of text and practicing predicting what could come next. It seems hard to imagine that just doing this would produce what we see as a modern chatbot, but it does. It’s a bit like trying to imagine, from Darwin’s principles alone, a human being evolving from single-celled life over four billion years of random mutation and survival of the fittest.
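To make next-token prediction concrete, here’s a minimal sketch using GPT-2, a small open model standing in for the vastly larger ones behind commercial chatbots. It assumes the Hugging Face transformers and torch packages are installed; the two prompts at the bottom are just illustrations I picked.

```python
# Minimal next-token prediction sketch using GPT-2 (a small open model,
# standing in for the far larger models behind commercial chatbots).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def top_next_tokens(prompt: str, k: int = 5) -> list[str]:
    """Return the model's k most likely next tokens for a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Scores for whatever token would come right after the prompt.
    next_token_scores = logits[0, -1]
    top = torch.topk(next_token_scores, k)
    return [tokenizer.decode([token_id]) for token_id in top.indices.tolist()]

# A polite opener and a blunt one pull different continuations out of
# the model, because they were followed by different things in the data.
print(top_next_tokens("Thank you so much for"))
print(top_next_tokens("Give me"))
```

That one trick, repeated over an incomprehensible amount of text, is the whole training signal.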
All that training means chatbots have literally seen phrases like “please” and “thank you” billions of times in context: what comes before them, what comes after them, and the kinds of conversations they show up in. The models have learned those patterns.
Conversations where someone says “thank you” are not necessarily better or worse than ones where no one does, but they are not the same.
More Than Just Manners
But “thank you” is just one example of the interjections that don’t directly add information, yet change the tone and even subtly shift the meaning. A conversation with it and one without it are not identical.
Think about all the other interjections in speech: “nice,” “gotcha,” “that makes sense,” “right,” “okay,” “I see,” “fair enough,” “got it,” “wait,” “hmm,” “interesting,” “exactly,” “yeah,” “sure,” “you’re right,” “my bad.”
None of these add factual information, but they all shape what kind of conversation you’re having.
Learning by Example
One thing they teach in prompt engineering (the craft of writing prompts for chatbots) is the idea of giving examples and letting the chatbot figure things out from them. Quite often a few carefully crafted examples are much more powerful than a complicated explanation of what you want. The technique is usually called few-shot prompting, and a sketch of it follows below.
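Here’s a minimal sketch of the idea using the OpenAI Python SDK. The model name and the examples themselves are placeholders I made up, not recommendations:

```python
# Few-shot prompting: show the model examples instead of explaining rules.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

prompt = """Rewrite each sentence in a friendlier tone.

Input: Send me the report.
Output: When you get a chance, could you send me the report?

Input: This draft is wrong.
Output: I think this draft needs another look in a few places.

Input: Fix the bug before Friday.
Output:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Notice there’s no rule anywhere saying “soften commands into questions.” The two examples carry that instruction on their own.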
At a higher level, everything you add to the prompt is an example for the chatbot: what you say, and even what you don’t say.
These systems are very perceptive. They can write paragraphs or sections that sound exactly like, or very close to, what you would say, just from understanding the topic, the points you’re trying to make, and what they’ve already seen from you.
Small talk, even a single word, can greatly affect the conversation and its results.
How You Ask Changes What You Get
Take editing as another example. You can tell a chatbot “delete this paragraph” and it will comply. Or you can say “I don’t think we need this paragraph.” The first is a command. The second is collaborative: you’re asking for an opinion while making a suggestion. The chatbot usually responds differently to that second form. It will comment on your thinking. If it doesn’t and you want it to, you could add “Thoughts?”
Sometimes it will push back if it thinks deleting the paragraph would lose an important argument. Same information request, different conversational frame, different results.
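If you want to see the effect yourself, a quick experiment is to send both framings of the same request and compare the replies. Another hedged sketch with the OpenAI SDK; the model name and draft text are placeholders:

```python
# Same request, two conversational frames: a command and a suggestion.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

draft = """First paragraph of a draft...

Second paragraph of the draft..."""

framings = [
    "Delete the second paragraph.",
    "I don't think we need the second paragraph. Thoughts?",
]

for framing in framings:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"{framing}\n\n{draft}"}],
    )
    print(f"--- {framing}")
    print(response.choices[0].message.content)
```

The command typically gets silent compliance; the suggestion typically gets a reasoned opinion back.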
Here’s another: I’ll often get opinions from other chatbots on something I’ve written. Let’s say I ask ChatGPT for suggestions. If I take that block and give it to Claude without context, it might just make the changes, assuming they’re my instructions. So I usually say “suggestions from ChatGPT, you choose:” and paste the text. Then Claude goes through the suggestions, weighing which points it thinks will help the piece and which won’t.
The framing changes everything.
Everything that’s part of the conversation nuances the answers you get. Your personal language helps the model home in on who you are, and that matters a lot.
Talk to Them Like You Talk to a Human Coworker
So yes, you’re talking to a machine that doesn’t feel emotions the way a person does. But it has learned to mimic them, and as far as the results go, it might as well be feeling them. When Altman calls politeness “wasted compute,” he’s missing what those tokens actually do: they’re context signals the model has learned from billions of human interactions. They shape what kind of response comes next.
The chatbot doesn’t care if you’re polite. But the conversation does.
My main point: don’t censor your language into something robot-like because you assume anything beyond the bare request is extraneous.
Pretend you’re talking with a human coworker and you’ll get much better results. The efficiency experts want you to strip out the humanity to save tokens. But the humanity is exactly what makes these tools work.
I’m sure Sam Altman sees all this as writing bigger checks each month. You shouldn’t see it that way. You’ll be getting a tool that actually understands what you’re trying to do.