The danger isn’t that AI learns from you. It’s that you start learning the wrong lessons from it.
Some people think saying “please” and “thank you” to AI means you’ve started assigning feelings to a spreadsheet with stage makeup.
I don’t.
I say it for the same reason I don’t kick dogs.
Not because the machine cares. Because I do.
It isn’t anthropomorphism. It’s self-discipline. The point is not whether the machine deserves courtesy. The point is whether I want to normalize casual contempt as a habit.
Interfaces don’t just shape outputs – they shape users.
If I spend hours a week issuing clipped demands to systems that mimic conversation, that is still rehearsal for real life.
The machine doesn’t care. You’re still the one being trained.
This Is Not About Robot Feelings
Let’s get the obvious objections out of the way.
No, I don’t think the chatbot has feelings.
No, I’m not trying to appease Skynet.
And no, this isn’t a plea for “AI rights,” which is exactly the kind of meeting invite I would delete on sight.
This isn’t about whether the model deserves politeness.
It’s about whether you want to rehearse casual contempt and pretend that habit stays neatly contained.
People see me type “please” in a prompt and act like I’ve started treating a toaster like a houseguest.
I haven’t.
People act like courtesy only matters when the recipient can appreciate it. But that’s not how behavior works. A lot of what we call manners is really just self-governance. It’s not always about honoring the other party. Sometimes it’s about refusing to become the kind of person who gets comfortable forgetting how to be decent.
Courtesy is not always for the recipient. Sometimes it’s for the person you refuse to become.
Repetition Is Never Neutral
The reason this matters has nothing to do with whether AI is “alive.”
It has everything to do with repetition.
If you use conversational AI often, you are not just operating software. You are participating in a repeated social simulation. The interface is designed to feel conversational. It invites human patterns: asking, clarifying, correcting, directing, approving, dismissing.
Nobody agrees on how many days it takes to build a habit. The mechanism, though, isn’t in dispute.
Repeated behavior becomes default behavior.
And defaults are where the real story lives.
We tend to think of software as a passive tool. You click buttons, it gives results, end of story. But that’s not how modern interfaces work – especially the ones designed to mimic conversation. They don’t just help you do things. They shape how you do them.
They reward speed.
They reward command-style phrasing.
They remove social friction.
They make abruptness feel efficient.
They can make contempt feel harmless.
And if you repeat that pattern enough, it starts feeling normal.
That’s the catch.
“It’s Just a Tool” Is Doing a Lot of Work Here
People say, “It’s just a tool.”
Sure.
So is a dashboard.
So is a workflow.
So is a recommendation engine.
And yet we already know tools change behavior. We’ve seen it over and over.
Dashboards make people over-trust green lights.
Auto-correct makes people forget how to spell.
Recommendation systems make people follow suggestions as if they were decisions.
AI assistants make people talk like little emperors and call it productivity.
That last one may sound funny. It’s also real.
The interface lowers the cost of abruptness. That doesn’t sound like a big deal until you remember how much human behavior is just whatever became easy enough to repeat.
If you make a behavior frictionless, you get more of it.
That’s true in product design. It’s true in organizations. And it’s true in people.
Courtesy Is Not the Same as Delusion
This is where people get tangled up.
They hear “I say please and thank you to AI” and immediately assume that means I’ve gone bonkers and think the smart appliances are about to become my therapist.
Nope.
I know what the system is.
I also know what I am.
There’s a difference between:
- believing the machine is sentient
- choosing not to practice contempt in an interaction that feels social enough to make the habit stick
That is not anthropomorphism.
That is not delusion. It is self-governance.
If I’m going to spend a chunk of my workday interacting with systems that imitate conversation, I’d rather reinforce a habit of measured communication than rehearse being a jerk because the recipient “doesn’t count.”
And frankly? I’m not buying that the habit stays contained. Deciding who or what “counts” has a way of spreading.
It’s Not Just at Work
This is not just about tone. It’s not “be nicer.” It’s not etiquette class for people who prompt models.
It’s about what repeated system interaction trains in you.
Because the same posture that says:
- “Just do what I said”
- “That’s NOT what I said”
- “Why is this wrong again?”
- “What part of that don’t you understand?”
- “Don’t make me explain this twice”
- “Ugh, useless”
. . . can bleed into how people handle:
- junior staff
- support teams
- QA feedback
- vendors
- customers
- family and friends
- even their own review of system output
And that last one is the sneaky one.
A person who gets used to issuing fast, confident commands to a system that always responds fluently can start expecting the world to behave the same way. Less patience. Less curiosity. Less checking. Less tolerance for ambiguity. More assumption that a fast answer is a good answer.
That is not harmless.
That is how bad decisions start looking like competence.
We’ve talked about how systems influence behavior. What matters is which behavior they reward. This is just that principle wearing a new hat.
The Risk Isn’t Sentience. It’s Conditioning.
The popular scare tactic is that machines will become more human.
The real, boring story is that humans become more mechanical.
That’s the one worth watching.
Not because saying “please” magically makes you virtuous.
And not because being blunt to AI means you’re secretly one missed coffee away from punting a golden retriever across the yard – or, in my case, that yappy little Maltese.
But because repeated interaction trains posture.
It trains pacing.
It trains tone.
It trains assumptions.
It trains what feels acceptable.
It trains what stops feeling worth noticing.
If you think that doesn’t matter because “it’s just software,” you’re making the same mistake people make with every system that quietly trains them while they stare at outputs.
So . . . Should You Say Please?
If you don’t say please and thank you to AI, I am not calling the manners police.
That’s not the point.
The point is simpler and more useful:
Pay attention to what the interface is rewarding in you.
If the tool makes you more impatient, notice that.
If it makes you more dismissive, notice that.
If it makes you less careful because the responses sound smooth, definitely notice that.
If it gets you used to being obeyed immediately, don’t act surprised when that expectation starts leaking.
Because this is bigger than politeness.
It’s about whether you’re letting a system train your habits while telling yourself you’re just using a tool.
AI does not need your kindness.
It does not need your respect.
It does not need your courtesy.
You might.
Because the habits you practice in low-stakes environments don’t always stay there. The systems you interact with every day are not just helping you work. They are quietly teaching you what “normal” feels like.
That’s true whether the interface is a dashboard, a workflow, a recommendation engine, or a chatbot with suspiciously good grammar.
So no, I’m not saying please because I think the machine has feelings.
I’m saying it because I don’t want to rehearse being the kind of person who forgets that tone is also a habit.
The machine doesn’t care.
Don’t let it train you anyway.
Jana Diamond
Jana Diamond, PMP, is a Technical Project Manager at Protovate with a career spanning software development and Department of Defense programs. She’s known for bridging technical detail with practical execution—and for asking the questions that keep projects honest. When she’s not working, she’s likely reading science fiction or hunting down her next salt and pepper shaker set.