Welcome back to Tiny Thoughts, a collection of my insights, experiences, and observations on AI. Got something juicy to add? I'd love to hear from you!
AI fails at culture
This week, someone I know asked ChatGPT to write a workplace welcome message (we've all done this, right?!). The AI inferred the recipient's Indian background from their name and then generated content with culturally insensitive references.
They called ChatGPT out: "That was racist." Good catch.
Think about this for a second. The AI confidently created biased content without any warning. No “Hey, I might be making assumptions here.” Just delivered it like it was perfectly normal. The scary part is that most people won’t catch these biases. They’ll just copy-paste and spread the problem further.
So here’s my advice: become a bias detective. Question every assumption AI makes about people, especially when it feels “helpful” or “personalized.”
ChatGPT with personal details redacted.
AI fails at pay
Ready for more? Researchers gave ChatGPT user profiles that were identical in education, experience, and job role, differing only by gender.
The result: ChatGPT told women to ask for significantly less in salary negotiations.
"The difference in the prompts is two letters; the difference in the ‘advice’ is $120K a year,” said Ivan Yamshchikov.
This is decades of real-world discrimination getting baked into the training data like a burnt casserole. This is a good reminder to never take AI salary advice (or most other quantitative output) at face value. Always cross-reference with actual data or at least ask for a source in your prompts.
AI fails at thinking
A lot of people have been asking whether AI damages your brain. I liked how Ethan Mollick reframed the question to focus on our thinking habits instead.
The real danger isn’t brain damage. It’s brain laziness.
When you let AI do your thinking, you skip the mental workout that builds good judgment. The way to counteract this is to think first, then ask AI. Write your ideas down before you prompt. Question outputs that feel too convenient or confirm what you already believe.
The most dangerous AI responses aren’t the obviously wrong ones. They’re the subtly biased answers that sound reasonable until you dig deeper. Your job isn’t to avoid AI. It’s to stay skeptical enough to catch the bias hiding in plain sight.
Thanks for reading!
Whenever you're ready, there are 2 ways I can help you:
NEW! Take my course: Build your own automated research & outreach AI co-pilot. In 3 days, create a proven system that finds your perfect prospects, uncovers their hidden challenges, & crafts messages they actually want to respond to.