Your human guide to the AI era

Subscribe below to begin. Zero spam. Privacy guaranteed.

Jul 31 • 2 min read

AI failed us this week
Welcome back to Tiny Thoughts — short insights that punctuate my longer-form newsletter. Below you'll get my recent experiences, discoveries from books and podcasts, and observations on emerging AI trends.

This week brings notes on some uncomfortable truths about AI failures. I’m sharing three observations that reveal something important about the gap between AI’s promise and its reality.


AI fails at culture

This week, someone I know asked ChatGPT to write a workplace welcome message (we've all done this, right?!). The AI inferred an Indian background from the recipient's name and generated content with culturally insensitive references.

They called ChatGPT out: “That was racist.” Good catch.

Think about this for a second. The AI confidently created biased content without any warning. No “Hey, I might be making assumptions here.” Just delivered it like it was perfectly normal. The scary part is that most people won’t catch these biases. They’ll just copy-paste and spread the problem further.

So here’s my advice: become a bias detective. Question every assumption AI makes about people, especially when it feels “helpful” or “personalized.”

AI fails at pay

Ready for more? Researchers gave ChatGPT pairs of user profiles that were identical in education, experience, and job role, differing only by gender.

The result was that ChatGPT told women to ask for much less in salary negotiations.

“The difference in the prompts is two letters; the difference in the ‘advice’ is $120K a year,” said Ivan Yamshchikov.

This is decades of real-world discrimination getting baked into the training data like a burnt casserole. It’s a good reminder to never take AI salary advice (or most other quantitative output) at face value. Always cross-reference with actual data, or at least ask for a source in your prompts.

Source.

AI fails our thinking?

A lot of people have been asking if AI damages your brain. I liked how Ethan Mollick reframed the question to focus instead on our thinking habits.

The real danger isn’t brain damage. It’s brain laziness.

When you let AI do your thinking, you skip the mental workout that builds good judgment. The way to counteract this is to think first, then ask AI. Write your ideas down before you prompt. Question outputs that feel too convenient or confirm what you already believe.

Source.


The most dangerous AI responses aren’t the obviously wrong ones. They’re the subtly biased answers that sound reasonable until you dig deeper. Your job isn’t to avoid AI. It’s to stay skeptical enough to catch the bias hiding in plain sight.

Thanks for reading!

Whenever you're ready, there are 2 ways I can help you:

New! Take my course: Secure your spot for my upcoming course on building your own automated research & outreach AI co-pilot. In 3 days, create a proven system that finds your perfect prospects, uncovers their hidden challenges, & crafts messages they actually want to respond to.

Your questions, answered: DM me on LinkedIn with any questions you have on today's newsletter or anything I've published in the past.

