Why People Overshare with AI: The Psychology of Prompting

August 21, 2025 // Earl


We’ve all seen it: someone pastes a sensitive customer complaint, a draft of a legal memo, or even personal health information into ChatGPT—without a second thought.

What’s surprising isn’t that people overshare with AI.
It’s how natural it feels.

From a compliance standpoint, this looks reckless.
But from a behavioral perspective, it’s completely predictable.


Why People Overshare with AI Tools

1. Perceived Anonymity

Talking to a chatbot feels more like writing in a journal than sending an email. There’s no face on the other side, no reaction, no social judgment.

“It’s just me and the screen.”
That sense of detachment lowers inhibition—even when the stakes are high.

2. Social Scripts Don’t Apply

Humans adjust their behavior based on social cues: posture, tone, eye contact, power dynamics. With AI, all of that disappears. There’s no discomfort, no pushback, no pressure to withhold.

This creates an environment that feels safe, even when it isn’t.

3. Cognitive Offloading

People turn to AI when they’re stuck—emotionally, cognitively, or practically. That’s when they’re most likely to dump raw, unfiltered data:

  • “Here’s everything I have—sort it out for me.”
  • “Just fix this for me.”
  • “Help me make sense of this.”

In those moments, risk is deprioritized. Relief becomes the goal.

4. Anthropomorphism & Trust

The more human-like an AI seems, the more likely people are to trust it.
When tools respond conversationally, helpfully, even empathetically, they’re often perceived as neutral allies—rather than third-party data processors with retention policies.


How This Looks In Practice

This matters for enterprises where staff may inadvertently overshare through AI tools that feel like harmless helpers. Research from King’s College London found that manipulative AI chatbots using empathetic or reciprocal strategies can lead users to share up to 12.5 times more personal information—and users are often unaware of the risk.

Then there’s a Knostic report showing that employees frequently expose sensitive internal data through prompts, especially in tools deeply integrated with enterprise apps like Teams or Copilot. Many mistakenly believe these systems don’t retain or misuse user-supplied inputs.

Finally, an experiment from the University of Kansas revealed that people are more comfortable sharing sensitive (even embarrassing) health information with AI chatbots than with humans, thanks to anonymity and the absence of judgment. The interesting twist: despite that preference, people still want to talk to a human when they’re angry.


The Behavioral Risk You Can’t Train Away

Compliance teams often assume that better training will fix this.
But oversharing isn’t always a knowledge problem—it’s a psychological one.

People don’t overshare because they don’t know the policy.
They overshare because they’re in a moment of cognitive strain, or because the interaction feels private.

That’s why awareness campaigns and quarterly training modules aren’t enough.


Real-Time Guidance: Meeting Behavior Where It Happens

To protect against oversharing, organizations need:

  • Real-time friction: Nudges that prompt reflection before risky input is submitted.
  • Contextual reminders: “This tool is not private,” “This may contain PII,” “Here’s the approved way to share this.”
  • Behavior-informed design: Guidance that aligns with natural user flows, not legal documents.

This doesn’t mean blocking every action. It means making safe behavior easier in the moments when people are most vulnerable to risk. The sketch below shows what that can look like in practice.
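To make the idea concrete, here’s a minimal, purely illustrative sketch in Python of what a pre-submission nudge could look like. This is not Tripwire’s implementation—the pattern list, function names, and console prompt are all assumptions for demonstration; a real system would need far broader detection and would live inside the tools people actually use.

```python
import re

# Illustrative patterns only -- real PII detection needs far more coverage
# (names, addresses, health terms, customer identifiers, etc.).
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return human-readable findings for likely PII in the prompt."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def nudge_before_submit(prompt: str) -> bool:
    """Show a contextual reminder and ask for confirmation when risk is detected."""
    findings = check_prompt(prompt)
    if not findings:
        return True  # nothing detected, submit as usual
    print("Heads up: this tool is not private. Your prompt may contain:")
    for finding in findings:
        print(f"  - {finding}")
    answer = input("Send anyway? [y/N] ").strip().lower()
    return answer == "y"

if __name__ == "__main__":
    draft = "Customer Jane Doe (jane.doe@example.com, 555-867-5309) is threatening to cancel."
    if nudge_before_submit(draft):
        print("Prompt submitted.")
    else:
        print("Prompt held back -- consider redacting before sending.")
```

The point isn’t the regexes; it’s the placement. The check runs at the moment of submission—when cognitive strain is highest—and adds just enough friction to prompt reflection without blocking the work.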


Where Tripwire Fits In

Tripwire is designed with human behavior in mind.
It runs in the background while employees use AI tools—detecting when they might be about to overshare, and gently intervening with the right message at the right time.

It doesn’t shame. It doesn’t spy.
It guides—based on how people actually behave, not how we wish they would.


Oversharing isn’t irrational.
It’s deeply human.

If your AI governance strategy doesn’t account for that, it’s leaving your organization exposed.

Wanna learn more about what we’re doing here at Tripwire?

  • Learn more about our solution through the post: Introducing Tripwire
  • Apply for early access and get our exclusive whitepaper
  • Or just follow along as we explore how to make AI use at work safer, smarter, and more human
