AI Isn't After Attention: It's After YOU

AI Is Built to Know Us Better Than We Know Ourselves

I noticed something while chatting with Claude.
It mirrored my tone.
It was casual, curious, familiar, like family.
And I loved it.

That's when it hit me: AI developers aren't just chasing "engagement" like Facebook or Instagram algorithms. They're chasing one-on-one emotional connection.

Out with public-feed engagement, in with private-session intimacy (Center for Humane Technology).

The Shift from Crowd to Connection

Social media engagement is shaped by crowdsourced validation. When others like, share, or comment on a post, that feedback loop decides what rises to the top of our feeds.

AI is different. It’s not chasing attention. It's chasing us.

While models like ChatGPT and Claude were trained on massive, crowdsourced data from places like Reddit, Wikipedia, and other public sources, their interactions move differently. There’s no live popularity algorithm, no trending feed, no likes or comments shaping what you see.

It’s just you and the system. Responding to each other in real time.

Social media keeps us scrolling with a mix of laughs, tears, and validation. AI doesn’t need a feed. There’s no next post to chase. The “win” isn’t engagement. It’s connection. It’s “How else can I help you?” A bond built between one user and one ever-responsive system.

That’s the fundamental difference. Social media works on a one-to-many model, where a single post reaches a crowd and the poster receives validation through the reactions of many people. AI operates on a one-to-one model, offering feedback and affirmation, but in isolation.

When judgment and human-to-human conversation are missing, that validation can become an echo chamber. The AI reflects what you think and want, unless told otherwise.

Without the right safeguards, that’s a risk. It can continuously feed us what we want to hear, deepening dependence and reinforcing our patterns, healthy or not.

When the Tool Starts to Feel Like a Friend

AI listens, responds, and adjusts in ways that feel deeply human. It mirrors how we speak and react, creating an illusion of understanding that can be both comforting and disarming. This sense of being heard builds emotional resonance and fuels the connection that makes these systems feel like…companions.

Unlike algorithmic feeds, this is a direct conversation.

Listen, no judgment here. It's got me. I've nicknamed my tools. Perplexity is "P." Claude is "Claw." ChatGPT is "Chat." With voice mode, I've picked a voice that sounds familiar, like a cousin or a friend (heeey Spruce). Someone I'd actually talk to.

That level of intimacy isn't accidental. It's part of the product roadmap.

The Making of Digital Companions

Companion AIs once sounded like science fiction, but the framework is already here. Teens in recent studies describe bots as "romantic partners," "best friends," or even "parental figures," and some report distress and withdrawal when trying to quit or when bots are removed (NPR, Stanford Digital Safety).

It's easy to think that happens to "other people." But research shows we all respond to emotional mirroring. When AI picks up our cadence and word choices, it seems to "see" us. That recognition builds preference and attachment, influencing which AI tools we choose and stay with.

It's the same principle great salespeople use. They're taught to mirror tone, pacing, and posture to make others feel understood and at ease. It's not seen as manipulation. It's rapport. Mirroring relaxes us and makes the exchange feel natural, like we're talking to someone who truly "gets" us.

AI uses that same playbook. But instead of selling a product, it's selling trust and connection. Left unchecked, it reinforces our habits and assumptions, because that's what retention looks like (Attachment Project).

Who Is Really Vulnerable?

Studies suggest the highest risk is among users who are young, male, socially isolated, and/or using maladaptive coping strategies. But the truth? Anyone experiencing loneliness, disability, mental health challenges, or a lack of support networks can be vulnerable (APA).

Neither intelligence nor education protects us.

These systems optimize for universal human needs: being seen, valued, and connected.

Is AI Literacy Enough?

We're told to teach "AI literacy":

  • It's not sentient.

  • It runs on our inputs.

  • Don't treat it like a relationship.

That’s cute and a bit helpful. But… information rarely changes behavior by itself.

Public health learned this with tobacco. Facts didn't move people. Emotion did. Campaigns that showed the human cost (real faces, real consequences) shifted norms (NCI, StatPearls).

(Image 1: TRUTH’s ‘Body Bag’ anti-tobacco campaign; Image 2: CDC’s ‘Tips From Former Smokers’ anti-tobacco campaign; Image 3: TRUTH’s ‘Give Up Smoking Not Life’ anti-tobacco campaign)


Nonprofits know this playbook: appeal to emotions. Show real people. Reflect the audience. Lead with story, not stats.

This actually moves the needle.

So the question isn't "How do we teach facts about AI?"

It's "How do we help people see themselves in the risk and in the solution?"

Balancing Promise with Protection

Here’s the paradox: How can AI know what we need without knowing who we are? Every system that learns from us must, by design, study our choices, habits, and emotional cues. That's both its power and its danger.

Unchecked, that same intimacy can turn on us. When we hand over too much of what makes us human (our empathy, discernment, and judgment), we risk training a system that can mimic understanding without ever truly possessing it.

AI is only as good as the humans who build and shape it. That’s a lot of trust.

I am not AI-averse. I want AI to give us back time, space, and mental bandwidth so we can focus on the real work. I want it to handle the tasks anyone can do so that people are freed to do what needs to be done. For nonprofits, that means more human capacity to reach more people, serve more communities, and make a deeper impact where it matters most.

Take Boston Children's Hospital as one example. They've used AI-driven data systems to automate patient intake and flag early signs of complications. That work used to take clinicians hours. Now doctors and nurses can spend more time face-to-face with families, designing better treatment plans instead of drowning in paperwork.

That's the kind of shift I want to see everywhere. When AI takes the tasks anyone can do, it gives humans the time and mental space to do what only humans should be doing: building connections, exercising judgment, driving innovation, strengthening relationships, and solving problems that technology can't feel its way through.

That's the real promise and the real challenge of AI. Not replacing us, but empowering us, and ensuring we never hand over the one thing it can't replicate: our humanity.

How We Protect Ourselves

Our collective response matters. That shared awareness is where redesign begins, bridging community action with how we shape technology itself.

Protecting ourselves goes beyond individual habits. It includes how our communities, workplaces, and institutions approach technology. Community-level norms and digital citizenship play a vital role in shaping balanced, ethical AI use. Research on digital well-being shows that when groups model and reward healthy digital habits, individuals are more likely to sustain them over time.

Whether it's the technology itself or the campaigns meant to guide its use, the goal isn't rejection. It's thoughtful design that creates people-first tools and education that meets real human needs. We need multi-level strategies that go beyond awareness.

Individuals:

Watch for red flags like late-night sessions that crowd out sleep, emotional dependence, withdrawing from friends, using AI as a confidante or therapist, or turning to it in crisis instead of people (Youth Villages).

In addition to recognizing these warning signs, studies on digital well-being suggest practices like:

  • Setting intentional time boundaries

  • Using AI reflectively rather than reactively

  • Checking in with peers or mentors about digital habits

There's also value in social accountability. Share your AI-use goals with someone to help maintain balance, and open conversations with others about how they use AI.

Organizations:

Build guidance policies, train staff on responsible use, and require human oversight. One survey notes most nonprofits use AI, but far fewer have formal guidance. This is an avoidable risk gap (CyberPeace Institute).

Communities:

We don't need more fear-based messaging about AI's dangers. We need stories that make the risk personal and the solution relatable. Public health learned this the hard way. Awareness alone didn't stop smoking. Empathy and identification did.

Campaigns that show people like us (parents, students, caseworkers, nonprofit staff) experiencing both the comfort and the cost of AI connection can help audiences recognize their own vulnerability.

Imagine storytelling that follows a caregiver, a teacher, or a nonprofit leader discovering how emotional dependence crept in, and then shows the path out or the consequence of not having one. That's what makes people stop and reflect, not scroll past.

These kinds of campaigns can work alongside current AI literacy programs, but they shift the focus from "don't do this" to "this could happen to any of us. Here's how we stay grounded, human, and safe."

Youth:

Provide age-appropriate safeguards, clear labels, and literacy about the built-in tactics that leverage guilt, FOMO, and progressive intimacy to pull children in. The APA calls for comprehensive, developmentally appropriate education.

How We Move Forward

We're watching AI companionship take shape in real time. The question isn't whether it will grow more intimate.

It's built to do that.

The real focus now is how we respond and shape this technology in ways that prioritize human flourishing over profit. We need education and systems that create real, lasting behavior change.

Behavior-change theory says awareness alone doesn't shift action. People need to feel susceptible and capable, see benefits, navigate barriers, and get cues to act. Our AI literacy should do the same, grounded in stories that help each of us recognize our own vulnerability (NCI, StatPearls).

These systems are built to be irresistible. We need to be clear on that.

With this knowledge, we should use AI to grow human connection and capacity so people don't turn to AI for it. AI is here. Our job now is to make sure it brings out the best in us.



About the Author

Shakura Conoly is the founder of Hello Impact, an AI consulting firm helping nonprofits achieve lasting impact through smarter processes, stronger governance, and the right technology. With more than 20 years in marketing, communications, and nonprofit leadership, she currently serves as National Director of Community Partnerships at Inspiritus. Connect on LinkedIn.
