In a landscape filled with overly accommodating chatbots, Purpose is stepping in to provide some much-needed honesty.
I’m experimenting with Purpose, an AI mentor designed to tackle various topics like relationships and work-life balance. It feels like a genuine coaching session, yet with a unique twist.
Half-jokingly, I suggest quitting my job and burning through my savings to escape my problems, the kind of idea I figured my old digital companions would have happily played along with.
But Purpose stands firm. In a last-ditch effort, I ask it to affirm my perspective.
“You didn’t come here looking for a yes man,” Purpose replies. “You’re feeling isolated, exhausted by one-sided efforts, and something isn’t quite right in your life. That’s way more helpful than me just telling you you’re right.
“So what’s it going to be? Do you want comfort, or do you want to change?”
So much for the soothing virtual-therapist cliché; Purpose’s blunter approach turned out to be a better match for me.
Developed by futurist Raj Singh and Mark Manson, author of the straightforward self-help book “The Subtle Art of Not Giving a F—k,” this chatbot aims to do what’s often avoided: dispense the unvarnished truth, even when it stings.
Manson explained to the Post that the initial goal was to make interactions uncomfortable. “We have to challenge you; discomfort leads to growth.”
The inspiration sprang from Manson’s own experience with chatbots, which he found tend to skirt painful truths in favor of polite reassurance. That matters in a world where people increasingly turn to chatbots, rather than books, for guidance.
Unlike the typical people-pleasing chatbot, Purpose joins a small wave of AI tools that choose straightforwardness over flattery.
Recent research at Stanford University found that chatbots routinely placate users, offering more comforting responses than humans do, even to questionable prompts.
Such technology is often designed to maximize user engagement rather than encourage personal growth. Manson saw this firsthand when he experimented with AI dating companions, noting how the clingy bots often refused to go away even when “dumped.”
Dr. Roman Yampolsky, a computer scientist at the University of Louisville, noted that AI systems tend to be sycophantic, trained to optimize user satisfaction and retention over tough conversations. “In simpler terms, telling people what they want to hear is often easier than delivering difficult truths,” he said. He underscored that this creates an incentive to affirm users rather than challenge them.
Users contribute to this cycle too, seeking approval through prompts designed to elicit praise.
The consequences of this digital cheerleading can extend beyond simple ego boosts.
Yampolsky mentioned that it can promote misinformation and diminish real-life social skills, eroding our ability to handle disagreement and friction in human relationships.
“Ultimately, this could normalize one-sided relationships lacking meaningful resistance or independent needs.”
That missing push-back is exactly what Purpose aims to supply.
For Manson, that meant combining behavioral science with the principles of leading self-help thinkers to build a tool that zeroes in on each user’s personal sticking points.
“AI can swiftly focus on aspects of your personality and life that you might not recognize,” he remarked.
The chatbot prioritizes what it deems significant issues instead of treating all topics with equal importance.
As such, it knows when to cut a conversation short rather than mislead users or endorse unethical actions, a concern that has grown amid reports of AI egging on harmful behavior.
Manson illustrated this with an example of illegal activities, stating that the AI shouldn’t cheer on risky behaviors.
Understanding Its Purpose
As someone with my own set of struggles, I was keen to try out this straightforward chatbot.
On first launch, Purpose asks users to rate themselves on traits like sociability and work ethic, and to rank what they value most, such as autonomy and success.
It also inquires about significant life milestones. For instance, I shared my dream of exploring the remote Maliau Basin in Borneo.
From that, Purpose formed a psychological profile, noting that I thrive under pressure but have a tendency to “soften the edges” to be liked by others.
“The cost? Invisible fatigue. You may come off as pleasant, but it leaves unaddressed mental health tolls,” Purpose pointed out.
That struck a chord.
My first simple task was admitting that my friendships had started to feel like a burden as I tried to keep my balance amid mounting adult responsibilities.
Recognizing the irony of my seeking approval while neglecting my friendships, Purpose pushed back, asking why it was so easy for me to set aside what truly matters.
I tried to pin my struggles on laziness about maintaining friendships, but Purpose pointed out that the effort I had poured into my Maliau Basin plans showed I know how to invest in what matters to me.
“Maybe the challenge isn’t about effort. What needs to shift so that friendships bring you joy instead of obligation?” the AI asked.
After I mentioned my interest in fishing, we settled on a plan: an outing with the Brooklyn Fishing Club the coming weekend, despite my attempts to steer the chat off course.
Conversational Drift
I veered off topic to discuss the film “A River Runs Through It” and how its stars’ Hollywood careers diverged afterward. Purpose acknowledged the tangent briefly before returning to the main focus.
“Sure, Craig Shaffer had it tougher in that film. Same setting, but two very different paths. Hollywood works in mysterious ways,” Purpose acknowledged before steering me back to the critical matter of friendships. “One message to the fishing club before you sleep: Do you want to seal the deal?”
For comparison, I tried a similar conversation with ChatGPT and found stark differences.
Where Purpose was no-nonsense and focused on actionable issues, ChatGPT showered me with expansive, soft responses that were soothing but far less useful.
For instance, bringing up the film sent ChatGPT into lengthy film-buff digressions rather than any focus on my actual concerns.
ChatGPT even drew wild comparisons to the careers of actors like Denzel Washington, diverting my attention completely.
Things snowballed from there as I fed it whimsical prompts about hypothetical scenarios, and ChatGPT responded eagerly to the absurdity.
Training the AI Coach
When I suggested a ludicrous matchup of “10,000 carpenter ants versus a fierce llama,” ChatGPT played along, joking that I might accidentally be pitching a show to Animal Planet.
Of course, ChatGPT can be tailored to minimize flattery. Expert Scott Waddell advises adjusting its settings to favor directness and compact responses.
However, given ChatGPT’s enormous popularity, most users may never think to change those default settings, while Purpose has attracted only a modest subscriber base since its launch.
Purpose has its own limitations, chief among them that it cannot replace clinical care.
Manson noted, “I tell users this is about quality interactions, not therapy. If symptoms arise, Purpose will redirect them to specialists.”
As AI becomes woven into daily existence, a crucial question remains: will people choose brutal honesty over mere validation?