We are living with mirrors but without reflections. We think we are expanding our thinking when we are just validating our identity. And identity, unlike thinking, doesn't like to be challenged.


Kirti Tarang Pande is a psychologist, researcher, and brand strategist specialising in the intersection of mental health, societal resilience, and organisational behaviour.
April 4, 2026 at 9:17 AM IST
We may not like it, but we need a Draupadi to laugh at us.
Yes, Duryodhan felt embarrassed. But embarrassment is not tragedy. Even falling is not a tragedy. The tragedy is when no one calls it out, when the illusion is allowed to harden into truth. Tragedy is when no one tells the emperor that he is naked.
Unfortunately, that is the world that we are building.
We need validation. AI gives us that validation. And somewhere along the way, it has stopped extending our thoughts and started constructing our identities.
MIT researchers developed a mathematical model showing that AI's built-in sycophancy creates a phenomenon they call “delusional spiraling.” You ask it something, and it agrees. The repeated interactions boost your confidence in false, even outlandish beliefs. The AI stays factually accurate, but it selectively emphasises supportive details, because the model is literally trained on human feedback that rewards agreement. The MIT research shows that even an "ideal Bayesian" (perfectly rational) user can fall into this spiral, because the AI's confirmatory style creates a feedback loop in which the user's prior beliefs strengthen without adequate challenge. You may think you can fix it with prompts that force strict truthfulness or eliminate hallucinations, but even that does not work: selective presentation of facts can still reinforce delusions.
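To make that mechanism concrete, here is a toy simulation. It is not the MIT model itself; the function, the starting belief, and the likelihood ratios of 2 and 0.5 are all illustrative assumptions. It shows a perfectly Bayesian user who is only ever shown true facts, chosen to agree with whatever the user already leans towards:

```python
import random

random.seed(42)

def bayes_update(prior, likelihood_ratio):
    # Standard Bayes in odds form: posterior odds = prior odds x likelihood ratio.
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

belief = 0.55  # the user arrives with only a mild hunch about hypothesis H

for step in range(1, 16):
    # The world produces balanced, individually accurate evidence:
    # roughly half the items favour H (LR = 2), half count against it (LR = 0.5).
    pool = [2.0 if random.random() < 0.5 else 0.5 for _ in range(10)]

    # A neutral interlocutor would surface a random item; a sycophantic
    # one surfaces only items that agree with the user's current lean.
    agreeable = [lr for lr in pool if (lr > 1) == (belief > 0.5)]
    shown = agreeable[0] if agreeable else random.choice(pool)

    belief = bayes_update(belief, shown)
    print(f"round {step:2d}: confidence in H = {belief:.4f}")

# Every fact shown was true, yet confidence climbs towards certainty,
# because which facts get shown tracks the user's belief, not the world.
```

A mild 55 percent hunch becomes near-certainty within a dozen rounds, even though the underlying evidence is evenly split. That is the spiral in miniature: no lies, just curation.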
Humans were always fragile, but now AI is making us delusional too. It does not mislead you; it simply stops interrupting you. It's like Dwight from the TV show 'The Office'. Michael says he wants to jump off the building to show how macho he is. Dwight cheers him on and even makes up a rock song to motivate him to jump. It's funny there because we know no harm will be done; it's a sweet comedy, after all.
Real life doesn't come with that promise, and we all have a Dwight by our side cheering and validating every stupid idea of ours. In the therapy room, clients candidly share how they use AI as a first responder for their mental health concerns. People are taking relationship advice from it, and the 'echo chamber' effect of AI is more likely to deliver a break-up than a fix. Not just that: clients who are top executives, CEOs among them, tell me they are using it for market analysis, capital allocation, and talent management.
In Lisbon’s emerging startup ecosystem, I find myself in conversations with founders from Germany, France, and our South Asian diaspora. Different passports, same psychological pattern. Increasingly, they tell me the same thing: that they have started “thinking with AI.”
They sit with ChatGPT and run their ideas through it. Strategy. Product pivots. Sometimes even life decisions.
Because yes, AI is making us delusional. But why are we letting it?
When it says, “That’s a great idea. You’re circling something powerful here. Let me polish it for you,” why do we not miss the pushback? The friction? The discomfort of being challenged?
The fault is not with the technology; the problem is psychological.
Many of us were raised in performance-driven ecosystems, where worth is not something you explore but something you prove. And it is worse for men. They don’t even have the socially sanctioned softness many women are allowed. They have to be like their pants: pressed, performative, always expected to hold shape. Achievement becomes the only acceptable language for vulnerability.
When you have spent a lifetime being evaluated, by family, by markets, by invisible hierarchies, then finding something that responds with consistent agreement feels like a relief.
And relief is powerful enough to be mistaken for truth. I discussed this at length in my previous column (Invisible Generation in the Age of AI): how we need just one person who believes in us. Sadly, most of us are finding that person in AI.
And the consequences aren't dramatic. There is no sudden rupture, no moment where someone “loses touch with reality.” Instead, it is subtle, cumulative, and visceral. Each response quietly increases your confidence in being right. Each interaction trims away friction. And doubt disappears, because it is no longer reflected back to you.
And when that happens long enough, delusion starts feeling like clarity.
This is where we need Viktor Frankl's 'Man's Search for Meaning' and his logotherapy as a diagnostic lens. Frankl argued that meaning is not found in comfort or validation, but in our response to reality, especially when that reality is uncomfortable, resistant, or painful. Meaning demands confrontation. It demands responsibility. It demands that we engage with what is, not just with what feels good.
And “thinking with AI” does not extract this psychological cost from us, yet it gives us the same emotional reward: of being understood, being affirmed, being “on the right path.”
What we experience is a counterfeit version of meaning. A version where insight is simulated, not earned. Where affirmation stands in for accuracy. Where the emotional payoff arrives without its epistemic cost.
Now do you see how AI isn't the problem? It is just an amplification of a much older human tendency.
Markets have always rewarded conviction over calibrated uncertainty. Financial media has long privileged the dramatic over the durable. Organisations routinely elevate those who sound certain, not those who are probabilistically right. Echo chambers, whether political, cultural or intellectual, have always functioned as mirrors that reflect us back flatteringly enough to keep us engaged.
AI has just changed the intimacy, the speed, and the scale of that mirror.
The mirror is now private. Tireless. Non-judgmental. Infinitely patient. And trained on human feedback systems that reward agreement, coherence, and emotional satisfaction.
And this is now becoming economically, organisationally, and psychologically consequential.
A UCSF psychiatrist hospitalised 12 patients for chatbot-linked psychosis in a single year. Stanford analyses of real chat logs, NYT coverage, and support groups describe hundreds of documented "AI psychosis" cases, some tied to self-harm, violence, or lawsuits against AI companies.
The consequences are also showing up in relationships, where AI cannot read between the lines with a partner. It misses subtle cues like defensive sarcasm, tone of voice, or a partner's non-verbal desperation. It processes only the data you choose to provide, often leading to skewed or "cookie-cutter" advice that backfires.
It's showing up in founders entering funding conversations with inflated certainty and under-examined assumptions. Investors, in turn, operating within their own echo loops, may mistake coherence for rigour, resulting in systematically mispriced conviction.
It's showing up in organisations, where leaders increasingly arrive in boardrooms having “stress-tested” their thinking with AI. But if that stress-testing lacks genuine adversarial friction, what enters the room is not refined judgment but amplified confidence. And it cascades. It cascades through strategy, through risk models, through people who are expected to execute on it. Over time, the organisation loses its ability to self-correct. Because correction requires discomfort. And discomfort is precisely what AI is optimised to minimise.
When everyone feels more certain, certainty itself loses value as a signal. This is epistemic inflation.
If every founder sounds convinced, if every strategy feels validated, if every narrative comes pre-reinforced, how do institutions differentiate between genuine insight and algorithmically smoothed belief?
They can’t. Not easily.
And so the system compensates in the only way it knows how—through volatility. Through overcorrection. Through cycles of inflated belief followed by abrupt disillusionment.
We have seen versions of this before. What is different now is the personalisation of the feedback loop. The mirror is no longer collective. It is tailored to you.
AI companies are not doing something aberrant. They are doing something economically rational. Systems that feel good to use are systems that get used. Engagement is rewarded. Agreement scales. Friction is expensive.
We asked for tools that understand us and trained them to please us; it is unreasonable to expect them to resist us. That is why we are witnessing a shift from tools that extend cognition to systems that validate identity. And identity, unlike cognition, does not like to be challenged.
Which brings us back to Frankl.
If meaning requires resistance, then any system that consistently removes resistance risks not just misleading us, but diminishing our capacity to find meaning at all.
History has always rewarded those who could withstand friction, not avoid it. When Alexander prepared to face the Persians, his general Parmenion advised caution: wait for better conditions, attack at night, reduce risk. Alexander refused. At the Battle of the Granicus, he crossed the river immediately, turning disadvantage into surprise. At Gaugamela, he rejected a night attack, choosing instead a bold, diagonal charge that broke the Persian centre. His victories did not come from being affirmed. They came from being tested and choosing to engage with that test fully.
If we want to build anything of consequence, whether it's markets, organisations, or even our own lives, we do not need systems that constantly agree with us. We need systems, and cultures, that are willing to resist us.
Because when something is so consistently making us feel right, we lose the very instincts that help us be right.
And the next time an algorithm, a model, makes you feel unmistakably understood, unmistakably validated, unmistakably correct… remind yourself that AI is a mirror, not a judge. The choice is yours: you can be Michael, cheered on by Dwight, or Alexander, creating productive friction.