Can AI Resolve Your Fights? Interview with TechRadar
- Jordan Conrad
- Mar 18
- 2 min read
Updated: Mar 18

Arguments are a painful, but unavoidable, part of relationships. One of the most common features we see in couples therapy is the desire to “win” the argument – to get your partner to acquiesce and acknowledge that you were right all along. So perhaps it is no surprise that some couples are turning to ChatGPT and other large language models (LLMs) to adjudicate their arguments.
When TechRadar was looking for a couples therapist in NYC with an understanding of AI and mental health, they reached out to Madison Park's founder and clinical director, Jordan Conrad, PhD, LCSW. In "Should you use ChatGPT to win an argument? I spoke to mental health and relationship experts to find out," Jordan sat down with Becca Caddy to discuss what AI can, and cannot, do, and its intrusion into romantic relationships.
“The biggest problem with using AI to 'win' arguments is that it demonstrates that you are not 'in it' together – you are trying to win, not resolve the issue. That is a big red flag,” Jordan explains. Although it sounds corny, Jordan says “in relationships, arguments have to be framed as 'you and me vs. the problem,' not 'you vs. me.' If one person 'wins' and the other 'loses,' then you both lose.” That is because in a relationship you have to be a unit, a team. When your teammate scores, you get the point. That has to be where you’re at in your relationship.
However, there is another problem with using LLMs to win an argument. Not only is the motivation behind their use a problem, but LLMs are also more limited in what they can do than people realize. “It is not at all clear, at this stage, if AI can genuinely win an argument and not simply do what it is programmed to do, which is provide the statistically most likely word in a sequence,” Jordan says. “A calculator does not actually know arithmetic, it just solves the problem, and similarly, Microsoft Word does not actually know English grammar,” Jordan explains. In the same way, AI does not actually know anything at all; it is just following its programming.
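To make the “statistically most likely word” point concrete, here is a deliberately tiny sketch, not anything like a real LLM's architecture: a toy bigram model that simply emits whichever word most often followed the previous one in its training text. The corpus and function names are illustrative assumptions, but the core idea matches Jordan's point: the program predicts likely continuations without understanding anything.

```python
from collections import Counter, defaultdict

# Toy training text (illustrative only).
corpus = "you and me vs the problem not you vs me".split()

# For each word, count which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))      # "problem" – the only word seen after "the"
print(most_likely_next("unknown"))  # None – no statistics, no answer
```

The model has no concept of grammar, truth, or who is right in an argument; it only reports frequencies. Real LLMs are vastly more sophisticated, but the prediction-not-comprehension contrast Jordan draws is the same.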
Part of the problem comes from context. Jordan provides an example: “A romantic partner and a drunken stranger saying the same words convey entirely different meanings. Likewise, a person with a history of trauma will interpret ‘I love you’ differently from someone with secure attachments.”