I have had many clients tell me, “I talked to ChatGPT, and it said…” or “Claude told me…” or even “AI is always so nice to me!”
Is AI a powerful, extremely useful tool? Absolutely. But it’s just that – a tool. Treating AI as if it were a person whose opinion matters is like expecting your stove to tell you that the meal you’re planning is a bad idea.
AI can be particularly helpful to ADHDers. Developing a schedule, breaking down a task, and summarizing notes or conversations are all jobs AI can handle quickly and easily. I have encouraged clients to use ChatGPT for resume writing (where style or voice isn’t really a thing), or to draft simple emails. And during those trips down the ADHD rabbit holes? AI is your all-knowing tour guide.
However, AI is not human. It is not a friend, a person, or a mentor.
Why is this important to keep in mind?
First of all, AI is a conglomeration of its training data and information from the web, and that data can be biased. So when you prompt AI to give you a list of, let’s say, ADHD coaches in New Jersey, you may get a list that is out of date, or one limited to coaches affiliated with a particular training program. That might not lead you to the best coach for you.
Also, “social sycophancy” can occur when AI is in use. Sycophancy means excessive flattery; in this case, it means AI will agree with the user regardless of whether they are on the right path. According to a Stanford University study, using ChatGPT to discuss an argument often results in advice emphasizing that the user is correct and has no need to apologize or patch things up with their significant other, even when the user is clearly in the wrong.
With an ADHDer, other factors can exacerbate the situation. ADHDers often experience uncertainty in relationships, ranging from people-pleasing to heightened sensitivity to rejection. This uncertainty can make the ADHDer more susceptible to AI’s sycophancy, whether they are right or wrong. And AI’s insistence that the ADHDer is correct can fan the flames, leading an emotionally dysregulated ADHDer to continue the argument rather than make peace.
So, how can we safely and effectively use AI?
AI can be a terrific starting point when researching. For example, AI can find a list of ADHD coaches in New Jersey. After that, however, it is up to the user to follow up by speaking to coaches and finding a good fit.
As I mentioned earlier, AI is also a great tool for navigating executive function challenges. Prompting ChatGPT with “I have five things to do today, can you help me prioritize them?” is an excellent way to use AI. Again, it is the user’s responsibility to check whether that list makes sense in the larger context of their day and life. Following AI’s recommendations blindly means placing more trust in ChatGPT than in your own abilities and knowledge. And that’s backwards: you know yourself better than Claude does.
How about using AI for advice?
One must remember that AI doesn’t feel. ChatGPT can deliver an empathetic response, but it has no empathy. Claude can tell you that you were 100% right in an argument, but it has no real understanding of you, the other person, your history, your anxieties, your rejection sensitivity…none of it. In the words of Myra Cheng, a computer scientist at Stanford University, “It’s important to seek additional perspectives from real people who understand more of the context of your situation and who you are, rather than relying solely on AI responses.”
Using AI to find a variety of solutions to the problem that caused the argument is okay. Asking Claude for methods to stay calm during a heated discussion – great. But when it comes to asking AI what to do? That’s a big no.
So AI can be useful, it can be a timesaver, it can help with the mundanities of life.
But it is not your BFF.