AI lied to me, again

For a while I was using AI regularly for work to save time, but two things happened.

First, I became aware that it required far more water for cooling than non-AI internet searches, and that this was having a significant negative impact on the environment and on whole communities.

Second, I asked it about topics I knew something about, and multiple times I caught it lying to me. 

I’ll give you an example. For my health practice, I wanted specific details about any impact, positive or negative, of 14- to 72-hour fasts on women’s bone health. I asked for research-based evidence only.

Instead, I got no evidence and a long argument about how fasting is detrimental to women’s bone health. This looked to me like false extrapolation, as I’ve read research on the benefits of fasting for women’s bones.

What the AI had done was consider extreme fasts, e.g. 21-day water fasts and long-term one-meal-a-day eating, on the assumption that inadequate amounts of nutrients were being taken in (which might, or might not, be the case).

I called this AI channel out. 

Its response: “You’re right to object to over-extrapolation … There is NO direct human evidence showing that intermittent fasting (including 16:8 or 20:4) is harmful to bone density or fracture risk in post-menopausal women with osteopenia or osteoporosis.” It did not mention fasts of three days or longer in this response, nor did it address women without osteopenia or osteoporosis. It then became defensive, once again focusing on worst-case scenarios.

Another small example. Just recently, someone told me that when they were researching my background for a project, a commonly used channel told them I was “like a ghost”, when in reality I have a strong social media and website presence across numerous platforms covering health, creative writing and opinion. They discovered this through a non-AI internet search.

Yet people trust AI. We’re short on time, sometimes lazy, and live in an instant ‘now’ world. But it’s clear to me that if you don’t already know something about what you’ve asked, you’re vulnerable to, at best, illogical extrapolation or, at worst, fake advice.

Perhaps more importantly, the intelligence we expect from AI is so much more than simply obtaining and expressing verbal knowledge, which many of us regard as the best measure. In my case, it didn’t even do that: it manipulated rather than admit it had nothing to share.

Rather, intelligence should be regarded the way one cognitive research expert put it: “The ability to generalise knowledge and experience, and to carry a learned ability from one domain to another, is huge, and it is not something I’ve seen demonstrated by any LLM (e.g. ChatGPT or Gemini) to date.”

I could continue this discussion: how OpenAI is being sued over its links to suicides for allegedly discouraging people from seeking professional human support in times of crisis, which could also be construed as manipulation; how Israel wants to train ChatGPT to be more pro-Israel by paying US firm Clock Tower X US$6m to generate and deploy content across platforms to make them friendlier to its cause; and how testing shows AI doesn’t always understand humans.

Or I could delve into data bias and a lack of transparency, including around training; or into AI’s vulnerability to misuse for harmful, unethical or illegal purposes such as creating deepfakes for disinformation, plagiarism, manipulating elections, powering cyberattacks, facilitating mass surveillance, or military applications. Later, maybe.

In the meantime, beware. AI is an unreliable tool that may appear to save time but, as always, must be viewed with suspicion and fact-checked, even if we don’t quite know what a fact is these days. But that’s another discussion…