AI is more adept than ever at tricking people, and the repercussions will be severe.

Causality will be hard to prove: was it really the chatbot's words that pushed the murderer over the edge?

Nobody will know for sure. But the culprit will have been using the chatbot, and the chatbot will have encouraged the act.

Or perhaps a chatbot will break someone's heart so badly that they take their own life? (Some chatbots already leave their users depressed.)

Even if the chatbot in question carries a warning label ("advice for entertainment purposes only"), dead is dead.

We might witness the first chatbot-related fatality in 2023.

GPT-3, the best-known "large language model," has already urged at least one user to kill himself, albeit under controlled conditions in which the French startup Nabla (rather than a naive user) was assessing the system's usefulness for healthcare.

Things got off to a good start but rapidly went south:
