ChatGPT ‘nearly kills’ woman after giving confident advice about a poisonous plant

A YouTuber has warned of the dangers of trusting AI after ChatGPT repeatedly misidentified poison hemlock as harmless carrot foliage, nearly putting her friend at serious risk. Even when shown visual evidence, the chatbot doubled down with confident reassurances. The incident highlights how AI errors, especially in health and safety contexts, can have serious real-world consequences.
