ChatGPT generates cancer treatment plans that are full of errors — study finds that ChatGPT provided false information when asked to design cancer treatment plans. Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.
I mean, people are slightly more complicated than that. But sure, at their most basic, people simply communicate with statistical models.
Ok, maybe slightly :) but it surprises me that the ability to emulate a basic human gets dismissed as "just statistics", since until a year ago it seemed like an impossible task…
The dismissal is coming from the class of people most threatened by these systems.