ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans::Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s revolutionary chatbot were full of errors.
People really need to get it into their heads that AI can “hallucinate” random information, and that any implementation of an AI needs a qualified human overseeing it.
Exactly, it’s stringing together information in a series of iterations, each time adding a new inference consistent with what came before. It has no way to know if that inference is correct.
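For anyone curious, the loop described above is roughly this. A toy sketch, not OpenAI’s actual code — the `next_token_probs` table stands in for the real neural network, which scores candidates only by plausibility, never by truth:

```python
import random

def next_token_probs(context):
    # Stand-in for the real model: a hypothetical lookup table that
    # scores each candidate token purely by how plausibly it follows
    # the context so far. (ChatGPT's actual model is a transformer
    # with billions of parameters, but the loop around it is the same.)
    table = {
        ("the",): {"cat": 0.6, "dog": 0.4},
        ("the", "cat"): {"sat": 0.7, "ran": 0.3},
        ("the", "dog"): {"sat": 0.5, "ran": 0.5},
        ("the", "cat", "sat"): {"<end>": 1.0},
        ("the", "cat", "ran"): {"<end>": 1.0},
        ("the", "dog", "sat"): {"<end>": 1.0},
        ("the", "dog", "ran"): {"<end>": 1.0},
    }
    return table[tuple(context)]

def generate(prompt, max_tokens=10):
    # Autoregressive loop: each new token is drawn from "what
    # plausibly comes next" given everything generated so far.
    # Nothing here ever checks an inference against reality.
    context = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(context)
        tokens, weights = zip(*probs.items())
        tok = random.choices(tokens, weights=weights)[0]
        if tok == "<end>":
            break
        context.append(tok)
    return context
```

Every step only conditions on the previous steps, which is why a confident-sounding wrong inference early on just gets built upon rather than corrected.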
Why would you even consider using ChatGPT for this?
These studies are for the people out there who think ChatGPT thinks. It’s a really good email assistant, and it can even get basic programming questions right if you are detailed with your prompt. Now everyone stop trying to make this thing like Finn’s mom in Adventure Time and just use it to help you write a long email in a few seconds. Jfc.
I’m going to need it to turn those emails back into the bullet points used to create them, so I don’t have to read the filler.
It can leave the bullets in if you tell it to.
When did they ever claim it was able to?
Idk, but the title sure sounds like it would generate a bunch of clicks 🥰
GPT has been utter garbage lately. I feel as though it’s somehow become worse. I use it as a search engine alternative and it has RARELY been correct lately. I will respond to it, telling it that it is incorrect, and it will keep generating even more inaccurate answers… It’s gotten to the point where it’s almost entirely useless, when it used to at least find some of the correct information.
I don’t know what they did in 4.0 or whatever it is, but it’s just plain bad.
Well, it’s a good thing absolutely no clinician is using it to figure out how to treat their patient’s cancer… then?
I imagine it also struggles when asked to go to the kitchen and make a cup of tea. Thankfully, nobody asks this, because it’s outside of the scope of the application.
The fear is that hospital administrators equipped with their MBA degrees will think about using it to replace expensive, experienced physicians and diagnosticians.
If that were legal, I’d absolutely be worried; you make a good point.
Even doctors need special additional qualifications to do things like diagnose illnesses via radiographic imagery, etc. Specialised AI is making good progress in aiding these sorts of things, but a generalised and very poor AI like ChatGPT will never be legally certified to do this sort of thing.
Once we have a much more effective generalised AI, things will get more interesting. It’ll have to prove itself thoroughly though, before being certified, so it’ll still be a few years after it appears before we see it used in clinical applications.
The computer science classroom in my high school had a poster stating: “Garbage in, garbage out.”
I am shocked.
Considering ChatGPT’s training data cuts off in 2021, I would not ask it for current medical advice.
Was this article summary written by chatgpt?
I thought it released in 2021. Maybe it was on the cusp. I was basically using it to find what I couldn’t seem to find in the docs. It’s definitely replaced my rubber ducky, but I still have to double-check it after my Unity experience.
Why is anyone surprised by this? It’s not meant to be your doctor.
I suppose most sensible people already know that ChatGPT is not the answer for medical diagnosis.
If the researchers wanted to investigate whether an LLM could be helpful here, they should have fine-tuned a GPT-4/3.5 model specifically on cancer treatment plans and tested that thoroughly, rather than just entering prompts into the off-the-shelf model OpenAI makes available.
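For what it’s worth, the first step of that would look something like the sketch below — OpenAI’s fine-tuning API accepts JSONL files of chat-format records. The example conditions, plans, system prompt, and file name are all made up for illustration; check the current API docs before relying on the exact schema:

```python
import json

# Hypothetical placeholder data -- in a real study these would be
# vetted, oncologist-approved treatment plans, not stand-in strings.
examples = [
    {"condition": "example condition A", "plan": "example vetted plan A"},
    {"condition": "example condition B", "plan": "example vetted plan B"},
]

def to_finetune_record(ex):
    # Chat-format record, one per line of the JSONL training file
    # (the format introduced with gpt-3.5-turbo fine-tuning).
    return {
        "messages": [
            {"role": "system",
             "content": "You draft cancer treatment plans for expert review."},
            {"role": "user", "content": ex["condition"]},
            {"role": "assistant", "content": ex["plan"]},
        ]
    }

def write_jsonl(path, exs):
    # One JSON object per line, as the fine-tuning endpoint expects.
    with open(path, "w") as f:
        for ex in exs:
            f.write(json.dumps(to_finetune_record(ex)) + "\n")

write_jsonl("treatment_plans.jsonl", examples)
```

The file would then be uploaded and a fine-tuning job started through the API — and even a tuned model’s output would still need clinician review, per the whole point of this thread.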
No duh - why would it have any ability to do that sort of task?
Part of the reason for studies like this is to debunk people’s expectations of AI’s capabilities. A lot of people are under the impression that ChatGPT can do ANYTHING and can think and reason, when in reality it is a bullshitter that does nothing more than mimic what it thinks a suitable answer looks like. Just like a parrot.
“Hey, program that is basically just regurgitating information, how do we do this incredibly complex thing that even we don’t understand yet?”
“Here ya go.”
“Wow, this is wrong.”
“No shit.”