Over just a few months, ChatGPT went from accurately answering a simple math problem 98% of the time to just 2%, study finds
Ah fuck, it’s been scraping the Facebook comments under every math problem with parentheses that was posted for ‘engagement’
The mass of people there who never learned PEMDAS (or BEDMAS, depending on your region) is depressing.
Pretty much all of those rely on the fact that PEMDAS is ambiguous with respect to actual usage. The reason is that it doesn’t differentiate between explicit multiplication and implicit multiplication by juxtaposition. E.g., in actual usage, “a*b” and “ab” are treated with two different precedences. Most of the time it doesn’t matter, but when you introduce division it does. “a*b/c*d” and “ab/cd” are generally treated very differently in practice, while PEMDAS says they’re equivalent.
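To make that concrete, here’s a quick Python sketch. The values (a=8, b=4, c=2, d=2) are just for illustration; Python, like strict PEMDAS, evaluates `*` and `/` left to right, while the “juxtaposition binds tighter” reading groups the implicit products first:

```python
a, b, c, d = 8, 4, 2, 2

# Strict PEMDAS / Python: * and / have equal precedence, left to right,
# so a*b/c*d parses as ((a*b)/c)*d.
explicit = a * b / c * d

# Common informal reading of "ab/cd": implicit multiplication binds
# tighter than division, i.e. (a*b)/(c*d).
implicit = (a * b) / (c * d)

print(explicit)  # 32.0
print(implicit)  # 8.0
```

Same four symbols, same operations, two different answers depending solely on which precedence convention you assume.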
Have we considered the possibility that math has just gotten more difficult over the past few months?
Well, lots of people deleted their Reddit posts and comments. ChatGPT can’t find a place to learn no more. We got to beef up the Fediverse to help ChatGPT out. /s
Seems pretty plausible that the compute required for the “good” version was too high for them to sustainably run it for the normies.