I for one welcome the ability to:
You may now begin the downvotes. (Even if you're wrong, I respect your opinions.)
Sorry, this is just a screenshot from the web, but the advanced menu is just 20 or so toggles for the PC, and it's definitely missing a few options. I forget what the setting is called at the moment, but I'm trying to force a DisplayPort output to run through the GPU instead of the integrated graphics; I've done it before successfully.
Well, I see there's a way to get to the "advanced" settings by pressing Ctrl+Left Alt+Shift+F2, but it's been doing nothing for me, even after going back into Windows and unlocking the function keys.
And I have flashed straight from the BIOS offered on the computer's specifications page and confirmed that it matched, so I'm really not sure what's going on here, or why the BIOS on their downloads page is an older version than what shipped with the laptop. I know sure as hell the original owner, my dad, wasn't the one who upgraded it.
Tbh I think you're making a lot of assumptions and ignoring the point of this paper. The small model was used to quickly demonstrate generative degradation over iterations when the model was trained on its own output data; OPT-125M was chosen precisely because its small size let them show the phenomenon in fewer iterations. The point still stands: this shows the data-poisoning effect is real, and a model being much bigger doesn't make it immune, it just means the effect takes longer to appear. I suspect that with companies continually scraping the web for training data, including Reddit, which this article mentions has struck a deal with Google to let its models train on Reddit content, this process won't actually take that long, as more and more Reddit posts become AI-generated themselves.
I think it's a fallacy to assume that a giant model is therefore "higher quality" and resistant to data poisoning.
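If it helps, here's a toy sketch I threw together (my own illustration, not anything from the paper) of why size only delays this. Fit a distribution to some data, sample from the fit, refit on those samples, and repeat: the tails get dropped and the whole thing shrinks toward nothing. A bigger sample per generation (read: bigger model, more data) just slows the collapse down, it doesn't stop it.

```python
# Toy illustration (mine, not the paper's code) of recursive training:
# each "generation" fits a Gaussian to the previous generation's samples,
# then the next generation is trained only on samples from that fit.
# Estimation error compounds, and the fitted distribution degenerates.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                    # samples per generation (small on purpose)
data = rng.normal(0.0, 1.0, size=n)       # generation 0: "real" data

for gen in range(1, 51):
    mu, sigma = data.mean(), data.std()   # "train" a model on the current data
    data = rng.normal(mu, sigma, size=n)  # next generation sees only model output
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
# The fitted std drifts toward zero and rare/tail values stop appearing:
# the same tail-loss dynamic the paper shows with OPT-125M, in miniature.
```

Run it and watch the std column: it wanders but trends downward, and bumping n up only makes the decline slower, never zero.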
We used to use this in home ec class!