A shocking story was promoted on the “front page” or main feed of Elon Musk’s X on Thursday:
“Iran Strikes Tel Aviv with Heavy Missiles,” read the headline.
This would certainly be a worrying development in world news. Earlier that week, Israel had conducted an airstrike on Iran’s embassy in Syria, killing two generals as well as other officers. Retaliation from Iran seemed plausible.
But there was one major problem: Iran did not attack Israel. The headline was fake.
Even more concerning, the fake headline was apparently generated by X’s own official AI chatbot, Grok, and then promoted by X’s trending news product, Explore, on the very first day of an updated version of the feature.
Nah, this problem is actually too hard to solve with LLMs. They don’t have any structure or understanding of what they’re saying, so there’s no way to write better guardrails… unless you build some other system that tries to make sense of what the LLM says, but that approaches the difficulty of just building an intelligent agent in the first place (a rough sketch of what that kind of checker might look like is below).
So no, if this law came into effect, people would just stop using AI. It’s too cavalier. And imo, they probably should stop for cases like this unless there’s direct human oversight of everything coming out of it. Which, also, probably just wouldn’t happen.
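For illustration, here is a minimal sketch (in Python) of the kind of verifier the comment above gestures at: a post-hoc check that an LLM-generated headline is actually grounded in the source posts it was summarizing. The function name, data structure, and matching rule are all made up for this example; it is a toy under the assumption that the only available signal is the set of source posts, not a real guardrail.

```python
import re
from dataclasses import dataclass


@dataclass
class GuardrailResult:
    ok: bool
    unsupported_terms: list[str]


def check_headline_against_sources(headline: str, sources: list[str]) -> GuardrailResult:
    """Hypothetical post-hoc check: every proper noun or number in the
    generated headline must appear somewhere in the source posts the model
    summarized. A crude stand-in for "making sense of what the LLM says" --
    it can catch a strike nobody mentioned, but it has no understanding of
    the claim itself."""
    corpus = " ".join(sources).lower()
    # Pull out the "load-bearing" tokens: capitalized words and numbers.
    key_terms = re.findall(r"\b(?:[A-Z][a-z]+|\d+)\b", headline)
    unsupported = [t for t in key_terms if t.lower() not in corpus]
    return GuardrailResult(ok=not unsupported, unsupported_terms=unsupported)


if __name__ == "__main__":
    headline = "Iran Strikes Tel Aviv with Heavy Missiles"
    sources = [
        "Joke post: sirens in my head after reading the news today",
        "Israel hit Iran's embassy compound in Damascus earlier this week",
    ]
    print(check_headline_against_sources(headline, sources))
    # GuardrailResult(ok=False, unsupported_terms=['Strikes', 'Tel', 'Aviv', 'Heavy', 'Missiles'])
```

Even this toy flags the “Iran Strikes Tel Aviv” headline, because none of its key terms appear in the sources, but anything subtler slips straight through; to actually check the claim you need a system that understands it, which is roughly the commenter’s point.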