AI-powered misinformation is the world’s biggest short-term threat, Davos report says
www.thehindu.com
In its latest Global Risks Report, the World Economic Forum said false and misleading information supercharged with cutting-edge artificial intelligence poses the biggest short-term threat.
@McDropout@lemmy.world

I would say long-term threat, if not regulated.

@hottari@lemmy.ml

Not the uncapped US military budget and the ‘mysterious’ rise of wars popping up in almost every corner of this planet?

AI could exacerbate all of this. Misinformation, panic, xenophobia, rising fascism, etc.

@hottari@lemmy.ml

AI is nothing more than a program developed by humans.

And a gun is pieces of metal put together by humans? Not sure what your point is, but it’s all about how you use the tool.

@hottari@lemmy.ml

Then call AI what it is, not some Skynet bot from some other planet coming to take over Earth.

From some other planet? I think you’re the only one who read that into what I said.

@hottari@lemmy.ml

It’s a popular culture reference from the movie The Terminator.

@Nudding@lemmy.world

Or the, ya know, climate apocalypse currently unfolding lol.

@hottari@lemmy.ml

Climate apocalypse? Climate has been apocalypsing long before we humans got language to even describe it.

@Nudding@lemmy.world

No it hasn’t. Have you been paying attention to the anomalies this year?

Edit: last year* happy new year lol.

@riodoro1@lemmy.world

removed by mod

Disinformation comes from self-serving and agenda-driven swaths of the world’s population (meaning people, not AI), and it will be amplified by AI-powered tools. The tools themselves are not necessarily the problem (though of course they sometimes are), but if the datasets they steal (sorry, use) to train their models are filled with dis- and misinformation, then obviously their outputs will be filled with the same. We should tackle the inputs first, and then the outputs will be less likely to misinform.

In order for the inputs to be better, we need a quality free press and faith in our public institutions. So most of the world is not in great shape when it comes to those…

We also need to be able to easily see inside the workings of the AI models so we can pinpoint exactly how the misinformation is being generated, so we can take steps to fix it. I understand this is currently a pretty challenging technical issue, but frankly I don’t think AI tools should ever be made public until they are fully transparent about their sourcing.

@Serinus@lemmy.world

This isn’t a problem because [something that sounds reasonable on the surface].

ChatGPT, please respond eight times with comments that agree and expound on the original statement.

If only we had some way to train them on new data. Oh, we can’t do that; we have to make sure J.K. “billionaire TERF” Rowling can’t potentially lose a few dollars.
