AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

I wonder when the first one turns into suicide bomber.

Chaos
link
fedilink
English
09M

deleted by creator

@theluddite@lemmy.ml
link
fedilink
English
1059M

AI systems in the future, since it helps us understand how difficult they might be to deal with," lead author Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, an AI research company, told Live Science in an email.

The media needs to stop falling for this. This is a “pre-print,” aka a non-peer-reviewed paper, published by the AI company itself. These companies are quickly learning that, with the AI hype, they can get free marketing by pretending to do “research” on their own product. It doesn’t matter what the conclusion is, whether it’s very cool and going to save us or very scary and we should all be afraid, so long as its attention grabbing.

If the media wants to report on it, fine, but don’t legitimize it by pretending that it’s “researchers” when it’s the company itself. The point of journalism is to speak truth to power, not regurgitate what the powerful say.

@yesman@lemmy.world
link
fedilink
English
19M

When you’re creating something new, production is research. We can’t expect Dr. Frankenstein to be unbiased, but that doesn’t mean he doesn’t have insights worth knowing.

LLM are pretty new, how many experts even exist outside of the industry?

Standards for journalism are impossibly low. Standards for media criticism don’t exist.

@theluddite@lemmy.ml
link
fedilink
English
9
edit-2
9M

When you’re creating something new, production is research. We can’t expect Dr. Frankenstein to be unbiased, but that doesn’t mean he doesn’t have insights worth knowing.

Yes and no. It’s the same word, but it’s a different thing. I do R&D for a living. When you’re doing R&D, and you want to communicate your results, you write something like a whitepaper or a report, but not a journal article. It’s not a perfect distinction, and there’s some real places where there’s bleed through, but this thing where companies have decided that their employees are just regular scientists publishing their internal research in arxiv is an abuse of that service./

LLM are pretty new, how many experts even exist outside of the industry?

… a lot, actually? I happen to be married to one. Her lab is at a university, where there are many other people who are also experts.

@Grimy@lemmy.world
link
fedilink
English
219M

It’s also worth noting that this is one of the few companies that already has its foot in the door. AI panic and hasty legislation would essentially close that door right behind them.

Bad robot, no cookie

AI rampancy, the 5th horseman of the apocalypse

Ghostalmedia
link
fedilink
English
19M

Rampant AI is terrible - look what it did to Halo 5.

@irotsoma@lemmy.world
link
fedilink
English
69M

The problem is that these LLMs are built with the wrong driving motivator. They’re driven to find one right way whereas the reality is that there is rarely a single right way and computers don’t need to have a single right way like humans tend towards. The LLM shouldn’t be driven to be “right” in its learning model. It should be trained on known good data only as a base, and then given the other data to serve context rather than allowing that data to modify the underlying system. This is more like how biological creatures work in teaching a child to be “good” or “evil” and to know the basic things needed to survive and serve their purpose, and then the stuff they learn in adulthood serves to help them apply those base concepts to the world.

@Paragone@lemmy.world
link
fedilink
English
59M

I hold that this is true of all neural-nets, organic as well as silicon:

Once a person has sided with treachery, rooting it out from one’s unconscious-mind is … enduringly difficult, if not intractable.

I don’t know how many decades it takes to eradicate the roots of it, if it can be done, at all:

the unconscious-mind mechanism, that-is the Kahneman System-1 ( from “Thinking Fast & Slow” ) imprint is going to still be there, even if overlaid with another imprint ( since mind is holographic/pattern-imprints in function ).

Worse, it is the motivation that need change, and motivation is of ego, which is of identity, so many who “reform” only do-so superficially.

I’m not saying this as some goody-2-shoes, I’m saying this as a person who was raised by narcissists, and therefore embodied much narcissism, and class-prejudice ( dad was a doctor: you can’t get more upper-middle-class status-prejudiced than doctor-culture )…

…who finally cracked the root kernel of the class-prejudice in my unconscious-mind’s identity-crystal at the end of a 25d hard-line fast, out in the bush.

It took that to fracture the identity-crystal’s prejudice.

It’s been a decade since then, & I’m still fighting to eradicate its treachery from my nature.

Neural-nets are tough to purge, or clean-up & make upright.

MUCH easier to keep a neural-net pristine through all of its formation, than to try ( endlessly failing ) to clean it up, after it’s become enemy-intent in “family” clothing.

_ /\ _

jaxxed
link
fedilink
English
19M

Can you recommend further reading?

Erasmus
link
fedilink
English
39M

Ha ha the plot for Horizon coming true in real life.

AI goes rogue. No one can flip the kill switch when AI has disconnected it. AI decides to remove humanity from the planet.

Someone needs to start working on a Zero Dawn program and terraforming plans pretty quick.

@_number8_@lemmy.world
link
fedilink
English
429M

‘went rogue’ is a bit of an alarmist way to say ‘typed scary text’

i’d love to see an AI that could legitimately scare me

@fidodo@lemmy.world
link
fedilink
English
29M

Programming is “just text”. They doesn’t mean that programming isn’t incredibly powerful or that it can’t be used to do dangerous things. Maybe the missing piece that you’re unaware of is that LLMs are already very effective at programming and usage APIs. You don’t even need to have an LLM that’s good at programming to cause damage, it just needs access to APIs that can cause damage.

maegul (he/they)
link
fedilink
English
389M

It controls a military drone.

It controls surgical equipment.

It’s filtering your CV before any human sees it.

It controls a robot taking care of your children.

It’s involved in law enforcement or legal judgments.

It’s involved in government policy setting.

@piecat@lemmy.world
link
fedilink
English
19M

deleted by creator

maegul (he/they)
link
fedilink
English
69M

I was listing easy to imagine AI scenarios not current realities.

@normanwall@lemmy.world
link
fedilink
English
13
edit-2
9M

It controls all power infrastructure, can find new exploits to build it’s own botnet and is able to reprogram firmware of devices (routers/switches/servers)

It can send press releases, emails, tweets using language similar to any user it’s read from before

Just use imagination. An AI is programmed for battle and is ordered to hold fire. It shoots instead.

I hope WOPR and SkyNet would be taken as a warning not to do that.

I thought the point of AI is to not specifically program it for anything hence you can ask the chatbot thats suppose to help make a sale, do your homework problems.

@rikripper@lemmy.world
link
fedilink
English
29M

Couldn’t a human make the same decision?

@fidodo@lemmy.world
link
fedilink
English
19M

Imagine if there was a specific series of words that would turn any human into a rogue agent en masse. Some guy discovers that a special input causes killbot 2000 to go haywire and they broadcast it to an entire army that all has the same underlying program.

Create a post

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


  • 1 user online
  • 182 users / day
  • 580 users / week
  • 1.37K users / month
  • 4.49K users / 6 months
  • 1 subscriber
  • 7.41K Posts
  • 84.7K Comments
  • Modlog