Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

@FooBarrington@lemmy.world

I’ll get the usual downvotes for this, but:

Because the AI doesn’t know anything.

is untrue, because current AI fundamentally is knowledge. Intelligence is fundamentally compression, and that’s what the training process does: it compresses large amounts of data into a much smaller set of weights (and of course loses many details in the process).

But given its ability to reproduce a great number of facts from a comparatively small set of weights, there’s no way to argue that AI doesn’t know anything. Yes, not everything it produces is accurate, and it may never be perfect; I’m not trying to argue that “it will necessarily get better”. But there’s no argument that labels current AI technology as “not understanding” without resorting to a “special human sauce”, because the compression mechanisms behind it are fundamentally the same as those behind our own intelligence.
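
The “intelligence is compression” claim has a concrete information-theoretic reading: a model that assigns probability p to the symbol that actually occurs needs about -log2(p) bits to encode it, so a better predictor compresses the same data into fewer bits. A minimal sketch (toy models and made-up frequencies, nothing here is a real LLM):

```python
import math

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def bits_to_encode(text, predict):
    """Ideal total code length of `text` under a next-character model."""
    total = 0.0
    for i, ch in enumerate(text):
        p = predict(text[:i], ch)  # model's probability of the actual next char
        total += -math.log2(p)     # Shannon code length for that char
    return total

def uniform(context, ch):
    # Knows nothing: every character is equally likely.
    return 1.0 / len(ALPHABET)

def freq_model(context, ch):
    # "Trained" on rough English letter frequencies: a crude stand-in for
    # the regularities that training squeezes out of the data.
    common = {" ": 0.18, "e": 0.10, "t": 0.07, "a": 0.06, "o": 0.06}
    rest = (1.0 - sum(common.values())) / (len(ALPHABET) - len(common))
    return common.get(ch, rest)

text = "the cat ate the oat"
print(bits_to_encode(text, uniform))     # 19 chars x log2(27), about 90.3 bits
print(bits_to_encode(text, freq_model))  # fewer bits: better prediction compresses
```

The same relationship is what training optimizes: minimizing cross-entropy loss is exactly minimizing the number of bits the model needs to encode its training data.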

Edit: yeah, this went about as expected. I don’t know why the Lemmy community has so many weird opinions on AI topics.

This is all the same as saying a book is intelligent.

No, it’s not. It’s saying “a book is knowledge”, which is absolutely true.

@sxt@lemmy.world

Part of the problem with talking about these things in a casual setting is that nobody is using precise enough terminology to approach the issue so others can actually parse specifically what they’re trying to say.

Personally, I think saying the AI “knows” something implies a level of cognizance it doesn’t possess. LLMs “know” things the way an Excel sheet does.

Obviously, if we’re instead saying the AI “knows” things due to it being able to frequently produce factual information when prompted, then yeah it knows a lot of stuff.

I always have the same feeling when people try to talk about aphantasia or having/not having an internal monologue.

I can ask AI models specific questions about knowledge it has, which it can correctly reply to. Excel sheets can’t do that.

That’s not to say the knowledge is perfect - but we know that AI models contain partial world models. How do you differentiate that from “cognizance”?

@rambaroo@lemmy.world

Omg give me a break with this complete nonsense. LLMs are not an intelligence. They are language processors. They do not “think” about anything and don’t have any level of self-awareness that implies cognizance. A cognizant AI would have recognized that the Nazis it was creating looked historically inaccurate, based on its training data. But guess what, it didn’t do that, because it’s fundamentally incapable of thinking about anything.

So sick of reading this amateurish bullshit on social media.

A cognizant AI would have recognized that the Nazis it was creating looked historically inaccurate, based on its training data.

Do you understand that the model was specifically prompted to create “historically inaccurate-looking Nazis”? Models aren’t supposed to inject their own guidelines and rules; they simply produce output for your input. If you tell it to produce a black Hitler, it will produce a black Hitler. Do you expect the model to produce a white Hitler instead?

@thehatfox@lemmy.world

Knowledge is a bit more than just handling data, and in terms of intelligence it also involves understanding. I don’t think knowledge in an intelligent sense can be reduced to summarising data into keywords and expanding it back again.

In those terms an encyclopaedia is also knowledge, but not in an intelligent way.

I’m not saying knowledge is summarising data to keywords, where did you get that?

Intelligence is compression, and the training process compresses data. There is no “summarising” here.

@kromem@lemmy.world

Lemmy hasn’t met a pitchfork it doesn’t pick up.

You are correct. The most cited researcher in the space agrees with you. There have been a half-dozen papers over the past year replicating the finding that LLMs build world models from their training data.

But that doesn’t matter. People love their confirmation bias.

Just look at how many people think it only predicts which word comes next, as if it were a Markov chain, completely unaware of how self-attention works in transformers.
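
The distinction being drawn here can be sketched in plain Python (toy numbers throughout; neither piece resembles a real language model): a bigram Markov chain conditions only on the single previous token, while dot-product attention mixes information from every position in the context.

```python
import math

# --- 1. Bigram Markov chain: P(next) depends only on the last token --------
bigram = {
    "racially": {"diverse": 1.0},
    "diverse": {"nazis": 0.5, "people": 0.5},
}

def markov_next(tokens):
    # Everything before tokens[-1] is invisible to this model.
    return bigram.get(tokens[-1], {})

print(markov_next(["depict", "racially", "diverse"]))
print(markov_next(["historically", "diverse"]))  # identical: context ignored

# --- 2. Single-head dot-product attention over the whole sequence ----------
def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Weighted mix of ALL value vectors, weights from query-key dot products."""
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]

# Toy 2-d embeddings for a 3-token context: the output blends information
# from every position, not just the immediately preceding one.
keys = values = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention([1.0, 1.0], keys, values))
```

The Markov chain gives the same answer no matter what came earlier; attention’s output shifts whenever any vector in the context changes, which is what lets transformers condition on the whole prompt.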

The wisdom of the crowd is often idiocy.

Thank you very much. The confirmation bias is crazy - one guy is literally trying to tell me that AI generators don’t have knowledge because, when you ask one for a picture of racially diverse Nazis, you get a picture of racially diverse Nazis. The facts don’t matter as long as you get to be angry about stupid AIs.

It’s hard to tell a difference between these people and Trump supporters sometimes.

@kromem@lemmy.world

It’s hard to tell a difference between these people and Trump supporters sometimes.

To me it feels a lot like when I was arguing against antivaxxers.

The same pattern of linking and explaining research but having it dismissed because it doesn’t line up with their gut feelings and whatever they read when “doing their own research” guided by that very confirmation bias.

The field is moving faster than any I’ve seen before, and even people working in it seem to be out of touch with the research side of things over the past year since GPT-4 was released.

A lot of outstanding assumptions have been proven wrong.

It’s a bit like the early 20th century in physics, when everyone assumed things that turned out to be wrong, and over a very short period it all turned upside down.

Exactly. They have very strong feelings that they are right, and won’t be moved - not by arguments, research, evidence or anything else.

Just look at the guy telling me “they can’t reason!”. I asked whether they’d accept they were wrong if I provided a counterexample, and they literally couldn’t say yes. Their world view won’t allow it. If I were sure that no counterexamples to my point existed, I’d gladly say “yes, a counterexample would sway me”.
