Google to pause Gemini image generation after AI refuses to show images of White people
www.foxbusiness.com
Google issued an apology and will pause the image generation feature of its artificial intelligence model Gemini after it refused to show images of White people.


@j4k3@lemmy.world

So what. It means they overtrained, deployed, and had to choose between reverting to a model with known issues or training a new one. They probably tried a temporary fix with a LoRA and it failed, so they have to wait for the next big version to finish training, and those runs can take weeks even on massive data-center-class hardware.
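For context on the LoRA fix mentioned above: a LoRA patches a frozen model by adding a small low-rank update to a weight matrix instead of retraining it. A minimal numpy sketch (all sizes and names are illustrative, not Gemini's actual architecture):

```python
import numpy as np

# Frozen base weight matrix of one layer (illustrative sizes).
d_out, d_in, r = 64, 64, 4          # r is the LoRA rank, r << d_in
W = np.random.randn(d_out, d_in)

# LoRA trains only the two small matrices A and B; W itself stays untouched.
A = np.random.randn(r, d_in) * 0.01
B = np.zeros((d_out, r))            # B starts at zero, so the patch begins as a no-op

alpha = 8.0                          # LoRA scaling hyperparameter

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied without modifying W.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = np.random.randn(d_in)
assert np.allclose(lora_forward(x), W @ x)  # untrained patch changes nothing
```

Because only A and B are trained, a LoRA is cheap to produce and easy to revert, which is why it makes sense as a quick fix before a full retraining run.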

People don’t seem to have any fundamental understanding of AI here. It is all static tensor math. There is no persistence or learning inside the model. Any illusion of persistence is due to the loader code that turns your text into math tokens. That is just standard code.
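The "no persistence" point can be made concrete: the model is a pure function of its input, and any apparent memory lives in the wrapper code that re-sends the whole transcript each turn. A toy sketch with a stand-in model function (everything here is illustrative):

```python
def model(prompt: str) -> str:
    # Stand-in for a frozen model: a pure function of its input.
    # Same prompt in, same output out -- nothing inside it changes between calls.
    return f"[reply based on {len(prompt)} chars of context]"

history = []  # persistence lives out here, in ordinary loader code

def chat_turn(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The *entire* transcript is re-fed every turn; the model remembers nothing.
    reply = model("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply
```

Calling `chat_turn` twice with the same message yields different replies only because the transcript grew; the model function itself never changed.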

There is no fundamental difference between an offline AI and a proprietary one like Gemini. One's loader code data-mines you while the other's does not. Training also has a sweet spot. If too much John Oliver is added, everything will generate as John Oliver, like absolutely everything.

@slacktoid@lemmy.ml

There's no such thing as too much John Oliver. This guy doesn't know what he's talking about.

Jo Miran

The black and Asian Third Reich soldiers were pretty funny though.

kingthrillgore

I can see the Reddit data is working out.

Prophet

The guy who leads this group is extremely vocal (almost weirdly so) about white privilege and systemic racism. He is also white. It's true that many AI models have a white bias, and the reasons for it are multi-faceted. Our datasets are grossly imbalanced against racial minorities. My understanding is also that for darker skin tones, it is harder for the model to extract relevant features from the shitty Flickr photos scraped for these models.

That said, injecting words into the user's prompt to force the model to generate minorities more often is an extremely naive approach. It's kind of like Google appending "reddit" to every search just because it helped some specific test cases, while ignoring that you now never get any site except Reddit. The real solution probably looks like paying a lot of money for high-quality datasets, along with investing in user education and better explainability for these tools.
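The naive approach being criticized can be sketched as a blanket rewrite applied before the prompt ever reaches the model. The keyword list and function names below are hypothetical illustrations, not Google's actual implementation:

```python
import random

# Hypothetical blanket rewrite: bolt qualifiers onto every prompt,
# with no awareness of what the prompt is actually asking for.
DIVERSITY_TERMS = ["diverse", "of various ethnicities"]

def naive_rewrite(prompt: str) -> str:
    # The same unconditional transform hits "a 1943 German soldier"
    # and "a stock photo of an office" alike -- that is the failure mode.
    return f"{prompt}, {random.choice(DIVERSITY_TERMS)}"
```

Because the rewrite is unconditional, historically or contextually specific prompts get the same treatment as generic ones, which is exactly the kind of result described in the article.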
