No mention of Gemini in their blog post on SGE. And their AI principles doc says:

We acknowledge that large language models (LLMs) like those that power generative AI in Search have the potential to generate responses that seem to reflect opinions or emotions, since they have been trained on language that people use to reflect the human experience. We intentionally trained the models that power SGE to refrain from reflecting a persona. It is not designed to respond in the first person, for example, and we fine-tuned the model to provide objective, neutral responses that are corroborated with web results.

So a custom model.


It wasn’t Gemini, but the AI-generated suggestions added to the top of Google Search. And that AI was specifically trained to quote and reference directly from websites, in an effort to minimize the number of hallucinated answers.


I mean, “taking pictures of people who are smiling” is definitely a bias in our culture. How we collectively choose to record information is part of how we encode human biases.

I get what you’re saying in specific circumstances. Sure, a dataset built from a single source doesn’t make its biases universal. But these models were trained on a very wide range of sources, wide enough to cover much of the data we’ve built a culture around.