Google CEO Sundar Pichai says the problems with its AI can't be solved, because hallucinations are an inherent feature of these AI tools.

You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won’t slide off (pssst…please don’t do this.)

Well, according to an interview with The Verge that Google CEO Sundar Pichai gave earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of the large language models (LLMs) that drive AI Overviews, and this feature “is still an unsolved problem.”

@badbytes@lemmy.world

Step 1. Replace CEO with AI. Step 2. Ask New AI CEO, how to fix. Step 3. Blindly enact and reinforce steps

The moment a politician’s kid drinks bleach because of Google’s AI is the moment any regulatory action is taken.

@blazeknave@lemmy.world

Like when they got drafted to Nam

Then it sounds like the “web” tab should be the default and the AI Overview should be the optional tab the user has to choose to go click on.

@jaybone@lemmy.world

But this week’s debacle shows the risk that adding AI – which has a tendency to confidently state false information – could undermine Google’s reputation as the trusted source to search for information online.

🤣🤣🤣

@exanime@lemmy.world

Neither does ChatGPT… they over-hyped this tech so hard, I am afraid they are makers of their own demise…

@unreasonabro@lemmy.world

That’s OK; we were already used to not getting what we wanted from your search, and we’re already working on replacing you, since you opted to replace yourselves with advertising instead of information, the role you were supposed to fulfill and betrayed.

Die in ignominy. Open source is the only way forward.

It is probably the most telling demonstration of the terrible state of our current society that one of the largest corporations on earth, which got where it is today by providing accurate information, is now happy to knowingly provide incorrect, and even dangerous, information in its own name, and not give a flying fuck about it.

@Hackworth@lemmy.world

Wikipedia got where it is today by providing accurate information. Google results have always been full of inaccurate information. Sorting through the links for respectable sources just became second nature, and then we learned to scroll past ads before sorting through links. The real issue with misinformation from an AI is that people treat it like it should be some infallible oracle, a point of view only half-discouraged by marketing with a few warnings about hallucinations. LLMs are amazing; they’re just not infallible. Just like you’d check a Wikipedia source if it seemed suspect, you shouldn’t trust LLM outputs uncritically. /shrug

@Naatan@lemmy.world

Good lord what is wrong with the people in this thread. The guy is literally owning up to the hard limitations of LLMs. I’m not a fan of him or Google either, but hey kudos for being honest this once. The entire industry would be better off if we didn’t treat LLMs like something they’re not. More of this please!

@londos@lemmy.world

But it’s not an isolated R&D project. They’re rolling it out in general search. If I had a promising new braking technology that still only worked 48% of the time, I’d keep working on it, but I wouldn’t put it in production vehicles.

@Naatan@lemmy.world

I think @Microw@lemm.ee has the right idea on how to handle this type of issue. Hopefully they will improve the messaging around this, because I’m getting really tired of explaining to people how what we have is not true AI.

Honestly though this is nothing new for Google, they’ve been providing answers and web results with false information since their inception. You as a user will always need to do some vetting, Google is never going to be able to give you fully accurate information. They’re just sending you to places that may contain more information on the topic you’re searching for. Or at least, that’s how you should use them.

@londos@lemmy.world

I swear I’m not just trying to start an argument, but I don’t see the disagreement here. You’re saying people here are too negative, but people aren’t shitting on the idea of LLMs; they’re shitting on the over-promising of what they can do. You’re tired of explaining that it’s not true AI, but that confusion is caused by Google calling it “AI Overviews.”

You say it’s nothing new and that we’ve always had to vet sources when Google sends us somewhere, which is true, but the Overviews aren’t sending people anywhere, they’re summarizing and trying to give you an answer. They do link to sources for now, but the end goal is clearly that we trust the summary without following the links.

People who are listening to and parsing his comments are not the same people who will be blindly consuming these “AI Overviews.” It’s a problem.

@Naatan@lemmy.world

I’m saying that most of the time, people are correctly complaining about the over-promising that’s happening around AI. Now here’s an example of a CEO acknowledging the limitations, and yes, perhaps still over-promising to some degree. But the fact that we’re seeing actual acknowledgement of the limitations is a positive thing. Change doesn’t happen overnight, but this is a step in the right direction.

I say all this as someone with a strong distaste for modern Google. I actively avoid their services as much as reasonably possible. I’ve tried their AI and found it to be more trouble than it’s worth (what the hell is that control panel?). But I can still recognize a positive change when I see one; my distaste for the company doesn’t change that.

TBH this is surprisingly honest.

There’s really nothing they can do; that’s just the current state of LLMs. People are insane: they can literally talk with something that isn’t human. We are literally the first humans in history to have a human-level conversation with something that isn’t human… and they don’t like it because it isn’t perfect four years after release.

@ameancow@lemmy.world

I can talk to something that’s not human all day long; the question is, does that thing have qualia? Does it experience? Do my words have even the most abstract meaning to it? Most animals experience even if they don’t have language. What LLMs are, are simply mirrors. Very complicated mirrors.

It’s fine, it’s great, it’s a step towards making actual intelligences that experience the world in some way. But don’t get swept up in this extremely premature hype over something that only looks magical because you wildly overestimate and over-essentialize the human being. Let’s have some fucking humility out there in tech-bro land, even just a little. You’re not that special, and the predictive-text programs we’re making are not worth the reverence people are giving them. Yet.

@platypus_plumba@lemmy.world

Does it matter if it is actually experiencing things? What matters is what you experience while talking to it, not what it experiences while talking to you. When you play videogames, do you actually think the NPCs are experiencing you?

It’s pretty insane how negative people are. We did something so extraordinary. Imagine if someone told the engineers who built the space shuttle “but it isn’t teleportation”. Maybe stop being so judgemental of what others have achieved.

“Uhh actually, this isn’t a fully simulated conscious being with a fully formed organic body that resembles my biological structure on a molecular level… Get this shit out of here”

@ameancow@lemmy.world

It’s pretty insane how negative people are.

It’s not negativity; it’s reining in unwarranted faith and adoration. This is a technology, a product largely being made by and for corporate mega-giants who are not going to steer it towards the betterment of anyone or anything. Just like every other technology, it will take decades or more for any of us to see this take the form of the life-changing wonder that too many people already see in it. If you can stop being impressed by how easy it is to mirror human traits back at us, you will let these companies know that you do NOT want advertising AIs in your fucking toaster.

You want the real thing? Then put some pressure where it belongs and don’t be a hype-person for advertising platforms and plagiarism simulators.

@platypus_plumba@lemmy.world

“How easy it is to blah blah blah”… Dude, what the hell are you talking about, there’s nothing easy about this system.

If they release a real AI you’d still dislike it because it was created by a corporation.

@ameancow@lemmy.world

And you’re going to be waiting a suspiciously long time before your life is tangibly, positively impacted by any of this. Nothing is changing for the better any time soon, and many things are going to get worse. The carrot will always be a few years away.

The worst part is I want it to succeed and live up to its promises, but I am old enough to know that as a species we are, and I fucking cannot stress this enough, not that fucking special. We do the same sad shit over and over: this is tech that promises to change the world, but not soon enough to actually help, because they want to make money from it. That’s the hard truth, the real pill you and all the kids watching this shit are going to have to slowly… ever so slowly… swallow.

@platypus_plumba@lemmy.world

I’m already using Copilot every single day. I love it. It helps me save so much time writing boilerplate code that can be easily guessed by the model.

It even helps me understand tools faster than the documentation. I just type a comment and it autocompletes a piece of code that is probably wrong, but probably has the APIs that I need to learn about. So I just go, learn the specific APIs, fix the details of the code and move on.

I use chatgpt to help me improve my private blog posts because I’m not a native English speaker, so it makes the text feel more fluent.

We trained a model with the documentation of our company so it automatically references docs when someone asks it questions.

I’m using the AI from Jira to automatically generate queries and find what I want as fast as possible. I used to hate searching for stuff in Jira because I never remembered the DSL.

I have GPT as a command line tool because I constantly forget commands and this tool helps me remember without having to read the help or open Google.

We have pipelines that read exceptions that would usually be confusing for developers, but GPT automatically generates an explanation for the error in the logs.

I literally ask Chatgpt questions about other areas of technology that I don’t understand. My questions aren’t advanced so I usually get the right answers and I can keep reading about the topics. Chatgpt is literally teaching me how to do front ends, something that I hated my whole career but now feels like a breeze.

Maybe you should start actually figuring out how to use the tool instead of complaining about it in this echo chamber.
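The command-line helper described above is easy to sketch. This is a hypothetical minimal version, assuming an OpenAI-style chat-completions endpoint; the model name, URL, and prompt are illustrative, not any specific product’s setup:

```python
import json

# Assumed endpoint; swap in whatever provider you actually use.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(question: str, model: str = "gpt-4o-mini") -> dict:
    """Wrap a 'what was that command again?' question into a chat request body."""
    return {
        "model": model,
        "messages": [
            # System prompt keeps answers terse enough for a terminal.
            {"role": "system",
             "content": "Answer with a single shell command, no prose."},
            {"role": "user", "content": question},
        ],
        "temperature": 0,  # favor deterministic answers for command recall
    }

# Sending it is just an HTTP POST of json.dumps(build_payload(...)) with an
# Authorization: Bearer header; omitted so the sketch stays runnable offline.
```

Pipe the reply’s message content straight to the terminal and you have the described tool in a dozen lines; the point is how little glue a use case like this needs.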

@mdk_@lemmy.world

This isn’t like talking to a human. It lacks depth, empathy, context, and real knowledge of the questions it answers.

Just try to get more about a topic out of it, asking deeper questions. You will find that it begins writing something that might sound right or helpful but actually isn’t.

All around, it just feels artificial. No emotion, no voice patterns, no body language, no changes in behavior, no reaction to jokes. Sorry, this doesn’t feel real.

If nobody had told you that you were talking to an AI in 2020, you’d have thought it was a person in quick interactions.

The only reason why it doesn’t feel more real is because they literally programmed it to feel the way it does. They didn’t create chatgpt to express emotions, that would be insane.

Yeah, it feels like a much improved version of ELIZA. Much improved, but still software. It doesn’t understand what it’s saying. TBF, though, I know a few humans like that.

What y’all are forgetting is that when it comes to dominating a technology space, historically it’s not about providing the better product, it’s about providing the cheapest/most widely available product, with the goal of capturing enough of the market to get and retain that dominant position. Nobody knows what the threshold is for that until years later, when the dust has settled.

So from Google’s perspective if a new or current rival is going to get there first, then just push it out and fix it live. What are people going to do? Switch to Bing?

So if you want Google to stop doing this dumb broken LLM shite, use the network effect against them. Switch to a different search provider and browser, and encourage all of your friends and family to do so as well.

@pacology@lemmy.world

These models are mad libs machines. They just decide on the next word based on input and training. As such, there isn’t a solution to stopping hallucinations.
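The “decide on the next word” mechanic can be illustrated with a toy sketch. This is a hypothetical word-level transition table, not how any real model is stored; production LLMs work over tokens with billions of parameters, but the generation loop has the same shape, and nothing in it checks truth, only likelihood:

```python
import random

# Toy "model": next-word probabilities, as if learned from training text.
NEXT_WORD = {
    "the":    {"cheese": 0.5, "pizza": 0.5},
    "cheese": {"slides": 0.7, "melts": 0.3},
    "slides": {"off": 1.0},
}

def generate(start: str, steps: int, seed: int = 0) -> list[str]:
    """Sample a continuation word by word; stop when no transition exists."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(steps):
        dist = NEXT_WORD.get(words[-1])
        if dist is None:
            break  # the model has nothing more to say
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return words
```

Whether the sampled sentence is true never enters the loop, which is exactly why “stop hallucinating” isn’t a switch anyone can flip.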

@blazeknave@lemmy.world

I use it like crazy, but I never forget it’s just a heavy duty version of keyboard next word suggestions

Yeah no shit, that’s what LLMs do

The model literally ate The Onion, and now they can’t get it to throw it back up.

@Granite@lemmy.world

I love this wording, because it’s so true

@StaySquared@lemmy.world

Maybe Google should put a disclaimer… warning people it’s not 100% accurate. Or… just take down the technology because clearly their AI is chit tier.

Pumpkin Escobar

Rip up the Reddit contract and don’t use that data to train the model. It’s the definition of a garbage in garbage out problem.

@unreasonabro@lemmy.world

mithtaketh were made

@Sam_Bass@lemmy.world

Myth takes were made
