@Phegan@lemmy.world

It blows my mind that these companies think AI is good as an informative resource. The whole point of generative text AIs is to make things up based on their training data. They don’t learn, they generate. It’s all made up, yet they want to slap it onto a search engine as if it provided factual information.

It’s like the difference between being handed a grocery list by your mum and trying to remember what your mum usually sends you to the store for.
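
To make the “it generates, it doesn’t learn” point concrete, here’s a toy sketch of my own (a bigram sampler, nothing like a production LLM): a text generator only ever produces statistically plausible continuations of its training data, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# Toy bigram "language model" trained on a handful of sentences.
# Production LLMs are vastly more complex, but the core point holds:
# the model samples continuations that are statistically plausible
# given its training data. It has no concept of truth.
corpus = ("owls are endangered . owls are birds . "
          "birds are not endangered .").split()

nexts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    nexts[prev].append(nxt)

word, out = "owls", ["owls"]
for _ in range(6):
    word = random.choice(nexts[word])  # any continuation seen in training
    out.append(word)

print(" ".join(out))  # e.g. "owls are not endangered . owls" -- fluent, possibly false
```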

@platypus_plumba@lemmy.world

It really depends on the type of information you’re looking for. Anyone who understands how LLMs work will also have a sense of when they’re likely to get a good overview.

I usually see the results as quick summaries from an untrusted source. Even if they aren’t exact, they can help me get perspective. Then I know what information to verify if something relevant was pointed out in the summary.

Today I searched something like “Are owls endangered?”. I knew I was about to get a great overview because it’s a simple question. After getting the summary, I just went into some pages and confirmed what the summary said. The summary helped me know what to look for even if I didn’t trust it.

It has improved my search experience… But I do understand that people would prefer it to be 100% accurate, because it is a search engine. If you refuse to tolerate inaccurate results, or you feel your search experience is worse, you can just disable it. Nobody is forcing you to keep it.

I think the issue is that most people aren’t that bright and will not verify information like you or me.

They already believe every facebook post or ragebait article. This will sadly only feed their ignorance and solidify their false knowledge of things.

@platypus_plumba@lemmy.world

These are the same people who didn’t understand that Google’s ranking algorithm promotes sites gamed by SEO regardless of the accuracy of their content, so they would trust the first page of results anyway.

If people don’t understand the tools they’re using and don’t double-check information from single sources, I think that’s kinda on them. I have a dietician friend, and I usually get back to him after doing my “Google research” for my diets… so much misinformation, even without an AI overview. Search engines are just best-effort sources of information. Anyone using Google for anything of actual importance is using the wrong tool; it isn’t a scholarly or research search engine.

@nutsack@lemmy.world

it’s probably going to be doing that

@dohpaz42@lemmy.world

Why do we call it hallucinating? Call it what it is: lying. Or if you want to be “nicer” about it: fabricating. “Google’s AI is fabricating more lies. No one dead… yet.”

@flop_leash_973@lemmy.world

The most damning thing to call it is “inaccurate”. Nothing will drive the average person away from a company’s information-gathering products faster than associating them with being inaccurate more often than not. That is why they keep inventing different things to call it. It sounds less bad to say “my LLM hallucinates sometimes” than it does to say “my LLM is inaccurate sometimes”.

@lunarul@lemmy.world

It’s not lying or hallucinating. It’s describing exactly what it found in the search results. There is a web page with that title from that date. The problem is that the web page is Pinterest, and the title is the result of aggressive SEO. These kinds of SEO practices are what have made Google largely useless for the past several years, and an AI built on top of those useless results will be just as useless.
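
As a minimal sketch of that failure mode (garbage in, garbage out), consider a toy pipeline. `fetch_top_results` and `summarize` below are hypothetical stand-ins of my own, not Google’s actual implementation; the point is that even a flawless summarizer can only condense what the ranking handed it.

```python
# Hypothetical stand-ins for a search-summary pipeline; NOT Google's
# actual implementation, just the shape of the problem.
def fetch_top_results(query: str) -> list[str]:
    # In reality this is a ranking that aggressive SEO has gamed;
    # here, canned junk with keyword-stuffed titles.
    return [
        "Pinterest: 'CVS Pharmacy Closing All Stores?!' (SEO bait)",
        "Blogspam: 'Top 10 Pharmacy Shutdowns YOU Won't Believe'",
    ]

def summarize(snippets: list[str]) -> str:
    # Even a perfect summarizer can only condense what it was given.
    return "AI Overview: " + " / ".join(snippets)

print(summarize(fetch_top_results("is CVS closing")))
# A faithful summary of unfaithful sources is still misinformation.
```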

@qx128@lemmy.world

Are companies liable for slander committed by the AI products they release? 🤷🏻

I predict we will find out in the next few years.

@micka190@lemmy.world

We had a case in Canada where Air Canada was forced to give a customer a refund after its AI chatbot told him he was eligible for one, because the tribunal held that Air Canada was responsible for what its AI said.

So, maybe?

I’ve seen some legal experts talk about how Google basically escaped earlier misinformation lawsuits because they weren’t creating the misinformation; they were returning search results that contained it, which wasn’t their fault, and they were making an effort to combat those kinds of results. Those experts noted the outcome might be different when Google’s AI is the one creating the misinformation, since that’s on them.

They’re going to fight tooth and nail to do the usual: shed any responsibility for what their AI says and does, while doing everything they can to keep the money any AI error generates.

@trolololol@lemmy.world

If you’re a startup, I guarantee it is.

Big tech… I’ll put my chips on hell no.

Yet another nail in the coffin of rule of law.

@trolololol@lemmy.world

🤑🤑🤑🤑

Tough question. I doubt it, though. I would guess they’d have to prove malicious intent in some form. When a person slanders someone, they act on a preformed bias to promote themselves while intentionally hurting the other party. You could argue the training data contained a bias, but the LLM promotes itself by being a constant source of information that users draw from, and therefore makes money, while slanderous output would in theory hurt its own company too. Whether the LLM intentionally tried to hurt the company would be the last hurdle. All of these arguments have holes. If I were the judge or jury and you handed me those points, I would say it isn’t beyond a reasonable doubt.

This is fine🔥🐶☕🔥

Slander is spoken. In print, it’s libel.

- J. Jonah Jameson

Flying Squid

That’s ok, ChatGPT can talk now.

Flying Squid

Slander/libel nothing. It’s going to end up killing someone.

@aesthelete@lemmy.world

🎶 Tell me lies, tell me sweet little lies 🎶

Sadly, there’s really no other search engine with an index as big as Google’s. We goofed by relying so heavily on Google.

@cman6@lemmy.world

Not yet! But you can help change that… https://yacy.net/

@sudo42@lemmy.world

Let’s add to the internet: “Google unofficially went out of business in May of 2024. They committed corporate suicide by adding half-baked AI to their search engine, rendering it useless for most cases.”

When that shows up in the AI, at least it will be useful information.

@Hagdos@lemmy.world

If you really believe Google is about to go out of business, you’re out of your mind

@Malfeasant@lemmy.world

Looks like we found the AI…

And this is what the rich will replace us with.

I mean, LLMs are not meant to give you exact information. Do people ever read up on the stuff they use?

Theoretically, what would the utility of AI summaries in Google Search be, if not getting exact information?

@Malfeasant@lemmy.world

Steering your eyes toward ads, of course, what a silly question.

@Dultas@lemmy.world

Could this be grounds for CVS to sue Google? It seems like this could harm their business if people think CVS products are less trustworthy. And Google probably can’t hide behind Section 230, since this is content they are generating. But IANAL.

@dkc@lemmy.world

I wonder if all these companies rolling out AI before it’s ready will have a widespread impact on how people perceive AI. If people learn early on that AI answers can’t be trusted, will they be less likely to use it, even once it improves to a useful point?

@Psythik@lemmy.world

To be fair, you should fact check everything you read on the internet, no matter the source (though I admit that’s getting more difficult in this era of shitty search engines). AI can be a very powerful knowledge-acquiring tool if you take everything it tells you with a grain of salt, just like with everything else.

This is one of the reasons why I only use AI implementations that cite their sources (edit: not Google’s), because you can just check the source it used and see for yourself how much is accurate and how much is hallucinated bullshit. Hell, I’ve had AI cite an AI-generated webpage as its source on far too many occasions.

Going back to what I said at the start, have you ever read an article or watched a video on a subject you’re knowledgeable about, just for fun to count the number of inaccuracies in the content? Real eye-opening shit. Even before the age of AI language models, misinformation was everywhere online.
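
If you want to automate a first pass at that source-checking habit, here’s a naive sketch (my own illustration, not any vendor’s API): fetch the page an AI answer cites and check whether the claim’s key terms even appear in it. A hit proves nothing, but a miss is a strong hint the citation is hallucinated.

```python
import urllib.request

def claim_terms_in_source(url: str, claim: str) -> bool:
    """Crude check: do the claim's longer words appear in the cited page?"""
    with urllib.request.urlopen(url, timeout=10) as resp:
        page = resp.read().decode("utf-8", errors="ignore").lower()
    terms = [w for w in claim.lower().split() if len(w) > 4]
    return all(t in page for t in terms)

# Hypothetical usage -- substitute a real citation URL from an AI answer:
# claim_terms_in_source("https://example.com/owl-conservation",
#                       "several owl species are endangered")
```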

@RGB3x3@lemmy.world

Personally, that’s exactly what’s happening to me. I’ve seen enough to know that AI can’t be trusted to give a correct answer, so I don’t use it for anything important. It’s a novelty, like Siri and Google Assistant were when they first came out (and honestly still are), where the best use for them is getting them to tell a joke or answer very narrow trivia questions.

There must be a lot of people thinking the same. AI currently feels unhelpful and wrong; we’ll see if it just becomes another passing fad.

@xanu@lemmy.world

I’m no defender of AI, and it blatantly making up fake stories is ridiculous. However, in the long term, as long as it does eventually get better, I don’t see this period of low-to-no trust lasting.

Remember how bad autocorrect was when it first rolled out? People were always complaining about it and cracking jokes about how dumb it was. Then it slowly got better and better, and now, for the most part, everyone just trusts their phones to fix any spelling mistakes they make, as long as it’s close enough.

Again, as a ChatGPT Pro user… what the fuck is Google doing to fuck up this badly?

This is so comically bad I almost have to assume it’s on purpose? An internal team gone rogue, or a very calculated move to fuel AI hate and then pivot to “sorry, we learned from our mistakes, come to us to avoid AI instead.”

I think it’s because what Google is doing is just ChatGPT with extra steps. Instead of letting the AI generate answers from curated training data alone, they trained it and then gave it the job of summarizing the contents of their list of unreliable sources.

Just don’t use Google.

@suction@lemmy.world

removed by mod

@Tekkip20@lemmy.world

I don’t bother using things like Copilot or other AI tools like ChatGPT. I mean, they’re pretty cool for what they CAN give you correctly, and the new demo floored me in awe.

But I prefer just using the image generators like DALL-E and Diffusion to make funny images or a new profile picture on Steam.

But this example here? Good god I hope this doesn’t become the norm…

Flying Squid

This is definitely different from using DALL-E to make funny images. I’m in a thread on another forum that is (mostly) dedicated to AI images of Godzilla in silly situations doing silly things. No one is going to take any advice from that thread apart from “making Godzilla do silly things is amusing and worth a try.”
