Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don’t just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.

Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were relatively neutral and not controversial at all. There seemed to be no pattern to it… One time I commented that my favorite game was WoW, and I was downvoted to −15 for no apparent reason.

For example, a bot on Twitter that used API calls to GPT-4o ran out of funding and started posting its prompts and system information publicly.

https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/


Bots like these probably number in the tens or hundreds of thousands. Reddit once did a huge ban wave of bots, and some major top-level subreddits went quiet for days because of it. Unbelievable…

How do we even fix this issue or prevent it from affecting Lemmy??

@jordanlund@lemmy.world

Lemmy.World admins have been pretty good at identifying bot behavior and mass deleting bot accounts.

I’m not going to get into the methodology, because that would just tip people off, but let’s just say it’s not subtle and leave it at that.

@Alpha71@lemmy.world

Ban them all.

@asap@lemmy.world

Add a requirement that posting a comment involves solving a small CPU-costly proof-of-work. It’s a negligible impact for an individual user, but a significant one for a hosted bot creating a lot of comments.

Even better if the PoW performs some Bitcoin hashes, because the proceeds could then benefit the Lemmy instance owner and help offset server costs.
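
For the basic idea (setting aside the Bitcoin variant), a per-comment PoW could look something like this minimal sketch. The challenge format, difficulty, and function names here are all illustrative assumptions, not Lemmy’s actual API:

```python
import hashlib
import os

# Illustrative difficulty: ~2^20 hashes to solve, one hash to verify.
DIFFICULTY_BITS = 20

def leading_zero_bits(digest: bytes) -> int:
    # Count how many leading bits of the digest are zero.
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def issue_challenge() -> str:
    # Server attaches a random challenge to the comment form.
    return os.urandom(16).hex()

def solve(challenge: str) -> int:
    # Client brute-forces a counter until the hash clears the difficulty.
    counter = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return counter
        counter += 1

def verify(challenge: str, counter: int) -> bool:
    # Server re-checks with a single hash, so verification is essentially free.
    digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

challenge = issue_challenge()
proof = solve(challenge)         # costs the commenter a moment of CPU time
assert verify(challenge, proof)  # costs the server almost nothing
```

The asymmetry is the point: a human pays the cost once per comment and barely notices, while a bot farm pays it on every single comment it posts.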

@tree@lemmy.ml

There was discussion about implementing Hashcash for Lemmy: https://github.com/LemmyNet/lemmy/issues/3204

@asap@lemmy.world

It seems like a no-brainer to me. It limits bots and provides a small(?) income stream for the server owner.

This was linked on your page, which is quite cool: https://crypto-loot.org/captcha

@zaphod@sopuli.xyz

Hashcash isn’t “cryptocurrency”.

@zzx@lemmy.world

It doesn’t seem like a no-brainer to me… In order to generate the spam AI comments in the first place, they already have to use expensive compute to run the LLM.

@nutsack@lemmy.world

What happens when the admin gets greedy and increases the amount of work my shitty Android phone has to do?

@nutsack@lemmy.world

I think the computation required to process the prompt is already comparable to a Hashcash challenge.

@Jimmycakes@lemmy.world

You don’t.

You employ critical thinking skills in all interactions on the web.

Media Sensationalism

Signup safeguards will never be enough because the people who create these accounts have demonstrated that they are more than willing to do that dirty work themselves.

Let’s look at the anatomy of the average Reddit bot account:

  1. Rapid points acquisition. These are usually new accounts, but they don’t have to be. These posts and comments are often made manually by the seller if the account is being sold at a significant premium.

  2. A sudden shift in contribution style, usually preceded by a gap in activity. The account has now been fully matured to the desired number of points, and is pending sale or set aside to be “aged”. If the seller hasn’t loaded it with any points, the account is much cheaper, but the activity gap still exists.

  • When the end buyer receives the account, they probably won’t be posting anything related to what the seller was originally involved in as they set about their own mission, unless they’re extremely invested in the account. It becomes much easier to stay active in old forums if the account is now AI-controlled, but the account suddenly stops making image contributions and mostly sticks to comments instead. Either way, the new owner is probably accumulating far fewer points than the account was before.
  • A buyer may attempt to hide this obvious shift in contribution style by deleting all the activity from before the account came into their possession, but then they have months of inactivity leading up to the beginning of the account’s contributions and thousands of points unaccounted for.
  3. Limited forum diversity. Fortunately, platforms like this have a major advantage over platforms like Facebook and Twitter, because propaganda bots there can post on their own pages and gain exposure with hashtags without having to interact with other users or separate forums. On Lemmy, programming an effective bot means it has to interact with separate forums to achieve meaningful outreach, and those forums probably have to be manually programmed in. When a bot has one sole objective with a specific topic in mind, it makes heavy and telling use of a very narrow swath of forums. This makes platforms like Reddit and Lemmy less preferred for automated propaganda bot activity, and more preferred for OnlyFans sellers, undercover small-business advertisers, and scammers who do most of the legwork of posting and commenting themselves.

My solution? Implement a weighted visual timeline of a user’s points and posts to make it easier for admins to single out accounts that have already been flagged as acting suspiciously. There are other troublesome kinds of malicious accounts, such as self-run engagement farms that consistently push front-page contributions with their particular political (or whatever) lean, but the type described first is a major player in Reddit’s current shitshow and is much easier to identify.
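
To make that concrete, here’s a rough sketch of the kind of heuristic an admin tool could run over an account’s history to surface the dormant-gap-plus-style-shift pattern described above. The Activity record shape and the thresholds are invented for illustration, and anything it flags should go to a human for review:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Activity:
    timestamp: datetime
    kind: str    # e.g. "image_post", "text_post", "comment"
    score: int

def flag_suspicious(history: list[Activity],
                    dormant_gap: timedelta = timedelta(days=90),
                    style_shift: float = 0.5) -> bool:
    """Flag an account whose history shows a long dormant gap followed
    by a sharp drop in image contributions (the matured-then-sold pattern)."""
    history = sorted(history, key=lambda a: a.timestamp)
    for i in range(1, len(history)):
        gap = history[i].timestamp - history[i - 1].timestamp
        if gap < dormant_gap:
            continue
        before, after = history[:i], history[i:]
        # Share of image posts before vs. after the dormant gap.
        img_before = sum(a.kind == "image_post" for a in before) / len(before)
        img_after = sum(a.kind == "image_post" for a in after) / len(after)
        if img_before - img_after >= style_shift:
            return True
    return False
```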

Most important is moderator and admin willingness to act. Many subreddit moderators on Reddit already know their subreddit has a bot problem but choose to do nothing because it drives traffic. Others are just burnt out and rarely even lift a finger to answer modmail, doing the bare minimum to keep their subreddit from being banned.

@NateNate60@lemmy.world

Perhaps the only way to get rid of them for sure is to require a CAPTCHA before all posts. That has its own issues though.

@profdc9@lemmy.world

If they don’t blink and you hear the servos whirring, that’s a pretty good sign.

@Kbobabob@lemmy.world

Such as?

@CluckN@lemmy.world

Making them multiply prime numbers.

Usually by tying your real-world identity to your screen name, with your ID or mail or something.

@Kbobabob@lemmy.world

Hard pass.

@Kbobabob@lemmy.world

From the article you linked:

A typical process is:

  1. You take a photo of a document (e.g. a passport or driving licence)
  2. It is checked digitally to confirm it is genuine
  3. You take a photo or video of yourself, which is matched to the one on the document

Like I said, hard pass.

@jimmy90@lemmy.world

Sorry for spam-replying you, I didn’t notice it was the same username :)

@jimmy90@lemmy.world

https://www.gov.uk/guidance/digital-identity

I think it should be anonymized as well, so that no PII is associated with your online IDs even though they are verified.

@Kbobabob@lemmy.world

A typical process is:

  1. You take a photo of a document (e.g. a passport or driving licence)
  2. It is checked digitally to confirm it is genuine
  3. You take a photo or video of yourself, which is matched to the one on the document

No thanks. Hard pass.

@jimmy90@lemmy.world

Why?

@Kbobabob@lemmy.world

I’m not comfortable uploading things like my passport to entities that have proven time and time again that they don’t care about data security.

@jimmy90@lemmy.world

Fair enough.

In this case it would be the British government, which already has my passport and driving licence photos stored digitally, so this would just be to validate that a digital login belongs to a real person.

Would that change your mind at all?

@Metz@lemmy.world

Long before cryptocurrencies existed, proof-of-work was already being used to hinder bots. For every post, vote, etc., a cryptographic task has to be solved by the device performing it. Imperceptibly fast for a normal user, but for a bot trying to perform hundreds or thousands of actions in a row, a really annoying speed bump.

See e.g. https://wikipedia.org/wiki/Hashcash

This, combined with more classic barriers such as CAPTCHAs (especially image recognition, which is still expensive at scale despite the advances in AI), should at least present a first major obstacle.
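
For a sense of how cheap the server side is, here’s roughly what verifying a classic Hashcash v1 stamp (as described in the linked article) boils down to. This is a simplified sketch; a real verifier would also check the date and resource fields against expectations and track spent stamps:

```python
import hashlib

def hashcash_ok(stamp: str) -> bool:
    # A v1 stamp looks like "1:bits:date:resource:ext:rand:counter",
    # e.g. "1:20:1303030600:adam@cypherspace.org::McMybZIhxKXu57jd:ckvi".
    fields = stamp.split(":")
    if len(fields) != 7 or fields[0] != "1" or not fields[1].isdigit():
        return False
    claimed_bits = int(fields[1])
    digest = hashlib.sha1(stamp.encode()).digest()
    # Count the leading zero bits of the SHA-1 digest.
    zeros = 0
    for byte in digest:
        if byte == 0:
            zeros += 8
        else:
            zeros += 8 - byte.bit_length()
            break
    return zeros >= claimed_bits
```

Minting the stamp is the expensive side: the sender has to keep incrementing the counter until the hash starts with the claimed number of zero bits.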

Give up. There is no hope, we already lost. Fuck us, fuck our lives, fuck everything, we should just die.

@robocall@lemmy.world

I love dailydot. They summarize tiktoks about doordash and then provide the same video at the bottom of the page. I can feel my mind rot while consuming it but I still do it.

Make your own bot account that randomly (or not so randomly) posts something bots will reply to, preferably something that triggers a system response. Last time I looked at bots, they were simple programs with dev commands that can return information on things like system resources or OS version. Your bot posts the commands the bot app’s dev built in, and the bots reply like bots do with their version, system resources, or whatever else they expose. Boom: banned instantly.

Resol van Lemmy

Create a bot that reports bot activity to the Lemmy developers.

You’re basically using bots to fight bots.

Resol van Lemmy

Love that name too. Rock 'Em Sock 'Em Robots.

wuphysics87

While a good solution in principle, it could (and likely would) falsely flag accounts. Such a system should be a first line of defense, with human review as a second.

Resol van Lemmy

Whenever I propose a solution, someone [justifiably] finds a problem within it.

I got nothing else. Sorry, OP.

@AlexWIWA@lemmy.ml

By being small and unimportant

That’s the sad truth of it. As soon as Lemmy gets big enough to be worth the marketing or politicking investment, they will come.

@AlexWIWA@lemmy.ml

Same thing happened to Reddit, and to every small subreddit I’ve been a part of.

@DandomRude@lemmy.world

I think the only way to solve this problem for good would be to tie social media accounts to proof of identity. However, apart from what would certainly be a difficult technical implementation, this would create a whole bunch of different problems. The benefits would probably not outweigh the costs.
