Social media platforms like Twitter and Reddit are increasingly infested with bots and fake accounts, leading to significant manipulation of public discourse. These bots don’t just annoy users—they skew visibility through vote manipulation. Fake accounts and automated scripts systematically downvote posts opposing certain viewpoints, distorting the content that surfaces and amplifying specific agendas.
Before coming to Lemmy, I was systematically downvoted by bots on Reddit for completely normal comments that were neutral and not at all controversial. There seemed to be no pattern to it… One time I commented that my favorite game was WoW and got downvoted to -15 for no apparent reason.
For example, a bot on Twitter that was making API calls to GPT-4o ran out of funding and started posting its prompts and system information publicly.
https://www.dailydot.com/debug/chatgpt-bot-x-russian-campaign-meme/
Bots like these probably number in the tens or hundreds of thousands. Reddit did a huge ban wave of bots once, and some major top-level subreddits went quiet for days because of it. Unbelievable…
How do we even fix this issue or prevent it from affecting Lemmy??
Lemmy.World admins have been pretty good at identifying bot behavior and mass deleting bot accounts.
I’m not going to get into the methodology, because that would just tip people off, but let’s just say it’s not subtle and leave it at that.
Ban them all.
Add a requirement that every comment must perform a small CPU-costly proof-of-work. It’s a negligible impact for an individual user, but a significant impact for a hosted bot creating a lot of comments.
Even better if the PoW performs some bitcoin hashes, because those can then benefit the Lemmy instance owner and help offset server costs.
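The asymmetry is the whole point: minting a proof costs many hashes, verifying it costs one. A hypothetical sketch in Python (difficulty and names are made up, not actual Lemmy code):

```python
import hashlib

# Hypothetical sketch: difficulty and names are invented for illustration.
DIFFICULTY_BITS = 16  # tune so a phone solves it in a fraction of a second

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mint(comment: str) -> int:
    """Client side: brute-force a nonce whose hash clears the difficulty bar."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{comment}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify(comment: str, nonce: int) -> bool:
    """Server side: a single hash, virtually free to check."""
    digest = hashlib.sha256(f"{comment}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS
```

Minting costs about 2^16 hashes on average per comment while verifying costs one, so a bot posting thousands of comments pays thousands of times the work of a single user.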
There was discussion about implementing Hashcash for Lemmy: https://github.com/LemmyNet/lemmy/issues/3204
It seems like a no-brainer to me. It limits bots and provides a small(?) income stream for the server owner.
This was linked on your page, which is quite cool: https://crypto-loot.org/captcha
Hashcash isn’t “cryptocurrency”.
It doesn’t seem like a no-brainer to me… In order to generate the spam AI comments in the first place, they already have to use expensive compute to run the LLM.
what happens when the admin gets greedy and increases the amount of work that my shitty android phone is doing
I think the computation required to process the prompt they are processing is already comparable to a hashcash challenge
You don’t.
You employ critical thinking skills in all interactions on the web.
Signup safeguards will never be enough because the people who create these accounts have demonstrated that they are more than willing to do that dirty work themselves.
Let’s look at the anatomy of the average Reddit bot account:
- Rapid points acquisition. These are usually new accounts, but they don’t have to be. The posts and comments are often done manually by the seller if the account is being sold at a significant premium.
- A sudden shift in contribution style, usually preceded by a gap in activity. The account has now been fully matured to the desired number of points and is either pending sale or set aside to be “aged”. If the seller hasn’t loaded it with points, the account is much cheaper, but the activity gap still exists.
My solution? Implement a weighted visual timeline of a user’s points and posts to make it easier for admins to single out accounts that have already been found to be acting suspiciously. Other types of malicious accounts can be troublesome too, such as self-run engagement farms whose consistent front-page contributions carry their own political (or whatever) lean, but the type described first is a major player in Reddit’s current shitshow and is much easier to identify.
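The “matured then parked” pattern could even be flagged automatically. A hypothetical heuristic sketch (thresholds and names invented for illustration, not anything Lemmy implements):

```python
from datetime import datetime, timedelta

def looks_parked(events, gap_days=90, rampup_per_day=50):
    """Flag accounts with rapid early point gain followed by a long silence.

    events: chronologically sorted list of (datetime, points_gained) tuples.
    Thresholds are invented for illustration; a real tool would tune them.
    """
    if len(events) < 2:
        return False
    for prev, cur in zip(events, events[1:]):
        # look for a long activity gap
        if cur[0] - prev[0] < timedelta(days=gap_days):
            continue
        # how fast did the account earn points before going quiet?
        before = [e for e in events if e[0] <= prev[0]]
        span_days = max((before[-1][0] - before[0][0]).days, 1)
        total = sum(points for _, points in before)
        if total / span_days >= rampup_per_day:
            return True
    return False
```

A flag like this would only feed the visual timeline; a human admin still makes the call.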
Most important is moderator and admin willingness to act. Many subreddit moderators on Reddit already know their subreddit has a bot problem but choose to do nothing because it drives traffic. Others are just burnt out and rarely even lift a finger to answer modmail, doing the bare minimum to keep their subreddit from being banned.
Perhaps the only way to get rid of them for sure is to require a CAPTCHA before all posts. That has its own issues though.
If they don’t blink and you hear the servos whirring, that’s a pretty good sign.
by embracing methods of verifying that a user is a real person
edit: to add this example
https://www.gov.uk/government/publications/uk-digital-identity-and-attributes-trust-framework-beta-version/uk-digital-identity-and-attributes-trust-framework-beta-version
Such as?
Making them factor the products of large primes.
Usually by tying your real world identity to your screen name, with your ID or mail or something.
Hard pass.
https://www.gov.uk/guidance/digital-identity
From the article you linked:
Like I said, hard pass.
sorry for spam replying you, i didn’t notice it was the same username :)
https://www.gov.uk/guidance/digital-identity
i think it should be anonymized as well so that no PII is associated with your online ids even though they are verified
https://www.gov.uk/guidance/digital-identity
No thanks. Hard pass
why?
I’m not comfortable uploading things like my passport to entities that have proven time and time again that they don’t care about data security.
fair enough
in this case it would be the british government, which already has my passport and driver’s license photos stored digitally, so this would just be to validate that a digital login is a real person
would that change your mind at all?
Long before cryptocurrencies existed, proof-of-work was already being used to hinder bots. For every post, vote, etc., a cryptographic task has to be solved by the device used for it. Imperceptibly fast for the normal user, but for a bot trying to perform hundreds or thousands of actions in a row, a really annoying speed bump.
See e.g. https://wikipedia.org/wiki/Hashcash
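Concretely, a Hashcash stamp is just a colon-separated string (`ver:bits:date:resource:ext:rand:counter`) whose SHA-1 digest starts with the claimed number of zero bits. A toy mint/check sketch in Python (the v1 field layout and SHA-1 come from the Hashcash spec; function names are made up, and a real implementation also checks dates and double-spends):

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def stamp_bits(stamp: str) -> int:
    """Leading zero bits of the stamp's SHA-1 digest."""
    digest = hashlib.sha1(stamp.encode()).digest()
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mint_stamp(resource: str, bits: int = 12) -> str:
    """Mint a Hashcash v1 stamp: ver:bits:date:resource:ext:rand:counter."""
    date = datetime.now(timezone.utc).strftime("%y%m%d")
    rand = base64.b64encode(os.urandom(8)).decode()
    counter = 0
    while True:
        stamp = f"1:{bits}:{date}:{resource}::{rand}:{counter}"
        if stamp_bits(stamp) >= bits:
            return stamp
        counter += 1
```

The receiver recomputes one SHA-1 over the stamp and counts zero bits, which is exactly the “fast for one user, painful for thousands of actions” property described above.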
This, combined with more classic blockades such as CAPTCHAs (especially image recognition, which is still expensive at scale despite the advances in AI), should at least represent a first major obstacle.
Give up. There is no hope; we already lost. Fuck us, fuck our lives, fuck everything. We should just die.
I love dailydot. They summarize tiktoks about doordash and then provide the same video at the bottom of the page. I can feel my mind rot while consuming it but I still do it.
Make your own bot account that posts something bots will reply to, preferably something that triggers a system response. Last I looked at these bots, they were simply programs with dev commands that return information like system resources or OS version. Your bot posts the commands built in by the bot app’s dev, and the bots reply, like bots do, with their version, system resources, or whatever else they have built in. Boom, banned instantly.
Create a bot that reports bot activity to the Lemmy developers.
You’re basically using bots to fight bots.
Love that name too. Rock 'Em Sock 'Em Robots.
While a good solution in principle, it could (and likely will) falsely flag accounts. Such a system should be a first line of defense, with human review as a second.
Whenever I propose a solution, someone [justifiably] finds a problem within it.
I got nothing else. Sorry, OP.
By being small and unimportant
That’s the sad truth of it. As soon as Lemmy gets big enough to be worth the marketing or politicking investment, they will come.
Same thing happened to Reddit, and to every small subreddit I’ve been a part of.
I think the only way to solve this problem for good would be to tie social media accounts to proof of identity. However, apart from what would certainly be a difficult technical implementation, this would create a whole bunch of different problems. The benefits would probably not outweigh the costs.