
You might soon have to prove you’re human on Reddit—here’s why

Reddit will now require accounts showing suspicious behaviour to verify whether they are human or a bot, as part of efforts to tackle AI-driven and automated activity on the platform.

By Soumodip Adhikary

Mar 26, 2026 21:34 IST

Popular social media platform Reddit is rolling out a new system to identify and limit bot activity by asking certain accounts showing suspicious activity to verify whether they are human. The move comes amid growing concern over the rising volume of AI-generated content and automated behaviour affecting online discussions.

According to a report by Mint, the company is introducing targeted verification measures aimed at ensuring users are interacting with real people rather than bots.

Reddit CEO Steve Huffman announced that the verification process will not apply to all users, but only to accounts that display “automated or otherwise fishy behaviour”. The platform emphasised that this step is meant to preserve authentic human interaction while maintaining user anonymity.


Targeting suspicious activity

As part of the update, Reddit will begin identifying unusual or suspicious account behaviour through internal monitoring systems that track posting patterns and engagement signals. Once flagged, users may be prompted to confirm that they are human; failing that, their activity on the platform could be limited.

The company is also working to distinguish between harmful bots and legitimate automated accounts. Developers will be able to register approved bots, which will then be clearly labelled, helping users better understand the nature of interactions on the platform. At the same time, Reddit is enhancing reporting tools, allowing users to flag suspected bot activity more easily.

“The internet feels different lately. It’s getting harder to tell who—or what—you’re interacting with. But Reddit’s purpose is for people to talk to people. And we want it to stay that way,” Steve Huffman said in a recent post on Reddit.

“Our product has always been human conversation: messy, opinionated, sometimes great, sometimes not, but always real (or at least, really creative writing). As AI becomes a bigger part of the internet, we want to make sure that when you’re on Reddit, you know when you’re talking to a person and when you’re not,” he added, as reported by Mint.

Verification methods and privacy focus

To carry out verification, Reddit plans to rely on third-party authentication systems rather than building everything in-house, saving time and cost. These may include passkey-based verification tools from companies such as Apple, Google or YubiKey. The platform is also exploring other methods, including biometric authentication and decentralised identity systems.

In certain cases where required by regulations, government-issued ID verification may be considered, though Reddit has indicated that this would be used sparingly due to privacy concerns.


With AI-generated content becoming increasingly common across the internet, Reddit’s move reflects a broader push by social media platforms to maintain credibility and trust.

“Before there was AI slop, there was slop. It’s not a new problem, and it’s one that Reddit, with its voting and moderation system, is better than most at dealing with,” Huffman noted. While the verification process is expected to remain limited in scope, it signals a significant shift in how platforms may handle bot activity going forward.
