Twitter has a new tool against “harmful” language: it will “warn” you if it believes you are engaging in “harmful” language. Twitter is worried that users may accidentally infect other users with “wrongthink.”
“When things get heated, you may say things you don’t mean,” wrote Twitter’s official support account in a tweet yesterday.
“To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful. We’re trying to encourage people to rethink their behavior and rethink their language before posting because they often are in the heat of the moment and they might say something they regret,” said Sunita Saligram, Twitter’s global head of site policy for trust and safety, in a comment to Reuters.
Twitter’s “trust and safety” division has a shady past. Its former policy manager, Olinda Hassan, was caught in an undercover video by Project Veritas in 2018. In the recording, Hassan said that her team was working on a way to “get the shitty people to not show up.” Hassan made the comment when she was asked about conservative author Mike Cernovich.
“I’m in Trust and Safety, I do all the policy work, safety policy. I do… I’m in a controversial team. I’m the team everyone says a lot about, yeah,” commented Hassan.
“We’re trying to down rank it, but you also need to have control of your timeline… Yeah that’s something we’re working on. Yeah it’s something we’re working on, where we’re trying to get the shitty people to not show up. It’s a product thing we’re working on,” Hassan said.
Instead of having you not show up, Twitter will now inform you that you are not thinking correctly and will try to train you to change your bad behavior.
Pranay Singh, a direct messaging engineer, also revealed that they are using artificial intelligence to target conservative accounts en masse. Singh explained:
Just go to a random tweet, and just look at the followers. They’ll all be like, guns, God, ‘Merica, like, and with the American flag and, like, the cross… Like, who says that? Who talks like that? It’s for sure a bot.
You just delete them, but, like, the problem is there are hundreds of thousands of them, so you got to, like, write algorithms, that do it for you.
I would say the majority of it are for Republicans, because they’re all from Russia and they wanted Trump to win, so yeah.
— Conservative Collections (@ConserveCollect) May 5, 2020