Twitter Inc on Tuesday rolled out a moderation feature that allows users to limit who can reply to their tweets, giving account holders more control over conversations on their pages. All accounts, including those of elected officials, can now select who will be allowed to reply while composing a new tweet, Twitter said.
Users can choose from three options: everyone, only people they follow, or only people they mention in the tweet. The microblogging site, which began testing the feature in May, added that all users can continue to like and retweet posts but cannot reply if the author has excluded them. The feature should also help Twitter users carry out better conversations and limit their exposure to online trolls and abusers.
Elsewhere, Facebook Inc said on Tuesday it removed 7 million posts in the second quarter for sharing false information about the novel coronavirus, including content that promoted fake preventative measures and exaggerated cures.
It released the data as part of its sixth Community Standards Enforcement Report, which it introduced in 2018 along with more stringent decorum rules in response to a backlash over its lax approach to policing content on its platforms.
The company also removed about 22.5 million posts with hate speech on its flagship app in the second quarter, a dramatic increase from 9.6 million in the first quarter. It attributed the jump to improvements in detection technology.
It also deleted 8.7 million posts connected to “terrorist” organizations, compared with 6.3 million in the prior period. It took down less material from “organized hate” groups: 4 million pieces of content, compared with 4.7 million in the first quarter.
Facebook said it relied more heavily on automation for reviewing content starting in April as it had fewer reviewers at its offices due to the COVID-19 pandemic.
That resulted in less action against content related to self-harm and child sexual exploitation, executives said on a conference call.