Automated Moderation Must Be Temporary, Transparent, and Easily Appealable

For most of us, social media has never been more crucial than it is right now: it’s keeping us informed and connected during an unprecedented moment. People have been using major platforms for all kinds of things, from following and posting news, to organizing aid (such as coordinating mask donations across international borders), to sharing tips on working from home, to, of course, pure entertainment.

At the same time, the content moderation challenges faced by social media platforms have not disappeared, and in some cases they have been exacerbated by the pandemic. In recent weeks, YouTube, Twitter, and Facebook have all made public statements about their moderation strategies at this time. While they differ in the details, they all have one key element in common: an increased reliance on automated tools.

Setting aside the justifications for this decision (especially the likely concern that allowing content moderators to do that work from home may pose particular risks to user privacy and moderator mental health), it will inevitably create problems for online expression. Automated technology doesn’t work well at scale: it can’t read nuance in speech the way humans can, and for some languages it barely works at all. Over the years, we’ve seen the use of automation result in numerous wrongful takedowns. In short: automation is not a sufficient replacement for having a human in the loop.

And that’s a problem, perhaps now more than ever, when so many of us have few alternative outlets through which to speak, educate, and learn. Conferences are moving online, schools are relying on online platforms, and individuals are tuning in to videos to learn everything from yoga to gardening. Likewise, platforms continue to provide space for vital information, be it messages from governments to people or documentation of human rights violations.

It’s important to give credit where credit is due. In their announcements, YouTube and Twitter both acknowledged the shortcomings of artificial intelligence and are taking them into account as they moderate speech. YouTube will not be issuing strikes on video content except in cases where it has “high confidence” that the content violates its rules, and Twitter will only be issuing temporary suspensions, not permanent bans, at this time. For its part, Facebook acknowledged that it will be relying on full-time employees to moderate certain types of content, such as terrorism.

These temporary measures will help mitigate the inevitable over-censorship that follows from the use of automated tools. But history suggests that protocols adopted in times of crisis often persist when the crisis is over. Social media platforms should publicly commit, now, that they will restore and expand human review as soon as the crisis has abated. Until then, the meaningful transparency, notice, and robust appeals processes called for in the Santa Clara Principles will be more important than ever.

Notice and Appeals: We know the content moderation system is flawed, and that it’s going to get worse before it gets better. So now more than ever, users need a way to get the mistakes fixed, quickly and fairly. That starts with clear and detailed notice of why content is taken down, combined with a simple, streamlined means of challenging and reversing improper takedown decisions.

Transparency: The most robust appeals process will do users little good if they don’t know why their content was taken down. Moreover, without good data, users and researchers cannot assess whether takedowns were fair, unbiased, proportionate, and respectful of users’ rights, even subject to the exigencies of the crisis. That data should include how many posts were removed and how many accounts were permanently or temporarily suspended, for what reasons, and at whose behest.

The Santa Clara Principles provide a set of baseline standards to which all companies should adhere. But as companies turn to automation, those standards may not be enough. That’s why, over the coming months, we will be engaging with civil society and the public in a series of consultations to expand and adapt these principles. Watch this space for more on that process.

Finally, platforms and policymakers operating in the EU should remember that using automation for content moderation may undermine user privacy. Automated decision-making is often based on the processing of users’ personal data. As noted, however, automated content removal systems do not understand context, are notoriously inaccurate, and are prone to overblocking. The GDPR provides users with a right not to be subject to significant decisions based solely on automated processing of their data (Article 22). While this right is not absolute, where exceptions apply the GDPR still requires safeguards for users’ rights, freedoms, and legitimate interests.

Published April 02, 2020 at 08:40PM
Read more on eff.org
