Last month, a report in the Guardian about secret documents detailing Facebook's policies on nudity, "credible threats of violence," and images of animal cruelty brought to light the difficulties inherent in policing user-uploaded content and designing effective online content moderation policies. It also got us thinking: how do internet regulators handle nudity in art? How are those images treated as opposed to images that have no artistic merit?
The Facebook Files, as they have come to be known, show how Facebook has chosen to handle these controversial subjects. While Facebook and other social networks can never realistically be expected to check all content before users upload it, there are certainly methods they can use to limit potential problems. Unfortunately, online content moderation has been something of an afterthought for Facebook. Until recently, Facebook had taken a laissez-faire approach, asserting that it was a content-neutral technology company and placing the responsibility for policing harmful content on its users. That hands-off approach hasn't worked well.
Several high-profile incidents have pushed Facebook to become a more active participant in online content moderation. For example, after the U.S. elections this past November, Facebook was heavily blamed for the proliferation of "fake news," resulting in an entirely new process for handling news uploaded to its site. Facebook instituted a scheme for alerting its users to unverified and potentially inaccurate reporting. The company also took a lot of heat for its decision to take down an iconic Vietnam War photograph, which featured a naked girl. The internet erupted in protest, with many arguing that the image was not only a historical document with educational value but also an example of nudity in art that should be tolerated for its instructive worth.
Facebook's lack of attention to these issues ultimately prompted Germany to propose legislation that would force social media companies like Facebook to enforce a set of standards for online content moderation, imposing seven-figure fines for failing to swiftly remove unacceptable content. For other critics, the problem revolves around Facebook's assertion that it is merely a technology company when it is more of a publishing and media organization, and should, therefore, be held to the same standards as other publishing and media companies.
Reluctantly, Facebook has begun to make policy changes. Regrettably, when companies place essential features like online content moderation on the back burner, revamping the process can be difficult, requiring technological revisions, employee training, and additional manpower that can be time-consuming and expensive.
To illustrate the point, as of April 2017, Facebook employed only 4,500 content moderators, with Mark Zuckerberg announcing an additional 3,000 moderators coming on board. Given the amount of content on Facebook, that seems wholly inadequate.
Why not more moderators? Think of the logistics required not only to hire 3,000 people but to train them as well. Yet can 7,500 online content moderators handle even the 54,000 cases of revenge porn alone each month? Probably not, which is why online content moderators say they are overwhelmed by the volume of moderation work, often having "just 10 seconds" to make a decision, as reported in the Guardian.
By failing to take responsibility for its content early on, Facebook was unprepared for the changes, which resulted not only in an inadequate force of moderators but also in a hastily revamped online content moderation policy that is confusing, subjective, and full of contradictions.
Facebook Nudity, Nudity in Art, and Violence Policies
As discussed earlier, backlash over certain censorship decisions, such as the removal of the iconic Vietnam War photograph, became a catalyst for change. Facebook's previous policy regarding nudity was stringent, although easier to grasp than its new policy. Of course, implementing a censorship policy when every post is unique and comes with its own context is complicated, especially when changes must be made in a short time frame. Just consider the difficulties of distinguishing nudity in art from pornography, the incitement of violence from its condemnation, or recognizing suicidal calls for help. Nonetheless, implementing online content moderation policies prematurely, before they are well thought out, can also be disastrous. Per the Guardian, moderators expressed concern over their ability to perform their duties under confusing and contradictory guidelines.
For example, consider the new guidelines for handling revenge porn, which under the Facebook rules is defined as sharing nude or sexual imagery of someone without their consent or in an attempt to shame them. Revenge porn must be produced in a private setting, AND contain nudity or sexual activity, AND involve a lack of consent by the person in the photo. Lack of consent can be confirmed by the context associated with the image, such as vengeful words in the caption, comment, or page title, OR through an independent source, such as media coverage or a police report.
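The policy's AND/OR structure can be made concrete as a simple decision rule. The sketch below is purely illustrative; the field names and the idea of encoding the criteria as booleans are our assumptions, not any real Facebook system.

```python
# Hypothetical encoding of the stated revenge-porn criteria as a
# decision rule. Field names are illustrative, not a real schema.

def lacks_consent(image):
    """Lack of consent per the policy: vengeful context associated with
    the image OR an independent source such as a police report."""
    return image["vengeful_context"] or image["independent_source"]

def is_revenge_porn(image):
    """All three conditions must hold (the policy's ANDs)."""
    return (
        image["private_setting"]
        and image["nudity_or_sexual_activity"]
        and lacks_consent(image)
    )

example = {
    "private_setting": True,
    "nudity_or_sexual_activity": True,
    "vengeful_context": True,
    "independent_source": False,
}
print(is_revenge_porn(example))  # True: all three criteria are met
```

Of course, a rule like this only shifts the hard part elsewhere: a moderator still has to decide, in seconds, whether each boolean is actually true for a given image.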
These criteria are somewhat subjective. A close-up photo may or may not have been taken in a private setting, and some imagery would likely split moderators 50-50. Nobody wants the removal of a sexual image of themselves to depend on a moderator's judgment about whether the image was taken in a private setting. In addition, do we expect moderators to research local news, or require the user to submit a police report to the moderator? What is the process for doing that? Is the report, with its details about sexual contact, available to all moderators? How does privacy work in this situation?
In addition to revenge porn, Facebook has also changed its online content moderation policy regarding nudity in art. The prior policy banned all digital nudity and sexual activity. The revised policy allows digital images of nudity in art but maintains the existing ban on digital art depicting sexual activity. Additionally, Facebook users may post images of nudity in art and sexuality, so long as the work was created in a manual medium. Under the new plan, Facebook asserts:
“We allow nudity when it is depicted in art like paintings, sculptures, and drawings. We do not allow digitally created nudity in art or sexual activity. We drew this line so that we could remove a lot of very sexual digital nudity, but it also covers an increasing amount of non-sexual digitally made nudity in art. The current line is also difficult to enforce because it is hard to differentiate between handmade art and digitally made depictions.”
In other words, a set of art guidelines that considers painting, but not photography or video, to be "real world art" would, in theory, find Courbet's "The Origin of the World" acceptable, but not a digital print of a woman's butt.
Facebook's approach to child abuse imagery has been controversial as well. Facebook only removes:
"imagery of child abuse if shared with sadism and celebration." Facebook does not automatically delete evidence of non-sexual child abuse, allowing the material to be shared so that "the child [can] be identified and rescued, but we add protections to shield the audience."
For example, Facebook may add a warning to a video flagged as including child abuse, letting users know that the content may be disturbing, but the company will not remove the video. The same basic policy applies to animal abuse as well:
“We allow photos and videos documenting animal abuse for awareness, but may add viewer protections to some content that is perceived as extremely disturbing by the audience . . . Generally, imagery of animal abuse can be shared on the site. Some extremely disturbing imagery may be marked as disturbing,” Facebook policy reads.
In line with that approach, Facebook will allow people to live-stream attempts at self-harm because it "doesn't want to censor or punish people in distress," but it does not allow videos of humans dying in accidents, murders, or other violence, although abortions are okay as long as they don't contain nudity.
These controversial approaches to handling potentially harmful content have created problems for the company. Part of the problem is the quick turnaround of these online content moderation policy changes. Facebook was forced not only to make quick fixes but also to choose policies that it could manage with its limited team of moderators. Instead, it created a mess that is more difficult and confusing for its moderation team.
Had online content moderation been considered "by design," meaning considered at each step of the development process for all features, these issues could have been more easily remedied. For example, one potential approach for dealing with revenge porn could have been to provide an "abuse" button next to an image, where a person could mark the image as shared without consent. The button alerts could be given high-priority review and include instructions for adding ancillary material, such as a police report, along with information about the process and privacy. Since this technical approach is not part of the Facebook infrastructure, adding it at this point could be difficult. There may be many other approaches, but in any case, waiting until the last minute to deal with issues we all knew would be problematic will rarely result in the best solution.
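To illustrate the "abuse button" idea, here is a minimal sketch of a report queue where consent-related reports jump ahead of routine flags. The report types, priorities, and evidence field are all hypothetical assumptions for illustration, not part of any existing platform.

```python
# Hypothetical sketch of an "abuse button" report queue in which
# lack-of-consent reports are reviewed before routine flags.
import heapq
import itertools

# Lower number = reviewed sooner (illustrative priorities)
PRIORITY = {"revenge_porn": 0, "violence": 1, "spam": 2}

class ReportQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-break: FIFO within a priority

    def report(self, image_id, report_type, evidence=None):
        # `evidence` could reference ancillary material, e.g. a police report
        case = {"image": image_id, "type": report_type, "evidence": evidence}
        heapq.heappush(self._heap, (PRIORITY[report_type], next(self._counter), case))

    def next_case(self):
        """Pop the highest-priority report for moderator review."""
        return heapq.heappop(self._heap)[2]

queue = ReportQueue()
queue.report("img-123", "spam")
queue.report("img-456", "revenge_porn", evidence="police-report-789")
print(queue.next_case()["image"])  # img-456 is reviewed first
```

The point is not the twenty lines of code but that the hooks for evidence and prioritization exist from day one; bolting them onto an established platform later is far harder.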
Orangenius Online Content Moderation Policies in Contrast
Let's contrast the laissez-faire approach with that of online business and professional networking platform Orangenius [full disclosure: Orangenius is the parent company of Artrepreneur]. Orangenius, a platform designed for artists and creatives, allows users to upload their creative work, tell the stories behind the work, and use the platform to promote themselves, apply for jobs, or find freelance work. While Orangenius does not allow users to post articles or commentary unrelated to their creative works, it cannot control the image content itself, which may be viewed by the general public.
As an added challenge, artistic work is often infused with social commentary, violent or morally controversial situations, and even offensive language. For example, Facebook would remove the words "Someone shoot Trump," which it considers a credible threat of violence, yet on Orangenius, those words included in a creative work might be considered artistic expression. If the estate of Robert Mapplethorpe were to post some of his more explicit sexual images, ones that had been on exhibition at the Metropolitan Museum of Art, should Orangenius remove them due to their explicit sexual content?
Orangenius does not want to be the arbiter of what is considered "art." At the same time, the company is still responsible for making the site safe, so it must develop a scheme that protects the public from potentially damaging content while giving members the freedom to express themselves.
To deal with these issues, Orangenius built an infrastructure that allows it to pair online content moderation policies with technological solutions. Just after launch, the company does not face significant moderation issues and can realistically handle them manually; however, the processes need to be in place for when the problems become significant. And they will; it's just a matter of time.
As a startup, Orangenius does not have the resources to review every image that comes through the platform, yet it needed to handle the inevitable problems regardless. To tackle the problem (and remain neutral as to what constitutes artistic expression), Orangenius developed a multilevel approach, using a combination of technology, crowdsourcing, and management authority.
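A multilevel scheme of this kind can be sketched as a simple routing function. The stages below mirror the three levels named above, but the thresholds, scores, and outcomes are our illustrative assumptions, not Orangenius's actual implementation.

```python
# Hypothetical three-level moderation pipeline: automated filtering,
# crowdsourced flagging, and escalation to human management.
# All thresholds and labels are illustrative assumptions.

FLAG_THRESHOLD = 5  # crowd flags required before management escalation

def moderate(post_id, auto_score, crowd_flags):
    """Route a post through three levels of review.

    auto_score:  0..1 confidence from an automated classifier
    crowd_flags: number of community reports received
    """
    if auto_score > 0.95:
        return "removed_automatically"    # level 1: technology
    if crowd_flags >= FLAG_THRESHOLD:
        return "escalated_to_management"  # level 3: human authority
    if crowd_flags > 0 or auto_score > 0.5:
        return "queued_for_crowd_review"  # level 2: crowdsourcing
    return "published"

print(moderate("artwork-001", auto_score=0.2, crowd_flags=0))  # published
```

The design choice worth noting is that humans only see content the cheaper levels could not resolve, which is what lets a small team cope with a growing platform.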
Orangenius' online content moderation process is organic and flexible so that it can comfortably evolve over time. Moreover, by building it into the fabric of the platform, Orangenius can make policy changes based on real-world use and get ahead of new issues as they arise.
Why do some companies do what Orangenius is doing while others wait until a crisis looms? Some companies don't want to devote time to something that isn't yet a problem, or may never be one; later, when they have the money, they can deal with it. That's a gamble, though, one that assumes both that resources will be available and that the problem can be easily fixed. That thinking hasn't worked out so well for Facebook.
While Facebook is finally taking online content moderation seriously, the lack of preparation has made the process rocky, and it will likely take more time and resources to fully resolve the problems. Add to that the bureaucracy inherent in large companies, and we can expect any further revision to take a while. Still, better late than never, and fortunately for Facebook, the company is not short on capital. Startups can certainly learn from the mistakes of companies like Facebook.
How do you think online content moderation policies should treat nudity in art?