RED FLAG #6

Providing online platforms for individuals to interact, where use of the platform can lead to human rights harms

For Example
  • Social media, messaging and online platforms through which individuals may post abusive content, form groups with the purpose of inciting hatred or violence, or engage in discriminatory practices
  • Platforms predominantly used by children and young people that allow users (including adults) to post videos and images of violent, sexual or dangerous behavior
  • Applications designed for use by specific groups that can increase the possibility of States surveilling and persecuting individuals from those groups (e.g. members of the LGBTQI community)
  • Online gaming sites where players may use related chat rooms to engage in misogynistic behavior, graphic language and imagery, and predatory child grooming and abuse
  • Online marketplaces through which individuals can refuse to do business – e.g. sell a service, exchange goods, offer jobs or rent property – with individuals of a certain ethnicity or sexual orientation
  • Adult websites to which individuals can upload videos or images of people without their consent, or illegal content such as material depicting the sexual exploitation of children

Higher-Risk Sectors
  • Social media and messaging platforms
  • Web-based calling and video services
  • Online marketplaces and sharing economy platforms (such as online classified advertisements, dating, recruitment and real estate sites)
  • Platforms whose users are predominantly children and young people
  • Online gaming sites and related chat rooms
  • Cloud and hosting service companies offering the infrastructural backbone and computing power to the businesses listed above

Questions for Leaders
  • How does the company assess whether its platform is enabling, or risks enabling, human rights harms? Does this include a review of how strategies to increase user numbers, user engagement and revenue may undermine the company’s efforts to operate responsibly?
  • How does the company prevent the posting and spread of harmful content? Does it enable users or third parties in all markets to report harmful or abusive content, and how does it respond to such reports?
  • Does the company have processes in place to engage with civil society and other experts to remain aware of the potential impacts of its platforms on people, and to explore any dilemmas that may arise in seeking to mitigate those risks?
  • Is the company engaging with peers and governments to help define industry standards and laws aimed at protecting against platform-related harms?

Understanding Risks and Opportunities

Risks to People

Hate Speech, Harassment and Illegal Content
(Right to equality and non-discrimination; Right to life, liberty and security; Right to freedom of thought, conscience and religion; Right to just and favorable conditions of work; Right to highest attainable standard of physical and mental health):

Mis-/Disinformation and Censorship
(Right to freedom of opinion and expression; Right to freedom of thought, conscience and religion; Right to free and fair elections):

“Ephemeral Post” Features That May Exacerbate Harm
(Right to privacy; Right to freedom of opinion and expression; Right to equality and non-discrimination)

  • Platforms like Snapchat pioneered the “ephemeral post” feature (followed by Facebook and Twitter), where messages and posts exist for only a certain period of time and then disappear “forever.” While such features are billed as a way to support more private modes of sharing, experts acknowledge the added difficulty of monitoring and removing toxic or harmful content from more private interactions such as these.

Adverse Impacts on High-Risk Vulnerable Groups
(Right to privacy; Right to highest attainable standard of physical and mental health; Right to education):

  • Platforms predominantly used by young people (pre-teens, teenagers and young adults) may allow videos and posts that reflect or promote harmful behavior, such as bullying, extreme dieting, anorexia, drug use and body dysmorphia, and inappropriate content such as pornography and suicide livestreams.
  • Platforms can expose young people to high levels of targeted advertising and marketing, with critics highlighting the inherent tension between advertising-based models that optimize for viewer engagement and content safety.
  • Online gaming sites and their connected chat rooms for players have in some instances become predatory grooming grounds for child abuse.
  • Dating platforms for the LGBTQI communities are vulnerable to data hacking and surveillance, and require additional security protections for their members.

Right to Equality and Non-discrimination
The introduction of technological platforms for transactions was expected by many to reduce or remove the inherent bias that can negatively affect the way that humans approach and conduct transactions with others. However, high-profile studies and incidents have shown that discriminatory conduct has made its way into platform-based transactions and, in some cases, been exacerbated by platforms that institutionalize the discrimination.

  • In the rental housing market, various studies have identified landlords offering rooms for accommodation who refuse to host on the grounds of assumed ethnicity or gender identity. In Japan, real estate platforms that a) allow landlords to select “Foreigner accepted/not accepted” or b) do not remove such references by landlords can become connected to discrimination against non-Japanese people. In the US, a Harvard Business School study noted that “applications [to Airbnb] from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names.”
  • Similarly, photos and names were implicated in a 2016 study, which found that drivers for the ride-sharing platforms Uber and Lyft made Black clients wait longer before accepting their trip requests and were more likely to cancel on people with “Black-sounding” names.
  • Job advertisements on job search platforms may contain discriminatory content specifying, for example, desired age or gender in the job post. Laws regarding discrimination in employment vary, such that postings that violate the right to non-discrimination may be legal in some jurisdictions.

Risks to the Business
  • Regulatory and Legal Risks: Despite their vast reach, social media platforms have been described as “operat[ing] in a regulation-free zone” and as increasing their lobbying efforts to maintain that status. Concerns about impacts on people are leading, however, to calls for increased regulation, including from some platforms themselves, with debate as to the form that regulation should take.
    • Recent movements towards regulating platforms include the upcoming UK Online Harms Bill, which will set out strict guidelines governing the removal of illegal content and specific responsibilities with regard to children.
    • The EU Digital Services Act Package (the Digital Services and Digital Markets Acts), announced by the European Commission in December 2020, aims to ensure a safe, rights-respecting online space in Europe and a level playing field for technology innovation and competitiveness across the region, bolstered by substantial fines and penalties.
  • Reputation and Legal Risks: Online platforms linked to discriminatory practices or content have seen legal challenges, boycotts and widely disseminated online campaigns.

What the UN Guiding Principles Say

*For an explanation of how companies can be involved in human rights impacts, and their related responsibilities, see here.

A company operating an online platform can cause human rights harms when it takes or fails to take a decision that results in people being prevented from enjoying rights such as the right to privacy, right to information, freedom of expression or their right to be forgotten. Examples include where a platform filters out user content or closes user accounts erroneously, or when a major data breach occurs that violates user privacy.

Companies operating online platforms can also contribute to a range of human rights harms when the design and functionality of platforms facilitate or incentivize third parties to engage in harmful behavior. In this context, harms might be experienced by:

  • a user due to their own use or misuse of the platform.
  • a user because another actor has used, misused or abused the platform.
  • a third party due to how a user has used, misused or abused the platform.

Possible Contributions to the SDGs

Addressing impacts to people associated with this red flag indicator can positively contribute to a range of SDGs depending on the impact concerned, for example:

SDG 5: Achieve gender equality and empower all women and girls, in particular Target 5.1: “End all forms of discrimination against all women and girls everywhere.”

SDG 16: Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels, in particular Target 16.1: “Significantly reduce all forms of violence and related death rates everywhere”; Target 16.2: “End abuse, exploitation, trafficking and all forms of violence against and torture of children”; and Target 16.10: “Ensure public access to information and protect fundamental freedoms, in accordance with national legislation and international agreements.”

Taking Action

Due Diligence Lines of Inquiry
  • How do we identify, assess and address discriminatory or otherwise abusive behavior on our platforms? Have we engaged with potentially vulnerable groups to educate ourselves on how our processes can be improved to combat discriminatory or otherwise abusive content posted by other users?
  • Do we make clear to platform users that discriminatory or otherwise abusive behavior will not be tolerated? Have we incorporated this into user agreements? Do we have in place clear and detailed content moderation policies and processes to prevent the viral spread of discriminatory or otherwise abusive content?
  • Do we have counseling programs in place for employed content moderators who are regularly exposed to harmful, explicit or distressing online content?
  • What are we doing to educate our users on what kind of content will and will not be tolerated on our platform?
  • What systems are in place to ensure that discriminatory behavior or exploitative, non-consensual or otherwise abusive content or interactions are flagged and managed (e.g. removed or otherwise dealt with)?
  • What systems are in place to ensure that ads tied to crimes such as sexual exploitation, including of children, are prevented and dealt with, including through collaboration with the relevant authorities?
  • What measures do we take to ensure only age-appropriate content is served to our young users?
  • How do we track the effectiveness of our efforts to combat discrimination or other human rights impacts associated with our platform? What are the tests and metrics used?
  • Do we provide or participate in effective grievance mechanisms that are accessible to individuals and communities at risk of discrimination through our platforms?
  • Do we ensure transparency of our processes, specifically with regard to making user data available and to content removal?

Mitigation Examples

*Mitigation examples are current or historical examples provided for reference; their inclusion offers no insight into their relative maturity or effectiveness.

Online Platforms:

  • In the run-up to the 2020 US elections, Facebook announced a range of steps it was taking to protect the integrity of the elections, including removing misinformation and violence-inciting posts, creating a Voting Information Center, developing a new hate speech policy, and imposing political advertising blackout periods the week before and after the election.
  • Social media companies have been developing stronger moderation systems to flag, escalate and make decisions about discriminatory or otherwise abusive behavior (e.g. employing monitoring staff who are trained on the local context; convening groups of experts to monitor important topics, especially where hate speech or fake news can lead to serious harm); a minimal sketch of such a flag-and-escalate workflow appears after this list. For example, Facebook has announced the use of AI to limit the spread of hate speech and improve the speed of its removal and, with others including Twitter, has joined the global pledge to fight hate speech online.
    • Content moderation: monitoring and removing content is, in principle, a viable risk mitigation strategy, and many social media companies employ moderators to manage the related risks to people. However, a number of additional risks to people are inherent to this work: (1) privacy risks related to having one’s content, personal information and private interactions monitored; (2) censorship risks if companies make inappropriate or incorrect removal decisions; and (3) risks to the mental health of the content moderators who are regularly exposed to harmful, toxic and violent content.
  • Facebook and Twitter have created lead roles for human rights experts, and Facebook has reportedly commenced “making sure that people with human rights training are in the meetings where executives sign off on new product features.” Facebook has also created an Independent Oversight Board to take final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram. The Board considers content referred to it by both users and Facebook. Members contract directly with the Oversight Board, are not Facebook employees and cannot be removed by Facebook.
  • Online recruitment companies, such as LinkedIn, use a “multitude of tools and systems to proactively monitor content and identify activity that may be in violation of [their] policies,” deploying human reviewers where users identify and report discriminatory content in job postings.
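The “flag, escalate and decide” workflow described above can be pictured as a simple triage function. The sketch below, in Python, is a minimal illustration only: the names (Post, harm_score, triage), thresholds and heuristics are all assumptions made for this example, not any platform’s actual system. Real pipelines combine machine-learning classifiers, user reports, regional expertise and trained human reviewers, together with appeal processes.

```python
# Minimal sketch of a flag-and-escalate moderation triage flow.
# All names, thresholds and heuristics are illustrative assumptions,
# not any platform's actual system.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate_to_human_review"
    REMOVE = "remove_and_notify"


@dataclass
class Post:
    post_id: str
    text: str
    user_reports: int = 0  # number of times users have flagged this post


def harm_score(post: Post) -> float:
    """Stand-in for an ML classifier estimating the likelihood that a
    post violates content policy (hate speech, abuse, etc.)."""
    # Toy heuristic for this sketch only.
    return 0.9 if "hateful-example" in post.text else 0.1


def triage(post: Post,
           auto_remove_at: float = 0.95,
           escalate_at: float = 0.60) -> Action:
    """Route a post to removal, human review or publication."""
    score = harm_score(post)
    if score >= auto_remove_at:
        return Action.REMOVE  # high-confidence violation: act immediately
    # Repeated user reports route borderline content to trained human
    # reviewers even when the classifier alone would not escalate it.
    if score >= escalate_at or post.user_reports >= 3:
        return Action.ESCALATE
    return Action.ALLOW


if __name__ == "__main__":
    print(triage(Post("p1", "an ordinary post")))                  # ALLOW
    print(triage(Post("p2", "an ordinary post", user_reports=4)))  # ESCALATE
    print(triage(Post("p3", "hateful-example content")))           # ESCALATE
```

The design choice worth noting is the middle band: rather than a single remove/allow threshold, ambiguous content is deliberately routed to human reviewers with local-context training, which is also where the privacy, censorship and moderator-wellbeing risks discussed above concentrate.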

Online Marketplaces: