Red Flag 5. Algorithmic decision-making that can result in discrimination

RED FLAG # 5

Algorithmic decision-making to profile, and make predictions about, people in ways that can result in discrimination or other human rights harms.

For Example
  • Banks, insurance firms and mortgage companies using automated decision-making that results in individuals being declined credit based on their age or race
  • Recruitment companies using algorithmic systems to help employers make decisions about candidates in ways that result in certain groups such as women and ethnic minorities being disproportionately removed from recruitment processes
  • Companies offering algorithmic solutions to law enforcement or the criminal justice system that disproportionately predict that young black men will commit crimes and undermine the right to equal treatment before the law
  • Social media companies selling profiling and targeting services that enable political campaigners to spread misinformation about opponents or election dates in ways that undermine the ability of individuals to participate in political processes without interference
Higher-Risk Sectors
  • Companies that offer or make use of targeted advertising such as social media, search, websites, blogs, as well as advertisers and their media agencies
  • Consumer finance and credit
  • Health insurance and healthcare
  • Retailers when targeting customers with product promotions
  • Recruitment industry including online portals and software providers
  • IT and technology companies selling algorithmic solutions to government agencies including healthcare and the criminal justice system
  • Political consulting firms
  • Data brokers selling data analytics as a service
Questions for Leaders
  • Do we have evidence that algorithmic profiling delivers notably greater benefits than more traditional tools for decision-making? Have we done an expert review of whether it may lead to us excluding certain groups?
  • Do we have in place the necessary technical know-how and oversight to:
    • Design, build and deploy algorithmic tools in ways that minimize discriminatory and other risks?
    • Responsibly evaluate, procure and use algorithmic tools?
  • If challenged, are we prepared to explain the decisions we make using these tools? Can we evidence that we are not negatively impacting people’s right to non-discrimination, privacy and other rights?


Understanding Risks and Opportunities

Risks to People

Right to non-discrimination and associated impacts on economic, social and cultural rights, such as housing, employment opportunities, livelihoods and healthcare. The use of algorithms to automate decision-making in industries as diverse as online advertising, recruitment, healthcare, retail, and consumer finance is rarely – if ever – intended to undermine individuals’ right to non-discrimination. In fact, these tools have the potential to reduce or remove human bias from decision-making. Nonetheless, the opposite can also be true. Examples include:

  • Social media, search and websites selling targeted advertising where:
    • Landlords have been enabled to exclude users based on race, age or gender. This has occurred when tools allowed agents to explicitly exclude certain groups from seeing housing ads. It can also happen in more subtle ways when companies allow targeting based on categories, such as age, marital status, and ZIP code, that are de facto proxies for certain groups. A series of court cases have led to many companies, including Facebook, committing to change their policies.
    • Ads for jobs placed on search platforms result in higher-paying jobs being shown to more men than women, as in a 2015 case involving Google, reported in the Washington Post. Google has since updated its ad targeting policies to address these and similar issues.
    • Elderly populations have been targeted with fraudulent products or services to trick them out of cash or savings, ranging from anti-ageing products to funeral insurance and reverse mortgages. In one case, retired, politically conservative individuals in the United States were tricked into using much of their retirement savings to buy marked up gold and silver coins to “protect their money from the deep state.” Even though this broke the company’s rules, Facebook showed ads supporting this scheme more than 45 million times over a 21-month period.
  • Discrimination in credit and insurance decision-making, for example:
    • Where loan providers rely on algorithms to analyze creditworthiness. In 2020, a report from the US-based Student Borrower Protection Center found that two lending institutions were effectively raising the cost of credit for students at academic institutions serving predominantly Hispanic and Black students.
    • Where insurance companies use algorithms to set the price of cover. 2018 reports alleged that UK car insurance firms were using algorithms that quoted higher premiums to people with non-Western names. The International Association of Insurance Providers published a paper cautioning the industry about these risks.
    • Where individuals’ credit limits are influenced by their social connections. Some companies are requesting mobile phone data and social media records in order to make judgements about creditworthiness. Where individuals do not have a credit history, this can be one way to increase financial inclusion. But it also risks bringing down minorities’ scores if, for example, an individual has friends and family members who have not paid past debts.
  • Recruitment industry tools that discriminate. The recruitment industry increasingly integrates automated decision-making as part of its value proposition to employers. In this context, discrimination can occur in a range of ways that have been well highlighted by researchers. High profile examples have included companies offering tools that:
    • Examine social media timelines and online postings about candidates with the risk that data which should not legally or ethically exclude an individual from a job – such as political opinion, sexual orientation or having family members convicted of a crime – ends up doing so.
    • Use Natural Language Processing to screen out candidate resumes that don’t fit an employer’s prior hiring patterns, which can perpetuate racial, gender and other discrimination.
    • Allow employers conducting video interviews to grade verbal responses, tone, and facial expressions against high-performing employees, potentially reinforcing biases and performing less accurately on non-white faces.
  • Discrimination in healthcare. Healthcare professionals are increasingly looking to leverage the power of artificial intelligence to achieve breakthroughs in disease detection, diagnosis and patient care plans. The WHO has begun to flag associated ethical risks. In one case, an algorithmic tool sold to hospitals and insurers to predict health care needs was found to underestimate the needs of Black patients.

Impacts on Civil and Political Rights including the right to equality before the law, freedom from arbitrary arrest, freedom of assembly, the right to information, and political participation. For example:

  • Predictive Policing: In 2019, human rights organizations, journalists and academics reported that police departments in the United States and the United Kingdom were piloting private sector tools to predict crime as a means to allocate resources, with discriminatory effects based on race, sexuality and age.
  • Predicting Recidivism Rates in Criminal Justice: In 2016, ProPublica assessed a commercial tool developed by U.S. company Northpointe to predict the likelihood of a criminal re-offending. Among other things, it found that “black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk.”
  • Facial Recognition: The proposition of facial recognition tools is to enable users to identify individuals by comparing their facial characteristics against a database of images. Users – such as law enforcement agencies, airports, border control and private security companies – can then act on matches with individuals who have committed an offence or whom they deem to be a threat. Concerns about these tools include the risk of false positives and unfair detention (especially where they have proven to be less accurate on non-white and non-male faces), and chilling effects on freedom of assembly.
  • Political Campaigning and Disinformation: Social media companies that generate revenue by selling targeted advertising to political campaigners have come under intense scrutiny from civil society organizations making the case that this has threatened democratic processes. Of particular concern have been examples in which voters have been targeted by foreign parties with disinformation about voting dates and processes with the aim of suppressing turnout among some voters. Equally concerning, and spotlighted by the infamous Cambridge Analytica scandal, are cases in which political lobby or consulting firms sell micro-targeting strategies that use disinformation as a service to political incumbents or opposition parties.

Impacts on the Right to Effective Remedy: Whether algorithmic profiling and predictions amount to State violations, or a business abuse of human rights, the nature of the tools described above can undermine the right to an effective remedy for violations of human rights, which is a fundamental principle of international human rights law. In her 2020 report, the UN Special Rapporteur on contemporary forms of racism, racial discrimination and xenophobia explains that, “In many cases, the data, codes and systems responsible for discriminatory and related outcomes are complex and shielded from scrutiny, including by contract and intellectual property laws. In some contexts, not even computer programmers may themselves be able to explain the way that their algorithmic systems function. This “black box” effect makes it difficult for affected groups to overcome steep evidentiary burdens of proof typically required to prove discrimination through legal proceedings, assuming that court processes are even available in the first place.”

Privacy Impacts: Where business models depend on algorithmic profiling and predictions about individuals, this can create or compound risks to the right to privacy.

Risks to the Business
  • Rapidly Evolving Regulatory Risks: The development, sale and use of algorithmic profiling and decision-making tools is gaining increased attention from regulators. In the United States, there have been proposals for a federal Algorithmic Accountability Act, and local lawmakers have already passed laws (New York City in 2017) or are debating them (for example, in Washington State). The most notable developments have taken place in the European Union.
    • The EU’s General Data Protection Regulation addresses the right of individuals not to be subject to a decision based solely on automated processing, including profiling, where that decision has legal or similarly significant effects on them. In one example, a Swedish financial services company was ordered to correct its credit risk algorithm, which was illegally using age as a parameter to determine credit. The EU Competition Commissioner has announced plans to further regulate this practice.
  • Existing Legal Risk: Where algorithms are being designed and used to make traditional decisions in novel ways concerning employment, advertising and credit, existing laws apply. For example:
    • The use of AI in hiring in the United States may lead to companies failing to comply with existing laws such as the Employee Polygraph Protection Act or the Genetic Information Nondiscrimination Act.
    • The American Civil Liberties Union brought a series of cases against Facebook, and the U.S. Department of Housing and Urban Development also filed charges, alleging that the company’s ad-targeting systems violated US equal employment opportunity laws and the US Fair Housing Act. Facebook settled these cases and has changed its policies and systems.
    • In the UK, law firms and academic institutions have warned that financial institutions that make use of algorithms can risk non-compliance with consumer lending laws.
  • Reputational Risk, Including with Employees: The sharp increase in civil society scrutiny of algorithmic tools means that companies developing or using such tools may experience reduced trust from consumers, employees and citizens. In 2019, 250 Facebook staff members published a letter criticizing the company’s refusal to fact-check political ads and tied the issue to ad targeting.
  • Lost Investment Pre-Launch: Where companies using algorithms are found to discriminate, or are deemed to be making decisions in ways that lack a social license, they may have to choose not to take these products to market. In 2018, one company halted the launch of a product designed to vet people for domestic services using “advanced artificial intelligence” to analyze their personalities based on social media posts, after facing a public backlash.
What the UN Guiding Principles Say

*For an explanation of how companies can be involved in human rights impacts, and their related responsibilities, see here.

Companies that make decisions and pursue actions based on algorithmic profiling and predictions can cause adverse impacts on human rights. An example would be a bank denying credit based on a tool that makes discriminatory recommendations.

Companies whose value proposition is to sell the capability to profile and predict to public or private third parties can contribute to adverse human rights impacts that those actors cause where their tools embed discriminatory biases. Contribution might arise due to the ways that customers are empowered to use these tools (such as by excluding certain groups) or may be more subtle such as when an algorithmic system has bias built into the data set.

An added complexity is that a single algorithmic system may integrate a number of inputs from different actors. For example, a data broker might provide training data; an AI research firm might license an algorithm and a developer might design the customer interface. Depending on the specific circumstances, each of these companies could contribute to adverse impacts.

In situations where companies have taken reasonable steps to prevent their tools contributing to discrimination and other human rights harms, they may nevertheless be linked to adverse impacts that business or government customers are causing.

Possible Contributions to the SDGs

Algorithmic systems may be used to advance a number of SDGs such as those listed below. Addressing impacts to people associated with this red flag can contribute to ensuring that this is done in ways that do not simultaneously impact people’s rights to non-discrimination, privacy and physical and mental health and well-being.

SDG 10: Reduce Inequality within and Among Countries.

SDG 3: Healthy Lives and Well-Being for all, including by tackling disruptions to progress such as from the COVID-19 global pandemic.

SDG 5.B: Promote Empowerment of Women Through Technology.

SDG 11: Make cities and human settlements inclusive, safe, resilient and sustainable.

The UN Secretary-General’s Roadmap for Digital Cooperation is an important resource to guide “all stakeholders to play a role in advancing a safer, more equitable digital world” even as technological solutions are used to achieve the SDGs.

Taking Action

Due Diligence Lines of Inquiry

Unless otherwise indicated, the following questions draw heavily on Ranking Digital Rights’ Best Practices: Algorithms, Machine Learning and Automated Decision-Making, and the World Economic Forum’s White Paper on How to Prevent Discriminatory Outcomes in Machine Learning.

  • Do we have a clear policy that describes how the company identifies and manages human rights risks related to the algorithmic system(s) we use?
  • Do we inform customers or users about the existence of algorithmic profiling, describe how this works, explain the variables that influence the algorithm, and explain how users and customers may be impacted?
  • Have we mapped and understood if any particular groups may be at an advantage or disadvantage in the context in which the system is being deployed? Do we have a method for checking if the output from an algorithm is decorrelated from protected or sensitive features? (A minimal example of such a check is sketched after this list.)
  • Do we seek a diversity of views about the potential risks of proposed models, especially from specific populations affected by the outcomes of algorithmic systems we use?
  • Have we established robust diversity and inclusion policies at every level of the company, and notably in teams that develop algorithms, machine learning models, or other automated decision-making tools?
  • Have we consulted with all the relevant domain experts whose interdisciplinary insights allow us to understand potential sources of bias or unfairness, and to design ways to counteract them?
  • Do we assess whether any uses or use-cases of our algorithmic tools pose risks to human rights? Where we identify these, are we:
    • Creating clear and enforceable terms of use?
    • Engaging enterprise and government customers/users to educate and train them about how to use the tools without increasing human rights risks?
    • Putting systems in place to monitor and review how customers are using our tools?
    • Clear about the actions we will take if we discover that our tools are being used in ways that lead to, or increase the likelihood of, adverse human rights impacts?
  • Do we apply “rigorous pre-release trials to ensure that algorithmic systems will not amplify biases and error due to any issues with the training data, algorithms, or other elements of system design?”
  • Have we outlined an ongoing system for evaluating fairness throughout the life cycle of our product? Do we have an escalation/emergency procedure to correct unforeseen cases of unfairness when we uncover them?
  • Are we clearly committed to only buying and/or using training datasets that comprise data whose data subjects have provided informed consent to having their data included in datasets used for this purpose?
  • Are we making the dataset(s) used to train machine learning models, terms of use, and APIs available to allow third parties to review the behavior of our system?
  • What reporting, grievance or redress processes and recourse do we have in place? Do we have a process in place to make necessary fixes to the design of the system based on reported issues or concerns?
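
A minimal sketch of the kind of decorrelation check referred to above (and of a simple pre-release trial) is shown below: it compares selection rates across a protected attribute and flags a low disparate-impact ratio. The column names, toy data and the 0.8 threshold (the “four-fifths rule” sometimes used as a screening heuristic) are illustrative assumptions, not requirements drawn from the guidance cited in this section.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, outcome: str, group: str) -> float:
    """Ratio of the lowest to the highest group selection rate (closer to 1 is more equal)."""
    rates = df.groupby(group)[outcome].mean()
    return rates.min() / rates.max()

# Toy example: automated loan decisions broken down by a hypothetical demographic group.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(decisions, outcome="approved", group="group")
if ratio < 0.8:  # the "four-fifths" screening heuristic; an assumption, not a legal test
    print(f"Potential adverse impact: selection-rate ratio = {ratio:.2f}")
else:
    print(f"No adverse impact flagged at this threshold: ratio = {ratio:.2f}")
```

A check like this is only a screen: a ratio near 1 does not prove the absence of discrimination, and a low ratio should trigger expert review rather than an automatic conclusion.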
Mitigation Examples

Mitigation examples are current or historical examples for reference, but do not offer insight into their relative maturity or effectiveness. Moreover, some examples listed below are proposals for mitigating actions that have come from data science and engineering research institutes.

  • Principles, Governance and Oversight: A number of companies in the technology industry and beyond have committed to some form of AI fairness principles as well as having ethics officers and cross-functional committees that look at these issues. One example is Microsoft’s AI, Ethics and Effects in Engineering and Research (Aether) Committee, which operates alongside the company’s Office of Responsible AI (ORA). Microsoft states that its governance arrangements are designed to set “company-wide rules for enacting responsible AI, as well as defining roles and responsibilities for teams involved in this effort” and that “senior leadership relies on Aether to make recommendations on responsible AI issues, technologies, processes, and best practices.”
  • Tech Tools to Detect Bias: Some technology companies – including IBM and Microsoft, as well as Google (with its What-If Tool) and Facebook (with Fairness Flow) – have developed products aimed at detecting bias in algorithmic decision-making. Such efforts can be a way to root out bias from companies’ own profiling and predictive models, as well as a way for “big tech” to mitigate the risk that third parties develop, design and deploy discriminatory algorithms using these companies’ platforms or computing power. With a similar purpose, Aequitas is “an open source bias audit toolkit developed at the University of Chicago, [that] can be used to audit the predictions of machine learning based risk assessment tools to understand different types of biases, and make informed decisions about developing and deploying such systems.”
  • Debiasing Discrimination in Lending: Start-up Zest AI has created a feature that “uses a technique called adversarial debiasing to correct discrimination in lending models… One model predicts a borrower’s ability to pay, while the second predicts protected information, such as the race or gender of the borrower. The dueling models learn from each other through dozens of adjustments until the discrimination predictor is stumped — the race or gender variable bears no meaningful relationship to the applicant’s credit score.” (An illustrative sketch of this general technique appears after this list.)
  • Datasheets for Datasets: Experts at Microsoft Research have proposed labelling the data sets used to train algorithms, similar to nutrition labels on food. The intent is to mitigate the discriminatory outcomes that occur when biased data sets are used to train algorithmic models. The idea is that datasheets will “allow users to understand the strengths and limitations of the data that they’re using and guard against issues such as bias and overfitting.”
  • Changes to Targeted Advertising Policies: As far back as 2015, Facebook and Google banned payday loan companies from advertising on their platforms. Since 2019, Twitter, Google and Facebook have made changes to the policies and systems that allow customers to target adverts; different changes apply to different categories of advert, including housing and job opportunities. Of particular interest from a human rights perspective were the changes that Twitter and Google made to their political advertising policies in the run-up to the 2020 US presidential election. Twitter banned political ads outright in October 2019, while Google limited the targeting of political advertising to certain broad categories such as sex, gender and postcode (as opposed to micro-targeting). The exact impact of these moves, including from a human rights perspective, is still being explored.
  • LinkedIn Fairness Toolkit: LinkedIn has developed LiFT, an open-source project that detects, measures, and mitigates biases in training data sets and algorithms. The company has been using the tool itself to “compute the fairness metrics of training datasets on its platforms, such as the Job Search model.”
  • Ideal’s Guidance and Tool to Reduce Bias: The recruitment services firm Ideal published Workplace Diversity Through Recruitment: A Step-By-Step Guide and has a tool that customers can use to test and monitor for adverse impacts in its candidate grading system. Customers who collect demographic data during their hiring process can ask Ideal to instruct its algorithms to both ignore those demographics and test for and remove adverse impacts based on, among other things, compliance with the US Department of Labor’s affirmative action program, Canada’s equity programs for designated groups, and the European Union’s hiring discrimination laws.
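
The adversarial debiasing approach described in the lending example above can be illustrated with a minimal sketch. The code below uses synthetic data and PyTorch to show the general technique (not Zest AI’s actual implementation): a small scoring model is trained alongside an adversary that tries to recover a protected attribute from the score, and the scoring model is penalized whenever the adversary succeeds.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 4 features, a binary repayment label y, and a binary protected attribute z.
n = 512
z = torch.randint(0, 2, (n, 1)).float()
x = torch.randn(n, 4) + z                      # features correlated with the protected attribute
y = (x[:, :1] + 0.5 * torch.randn(n, 1) > 0.5).float()

predictor = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))  # scores repayment ability
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))  # guesses z from the score

bce = nn.BCEWithLogitsLoss()
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
lam = 1.0  # weight on the "confuse the adversary" term

for step in range(500):
    # 1) Train the adversary to recover the protected attribute from the (detached) score.
    score = predictor(x).detach()
    loss_a = bce(adversary(score), z)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # 2) Train the predictor to predict repayment while making the adversary's job harder.
    score = predictor(x)
    loss_p = bce(score, y) - lam * bce(adversary(score), z)
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

# When the adversary's loss sits near chance level (~0.69 for balanced binary BCE),
# the score carries little recoverable information about the protected attribute.
print(f"final adversary loss: {bce(adversary(predictor(x).detach()), z).item():.3f}")
```

The weight lam controls the trade-off: larger values push the score further toward independence from the protected attribute, usually at some cost to predictive accuracy.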
Other Tools and Resources

Citation of research papers and other resources does not constitute an endorsement by Shift of their conclusions.

Red Flag 2. High-speed delivery that places pressure on warehouse workers and logistics workers in “the last mile”

RED FLAG # 2

Offering high-speed delivery such that it places pressure on warehouse workers and logistics workers in the “last mile”.

For Example
  • Retailers offering free or low-priced express delivery to consumers in ways that place unreasonable time or wage pressure on logistics workers
  • Logistics providers that rely on low wages and precarious labor
Higher-Risk Sectors
  • Online retailers
  • Logistics Providers
Questions for Leaders
  • How does the company understand what the true costs of delivery are once decent working conditions are factored in, and where those costs are reflected in the company’s pricing model?
  • (Retailers) Does the company know whether it incentivizes consumers to ask for extremely short delivery times, without informing them of the potential decent work implications?
  • (Retailers) How does the company know whether its model leads in practice to logistics providers placing unreasonable time or wage pressures on individual workers?


Understanding Risks and Opportunities

Risks to People
  • Risks to people arise when the business models of retailers or logistics companies do not appropriately factor in the human cost of delivery. Labor cost is the most significant cost within the “last mile” (the movement of goods from a transportation hub to the final delivery destination), and is the process step on which logistics providers often focus to gain competitive advantage (McKinsey & Company 2016).
  • In many cases, consumers do not wish to pay for delivery, but at the same time assume that retailers are aware of, and factor in, the human cost of delivery; retailers are often in fact unaware, and pass this cost on to the logistics providers. In turn, logistics providers – incentivized to compete on price and speed of delivery – frequently pass the human cost of delivery onto workers.
  • Pressure on Wages and Conditions: Where retailers and logistics companies compete on delivery costs, “final mile” workers, as well as workers in warehouses, have faced wages below the living wage, charges for missing work and denial of holiday and sick pay (Right to enjoy just and favorable conditions of work). The gig economy has been embraced by the logistics sector; human rights impacts can arise where the gig economy intersects with vulnerable workers, particularly where companies classify workers as self-employed to avoid employer responsibilities and costs, and/or where workers are denied the right to bargain collectively (Right to enjoy just and favorable conditions of work, including equal remuneration for work of equal value; Right to an adequate standard of living (through decent remuneration); Right to join trade unions and the right to strike).
  • Risk of Injuries and to Health: Drivers and cycle couriers can face increased risk of crashes due to pressures to deliver more goods, faster, with resulting risks to their health and lives. The effect is exacerbated where retailers engaging independent delivery drivers decline financial responsibility associated with crashes, despite exercising control over factors such as destinations, deadlines and routes. In 2018 the Guardian reported that a UK driver missed three medical specialist appointments to avoid GBP150 daily penalties for missing work, and later collapsed at the wheel and died. (Right to life; Right to health; Right to an adequate standard of living).
  • Pressure on Family Life: As retailers offer more and faster delivery options, including late night delivery, workers can see a reduction in time spent with family, and often do not see wage conditions reflecting this reality. (Right to family life).
  • Vulnerable Workers: Migrant workers can be particularly vulnerable in the logistics sector, especially when working for subcontractors who enter the retail value chain to meet seasonal demands (Right to enjoy favorable conditions of work). Impacts are exacerbated where migrant workers pay fees to recruitment agencies that place them in situations of unsustainable debt, tying them to a particular agency or job. (Right not to be subjected to forced labor).
  • Job Impacts from Automation: New technologies and increased automation impacting business models in the delivery sector can lead to large scale displacement of workers. Risk of human rights impacts can arise where this is not executed with planning or support for upskilling or redeployment of workers. (See Red Flag 21 for further information on large scale or rapid automation).
Risks to the Business
  • Business Continuity Risks: Same-day and instant delivery is likely to form 15% of the market by 2020 (McKinsey & Company 2016). As the percentage of retail companies’ revenue derived from online sales increases, retailers who have not resolved risks in their business model associated with this red flag face considerable disruption if this leads to work stoppages amongst logistics workers or consumer boycotts. Investing in capacity building by and with logistics providers with respect to worker rights makes business sense.
  • Reputational Risks: As logistics workers face overwork and poor pay and conditions to meet cost and speed requirements, risks of impacts that create reputational and ethical challenges for both retailers and logistics companies increase. Investigative articles examining the conditions of logistics workers, naming both logistics companies and retailers, have been increasing in the UK, Germany and other markets. For example, in Germany it was reported that migrant workers from Eastern Europe who were recruited by agencies to deliver parcels for or in the supply chain of large retailers during the Christmas rush, were receiving below minimum wage and not enjoying decent conditions of work.
  • Operational Risks: The value chain may become more complex and higher risk from an operational perspective as logistics providers outsource to sub-contractors to meet unreasonable requirements of price or speed.
  • Legal Risks: In some cases, logistics companies relying on contract labor are being determined by the courts to in fact have an employment relationship with logistics workers: in June 2018, an employment tribunal in the UK found 65 Hermes Parcelnet couriers to be workers, and not self-employed contractors. Reasons included the fact that workers had limited scope to negotiate their pay and terms of contract.
What the UN Guiding Principles Say

*For an explanation of how companies can be involved in human rights impacts, and their related responsibilities, see here.

Retailers:

  • Where a retailer’s own business model facilitates or incentivizes logistics providers to impact the human rights of warehouse and/or logistics workers, it is considered to contribute to the impacts. Where the retailer takes serious steps to avoid facilitating or incentivizing these outcomes, but impacts nevertheless occur in connection with delivery of the company’s products, the retailer may be considered not to have contributed, but to be directly linked to the impact.

Logistics Providers:

  • Where logistics providers pay workers below a living wage and deny decent working conditions they cause human rights impacts.
  • Where impacts occur at the level of a logistics subcontractor, the logistics provider will at a minimum be directly linked to the impacts; it will contribute to impacts if its business model or practices incentivized or facilitated them.
Possible Contribution to the SDGs

Addressing impacts to people associated with this red flag can contribute to, inter alia:

SDG 8: Decent Work and Economic Growth, in particular: Target 8.8 on protecting, “labor rights and promot[ing] safe and secure working environments for all workers, including migrant workers, in particular women migrants, and those in precarious employment”; and Target 8.7 on eradication of forced labor.

Taking Action

Due Diligence Lines of Inquiry

Retailers:

  • What processes do we have in place for identifying the presence and extent of risks to the human rights of workers delivering our products resulting from the demands placed on them? Do we set expectations for logistics providers with regards to working conditions?
  • Do we engage with logistics providers on the issue of working conditions for workers delivering our products and seek genuine feedback on whether and how delivery terms and expectations may create unreasonable pressures?
  • Do we reward logistics providers for measures that ensure the protection of workers’ rights, or do we, in practice, consider only speed and price?
  • Do we engage at the industry level to put in place common measures that avoid retailers competing on delivery terms that place pressure on the human rights of logistics workers?

Logistics Providers:

  • How do we assess the true cost of delivery, once decent working conditions are factored in, and how can we integrate this into prices?
  • Do we seek genuine feedback from our workers about their experience providing our services, through channels they can trust?
  • Do we have in place policies and processes to provide decent working conditions? How do we embed these and track our progress?
  • Do we have a channel in place to discuss with retailers the human costs of delivery, and retailer pressures that impact workers/working conditions? Can we create one either individually or in concert with industry peers?
Mitigation Examples

*Mitigation examples are current or historical examples for reference, but do not offer insight into their relative maturity or effectiveness.

Logistics Providers:

  • Engagement Beyond Employees: Following the death of a driver, DPD conducted a strategic operational review, including a consultation with UK drivers, obtained independent, external advice and rolled out a driver code in 2018. DPD UK introduced new self-employed worker contracts aimed at providing “all drivers the flexibility to move between different employment statuses including employed, Owner Driver Workers and Owner Driver Franchisees.” (See further DPD CSR Report 2018, p. 19).
  • Royal Mail: Engages with the government on improving labor standards across the industry.
  • Ombudsperson System: Hermes Parcelnet has adopted a Code of Conduct and developed a Social Compliance Model to underpin the embedding of its Code. As part of this, the company created a Panel to hear courier complaints and appointed a Business and Human Rights Ombudsperson who provides a) recommendations to the Panel on remedy for human rights-related complaints and b) suggestions to the audit committee for strengthening the company’s policies and processes.

Retailers:

There is limited information on good practices at the retailer level with respect to this red flag, particularly with respect to addressing the effect of retailers’ own practices.

  • UK Logistics Initiative: A small group of retailers joined together to address working conditions at their final mile delivery logistics partners. This has involved using collective leverage to open up conversations between logistics providers and a credible independent third party, followed by a workshop with retailers to discuss the findings of those conversations, develop a common understanding of the risks to people, and take action individually and, where appropriate, collectively. This is an informal group, convened by Shift in partnership with the British Retail Consortium.
  • Capacity Building: Marks and Spencer conducts risk assessments and audits of logistics providers. The company also brought logistics providers (amongst other suppliers) to its Modern Slavery and Human Rights Conference, offering capacity building with respect to modern slavery, including a toolkit for suppliers, and a reminder of M&S expectations (See M&S 2017 Human Rights Report, p.21).
Alternative Models

Logistics Providers: In a groundbreaking step for the gig economy, logistics provider Hermes Parcelnet engaged in a recognition deal with GMB Union pursuant to which self-employed workers can choose to become “self-employed plus” and receive benefits such as holiday pay, negotiated pay rates and union representation. In exchange, workers agreed to follow delivery routes specified by Hermes Parcelnet rather than delivering parcels in any order. (See IHRB 2019 for more information).

Other Tools and Resources

Case Example(s):

  • In 2018 the Guardian reported that a UK driver missed three medical specialist appointments to avoid GBP150 daily penalties for missing work, and later collapsed at the wheel and died.
  • In Germany it was reported that migrant workers from Eastern Europe who were recruited by agencies to deliver parcels for or in the supply chain of large retailers during the Christmas rush, were receiving below minimum wage and not enjoying decent conditions of work.
  • Shift partnered with the Behavioral Science Group at Warwick Business School to see if behavioral science could suggest some ways to “nudge” consumers towards longer delivery windows that could reduce pressures on couriers. See, Adding Human Rights to the Shopping Cart (2020).

Citation of research papers and other resources does not constitute an endorsement by Shift of their conclusions.

Beyond Pride: The Rights of LGBTI People and the Corporate Responsibility to Respect

In every region of the world, lesbian, gay, bisexual, transgender and intersex (LGBTI) people face some degree of violence, persecution or discrimination:

From what is said around a family dinner table to who gets to take part in a sporting competition. From who can use a bathroom to who is sentenced in a courtroom, or who is subjected to forced sterilization or harmful medical procedures. And from who gets an apartment, a job offer or a promotion to who is imprisoned, flogged or sentenced to death. The contexts of such violence and discrimination are as varied as the people within the LGBTI acronym, and they pose a wide range of human rights risks for companies.

Companies have a responsibility – under the UN Guiding Principles on Business and Human Rights (UNGPs) – to understand and address how their actions, decisions, omissions and business relationships can generate negative impacts on people. For LGBTI people, that means considering how companies may be adding to the risks they already face because of their sexual orientation, gender identity or expression, or sex characteristics (SOGIESC).

Use this resource to:

  • Learn how companies’ activities and relationships can exacerbate the risks faced by LGBTI people.
  • Understand why risks can vary depending on the geographic and cultural context, and what that means for global companies.
  • Explore how companies can understand the particular vulnerabilities experienced by LGBTI people in order to better identify risks and prioritize action.
  • Review what companies are doing to address risks to the rights of LGBTI people, and where the gaps in current practice lie.
  • Consider meaningful ways in which companies can engage with LGBTI stakeholders and use their leverage with peers, partners, suppliers, governments and others.

Audio | Getting Contractual Provisions on Human Rights, Right


As the mandatory due diligence debate heats up in Europe and we look ahead to more countries turning the responsibility to respect into a corporate duty, companies will increasingly need to focus on setting clear expectations of their business partners. Putting the right provisions into contracts is going to become even more important.

In this conversation, Shift’s Rachel Davis and John F. Sherman III discuss a recent project of a Working Group of the Business Law Section of the American Bar Association (ABA) that is trying to get ahead of the trend of new legislation and put companies on the right path in how they approach the role of contractual requirements.

You may also read more about the Model Clauses in a viewpoint by John Sherman, here.

This episode’s speakers

RACHEL DAVIS

Rachel Davis is one of Shift’s co-founders and has led work at Shift over the last decade on standard-setting, human rights and sports, financial institutions, conflict and international law.

As Vice President, Rachel shapes our strategy and oversees a range of our collaborations with companies, governments, investors, civil society and other partners. Rachel leads Shift’s work to influence standard-setters of all kinds to integrate the UN Guiding Principles into the rules that govern business, including engaging with governments and the European Union on mandatory human rights due diligence.

JOHN F SHERMAN III

As Shift’s General Counsel and Senior Advisor, John F. Sherman III focuses on the role of corporate lawyers in the implementation of the Guiding Principles in their role as wise legal counselors.

John is an internationally recognized thought leader on this subject. He chairs the business and human rights working group of the International Bar Association. He writes frequently in professional and academic journals and is a sought-after speaker at legal conferences and workshops, advocating for lawyers’ role in ensuring companies do business with respect for human rights. John is a founder of the IBA CSR Committee and was its co-chair from 2008 to 2010.

Human Rights & Labor Principles: A Business Imperative

This High-Level dialogue was hosted by the UN Global Compact in the context of the 10th anniversary of the UN Guiding Principles on Business and Human Rights. It was moderated by Shift’s Vice President, Rachel Davis. Panelists included the author of the UN Guiding Principles, John Ruggie; UN High Commissioner for Human Rights Michelle Bachelet; the Secretary-General of the IOE, Roberto Suarez Santos; and the Chair of the UN Working Group on Business and Human Rights, Dante Pesce.

Using Leverage to Drive Better Outcomes for People

In March 2021, Shift held the first peer-learning session of its Financial Institutions Practitioners Circle, focusing on the topic of leverage. This resource captures the key takeaways of the session. 


The traditional approach of many banks and Export Credit Agencies (together “FIs”) has been to assess risk from a credit risk perspective and to make a binary decision about whether or not they will enter into a commercial relationship with a client. As such, too often those decisions have been made on the basis of risk appetite rather than considering the more complex task of risk management, engagement with the client and the application of firm sustainability expectations. More committed FIs are shifting towards an approach that emphasizes managing risks to affected stakeholders rather than a sole focus on managing potential reputational risks. In addition to setting human rights-related expectations of clients upfront, FIs now need to focus, for higher-risk sections of the portfolio, on scrutinizing the appropriateness of those expectations against intended outcomes, reviewing client adherence to them and evaluating their impact.

When FIs take this approach, we see greater alignment with the UN Guiding Principles’ focus on improving outcomes for people. Moreover it facilitates a move away from so-called “cut and run” approaches whereby the bank makes another binary decision to cut ties with clients amid reputational concerns without first attempting to use leverage. Due Diligence is a wheel after all: it doesn’t start and end at assessment. The bank has a responsibility to get to action: to use its influence (leverage) to seek to improve outcomes for adversely affected people, including, at a minimum, engagement with clients around risks. This also helps the bank to get to the “yes, and” approach to navigating higher-risk transactions, whereby the bank can more confidently take on clients or transactions that pose heightened social risk, if it is prepared to invest the resources necessary for leverage and it has a credible road map for where the client needs to get to in terms of maturity of approach and/or concrete Key Performance Indicators (KPIs). It goes without saying that an element of pragmatism needs to be brought to bear when looking to achieve this at a portfolio level. The prioritization of resources and focus at the assessment phase is particularly important for financial institutions given their challenge of scale. 

The FI’s responsibility to respect human rights includes the need to understand where it has leverage – in the multiple different forms of leverage available – and where it needs to build it; it means using this leverage to seek to prevent and address harm in order to justify continued engagement. Leading FIs are increasingly exploring and institutionalizing this process.

Here are our 6 key takeaways from our discussion about how to consider leverage for financial institutions from the perspective of the UNGPs, with practical steps that might help turn these insights into action. 

The “S” in ESG: Best Practices and Way Forward?

On July 1, 2021, Shift, Frank Bold and the Thomson Reuters Foundation hosted a discussion to explore what’s needed for companies to better measure and report on their social risks and impact. A transcription of Professor Ruggie’s keynote remarks is available here.


The event featured introductory remarks by Professor John Ruggie. Participants also included: Irit Tamir (Oxfam), Lauren Compere (Boston Common Asset Management), Julie Vallat (L’Oréal), Filip Gregor (Frank Bold), Tom Dodd (EU Commission), Giulia Corinaldi (TRF) and Caroline Rees (Shift).