Supreme Court Rules that LGBTQ Employees Deserve Workplace Protections, but More Progress Is Needed to Combat Unfairness and Disparity


Authors: Katelyn Ringrose (Christopher Wolf Diversity Law Fellow) and Dr. Sara Jordan (Policy Counsel, Artificial Intelligence and Ethics)

Today’s Supreme Court ruling in Bostock v. Clayton County—clarifying that Title VII of the Civil Rights Act bans employment discrimination on the basis of sexual orientation and gender identity—is a major victory in the fight for LGBTQ civil rights. Title VII established the Equal Employment Opportunity Commission (EEOC) and bans discrimination on the basis of sex, race, color, national origin, and religion by employers, schools, and trade unions involved in interstate commerce or doing business with the federal government. Today’s 6-3 ruling aligns with Obama-era protections, including a 2014 executive order extending Title VII protections to LGBTQ individuals working for federal contractors. 

In this post, we examine the impact of today’s decision, as well as (1) voluntary anti-discrimination efforts adopted by companies for activities not subject to federal protections; (2) helpful resources on the nexus of privacy, LGBTQ protections, and big data; and (3) the work FPF has done to identify and mitigate potential harms posed by automated decision-making. 

In Bostock, the Supreme Court determined that discrimination on the basis of sexual orientation or transgender status is a form of sex discrimination, holding: “Today, we must decide whether an employer can fire someone simply for being homosexual or transgender. The answer is clear. An employer who fires an individual for being homosexual or transgender fires that person for traits or actions it would not have questioned in members of a different sex. Sex plays a necessary and undisguisable role in the decision, exactly what Title VII forbids.” 

Bostock resolved the issue through analysis of three cases:

  • R.G. & G.R. Harris Funeral Homes Inc. v. Equal Employment Opportunity Commission, where Aimee Stephens worked as a funeral director at R.G. & G.R. Harris Funeral Homes. When Aimee informed the funeral home’s owner that she was transgender, the business owner fired her, saying it would be “unacceptable” for her to appear and behave as a woman. 
  • Altitude Express v. Zarda, where Donald Zarda, a skydiving instructor on Long Island, N.Y., was fired from his job because of his sexual orientation. 
  • Bostock v. Clayton County, where Gerald Lynn Bostock was fired from his job as a county child welfare services coordinator when his employer learned Gerald is gay. 

“Today is a great day for the LGBTQ community and LGBTQ workers across the nation. The United States Supreme Court decision could not have come at a better time given the current COVID-19 crisis and the protests taking place across the country. However, there still remains much work to be done, especially around the areas of data and surveillance tools. The well-documented potential for abuse and misuse of these tools by unregulated corporations as well as government and law enforcement agencies should give serious pause to anyone who values their privacy–especially members of communities like ours that have been historically marginalized and discriminated against,” says Carlos Gutierrez, Deputy Director & General Counsel of LGBT Tech. “Today’s decision will protect over 8 million LGBT workers from work discrimination based on their sexual orientation or gender identity. This is especially heartening given that 47% or 386,000 of LGBTQ health care workers, people on the frontlines of the COVID-19 battle, live in states that had no legal job discrimination protections.” 

We celebrate today’s win. However, it is now more critical than ever to address data-driven unfairness that remains legally permissible and harmful to the LGBTQ community. 

Bostock should also influence a range of anti-discrimination efforts. In recent years, many organizations have worked to combat discrimination even when their activities are not directly regulated by the Civil Rights Act. When implementing such anti-discrimination programs, organizations often look to the Act to identify protected classes and activities. Bostock provides clarity: organizations should include sexual orientation and gender identity in the list of protected classes even if their activities wouldn’t otherwise be regulated under Title VII. 

Anti-Discrimination Efforts //

Title VII of the Civil Rights Act has historically barred discrimination on the basis of sex, race, color, national origin, and religion; the Civil Rights Act, including Title VII, is the starting point for anti-discrimination compliance programs. Even companies that do not have direct obligations under the Act (including ad platforms) have used it to guide their anti-discrimination efforts (see the Network Advertising Initiative’s Code of Conduct). According to the Human Rights Campaign, the share of Fortune 100 companies that have publicly pledged to non-discrimination employment policies increased from 11% (gender identity) and 96% (sexual orientation) in 2003 to 97% and 98%, respectively, by 2018. 

We caution that simply declining to collect, or ignoring, sensitive information will not always ensure that discrimination is avoided. Even without explicit data, proxy information can reveal sensitive attributes. Furthermore, in order to assess whether protected classes are treated unfairly, it will sometimes be important to collect the very information that can identify discrimination. While sensitive data collection carries both benefits and risks, a lack of data available to researchers can mean that policymakers do not have the information necessary to understand disparities in enough depth to craft responsive policy solutions.

Helpful Resources // 

  • “Big Data” and the Risk of Employment Discrimination, by Allan King & Marko Mrkonich, grappling with the many ways employers use correlative methods of analyzing big data, and how those efforts are in tension with Title VII to the extent the correlations those methods discover overlap with protected employee characteristics.
  • Big Data for All: Privacy and User Control in the Age of Analytics, by Omer Tene and Jules Polonetsky, noting that inaccurate, manipulative, or discriminatory conclusions may be drawn from perfectly innocuous, accurate data—and like any interpretative process, algorithms are subject to error, inaccuracy, and bias.
  • The Real Cost of LGBT Discrimination, by the World Economic Forum, recognizing that discrimination doesn’t just harm individuals—but also families, companies, and entire countries. 
  • Challenges for Mitigating Bias in Algorithmic Hiring, by The Brookings Institution, recognizing that, in instances of disparate impact caused by automated decision-making, there are numerous bars to relief, including the fact that (1) plaintiffs may not have sufficient information to suspect or demonstrate disparate impact; (2) it is unclear whether predictive validity is sufficient to defend against a claim of disparate impact; and (3) many proposed solutions to mitigating disparities from screening decisions require knowledge of legally protected characteristics.
  • Lessons from Fair Lending Law for Fair Marketing and Big Data, by Peter Swire, explaining that fair lending laws provide guidance on how to approach discrimination that allegedly has an illegitimate, disparate impact on protected classes; furthermore, data can play an important role in assessing whether a disparate impact exists. 
  • Algorithmic Fairness, by the Software & Information Industry Association, explaining that automated decision making can be designed and used in ways that preserve fairness for all, but this will not happen automatically–getting those outcomes requires designing those features and using them in ways that preserve these values. 

Unfairness by Algorithm //

While discriminatory decisions made by humans are clearly regulated, the full range of potentially discriminatory decisions made by computers is not yet well understood. Yet algorithmic harms can be similarly pernicious, while being more difficult to identify and less amenable to redress through available legal remedies.

In a 2017 Future of Privacy Forum report, Unfairness by Algorithm: Distilling the Harms of Automated Decision Making, we identified four types of harms—loss of opportunity, economic loss, social detriment, and loss of liberty—to depict the various spheres of life where automated decision-making can cause injury. The report recognizes that discriminatory algorithmic decisions, and the unfairness that results, can lead to distinct collective and societal harms. For example, the use of proxies in algorithms, such as “gayborhood” ZIP codes or resume clues regarding LGBTQ community activism, can lead to employment discrimination and result in differential access to job opportunities. 
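Proxy-driven screening of this kind can sometimes be surfaced by auditing outcomes across groups—which, as noted above, requires access to protected-class data. As a minimal, hypothetical sketch (the data and function names are illustrative, not drawn from the report), a check of the EEOC “four-fifths” rule used in Title VII disparate-impact analysis might look like:

```python
# Minimal sketch: auditing screening outcomes with the EEOC "four-fifths" rule.
# All data here is hypothetical; in practice, the protected attribute must be
# available (collected or inferred for auditing) before a check like this is possible.

def selection_rate(decisions):
    """Fraction of applicants who received a positive outcome (1 = advanced)."""
    return sum(decisions) / len(decisions)

def four_fifths_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Under EEOC guidance, values below 0.8 are conventionally treated
    as evidence of adverse (disparate) impact."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcomes for two groups of applicants
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # selection rate: 0.8
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]   # selection rate: 0.4

ratio = four_fifths_ratio(group_a, group_b)
print(f"Impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50, below the 0.8 threshold
```

A screening model that never sees a protected attribute directly can still fail a check like this when a proxy feature (such as ZIP code) correlates with group membership—which is why removing sensitive fields alone does not guarantee fair outcomes.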

As organizations commit to LGBTQ protections, adherence to data protection and fairness principles is one way to combat systemic discrimination. These principles include ensuring fairness in automated decisions, enhancing individual control of personal information, and protecting people from inaccurate and biased data. 

Conclusion //

Today’s decision regarding workplace protections could not be more welcome, particularly now as data from the Human Rights Campaign shows that 17% of LGBTQ people and 22% of LGBTQ people of color have reported becoming unemployed as a result of COVID-19. However, the fight for inclusivity and equality does not stop with law and legislation. Further work is necessary to ensure that data-driven programs uncover and redress discrimination, rather than perpetuate it.