META TOPICPARENT | name="SecondEssay" |
Surveillance v. Surveillance: Civilians' Tit-for-Tat and Its Problems

Introduction
Police brutality has long been one of the most controversial and polarizing issues in the United States, largely because of the widespread circulation of news and media footage depicting police officers shooting and killing unarmed people of color. Civilians increasingly use their smartphones to record interactions with law enforcement, surveilling officers in order to combat police misconduct against the most vulnerable. At the same time, the government is often criticized for overexercising its own ability to surveil citizens, which has raised concerns about privacy and the misuse of power. The practice of civilians surveilling law enforcement is therefore perceived as both a safety measure and a check on government overreach and power.
Surveillance itself, however, has unintended consequences and threatens privacy. While retaliatory civilian surveillance is meant to enhance safety, it also contributes to a broader culture of mistrust and to the overuse of surveillance. The best way to address this challenge is to limit both excessive police surveillance and retaliatory civilian surveillance. Proper boundaries must be set so that surveillance is not treated as the solution; without them, these practices risk perpetuating a vicious cycle of invasive oversight that compromises privacy and the very safety they are intended to promote.

The Rise of Civilian Surveillance
Smartphones give ordinary citizens the power to act as vigilantes capable of documenting instances of police brutality and misconduct in real time. Over the last decade this practice has become increasingly common, as viral videos, such as those of the killings of Eric Garner in 2014 and George Floyd in 2020, have provoked public outrage and demands that law enforcement be held accountable. A New York Times article entitled “Black Lives Upended by Policing: The Raw Videos Sparking Outrage,” for example, presents readers with 34 cellphone and dashboard camera videos that display police brutality. One of the videos features cellphone footage of yet another unarmed black man, Alton Sterling, being tackled, held to the ground, and eventually shot by two white officers. Excessive force is a growing problem: according to a report by The Guardian, police in the United States are said to use force against 300,000 people each year. Ultimately, as more and more individuals feel defenseless in their encounters with police, civilian surveillance offers a sense of empowerment. With retaliatory surveillance and the ability to capture and share evidence of brutality, individuals can challenge official narratives, garner public support, and fight for justice.
Social media platforms such as Instagram and X amplify the impact of these videos, transforming isolated incidents into broader conversations and national controversies. On X, for example, hashtags like #BlackLivesMatter and #NoJusticeNoPeace have become a battle cry for movements seeking systemic change. According to a PBS NewsHour report, the immediacy and convenience of social media allow users to share raw, emotionally charged content, which helps foster solidarity and drive action among viewers.

The Comfort and Consequences of Surveillance
Civilian and government surveillance can enhance safety and promote accountability, but both pose risks of overreach and can lead to unintended consequences. Surveillance’s primary purpose, deterring misconduct and providing safety, backfires when surveillance is perceived as an intrusive weapon that threatens privacy rather than a source of accountability. Filming police interactions gives people a sense of security and empowerment. Many believe that dashcam and bodycam footage is often manipulated or edited to produce a narrative that aligns with law enforcement, so civilian recording of potential misconduct can deter officers from brutalizing others. This visibility forms something of a protective shield, and civilian surveillance can accordingly be seen as a form of resistance and retaliation. Government surveillance, by contrast, is typically motivated by national security concerns, crime prevention, and an effort to maintain “law and order.”
Surveillance comes at a cost, however: it contributes to a culture of invasive monitoring that infringes individual rights, distorts decision-making, and threatens privacy (Tonghan Zhang et al., A Comprehensive Survey on Graph Neural Networks, arXiv:2212.). For civilians, the overuse of surveillance causes individuals to censor their speech and lose autonomy, principles that are foundational to the United States’ “democracy” (Christopher Slobogin & Sarah Brayne, Surveillance Technologies and Constitutional Law, 6 Ann. Rev. Criminol. 219 (2023)). Rather than giving civilians a sense of security, increased surveillance fuels public mistrust of law enforcement. For law enforcement, by contrast, constant surveillance can make officers hesitant to act during critical moments, compromising their ability to make sound judgment calls and perform their duties (Randy K. Lippert & Bryce Clayton Newell, Debate Introduction: The Privacy and Surveillance Implications of Police Body Cameras, Surveillance & Society, Vol. 14, No. 1 (2016)). Retaliatory surveillance makes officers fearful of public backlash for any action they take, even actions taken in good faith, and it therefore risks exacerbating police inaction, since some officers may prioritize their own well-being over community engagement and public safety. These dynamics have bred a tense relationship between officers and the civilians they’re expected to “protect.” As a result, community trust has been lost and the potential for a collaborative relationship has nearly disappeared.

Solutions & Conclusion
There needs to be a balance between accountability and privacy. While excessive surveillance may serve as a form of deterrence, its misuse feeds the public’s mistrust of law enforcement. To address retaliatory surveillance, policies should govern police officers’ use of body cameras so that their use is transparent, respects the privacy rights of officers, and prevents footage from being manipulated or altered when wrongdoing arises. For civilians, governments should explore alternative ways to maintain national security and prevent crime without infringing on citizens’ inherent rights. To improve the relationship between law enforcement and civilians, states should invest in community outreach programs and other opportunities that build understanding between law enforcement and the public around a shared goal of safety. Ultimately, surveillance is a powerful tool that often lends itself to invasions of privacy and manipulation, so it is paramount that we seek balanced solutions that limit its use, protect privacy, and prioritize trust.
Sources:
Christopher Slobogin & Sarah Brayne, Surveillance Technologies and Constitutional Law, 6 Ann. Rev. Criminol. 219 (2023)
Dan Barry, Video Evidence, and a Question of Race, The New York Times (Aug. 19, 2017)
Randy K. Lippert & Bryce Clayton Newell, Debate Introduction: The Privacy and Surveillance Implications of Police Body Cameras, Surveillance & Society, Vol. 14, No. 1 (2016)
Sam Levin, Police Use of Force Data Reveals Violent Trends, Analysis Finds, The Guardian (Aug. 28, 2024)
Tonghan Zhang et al., A Comprehensive Survey on Graph Neural Networks, arXiv:2212. |