Critics of the criminal enforcement system have condemned the expansion and privatization of electronic monitoring, criminal diversion, parole, and probation. But the astonishing perversion of contract involved in these new practices has gone unnoticed. Though incarceration-alternative (IA) contracting is sometimes framed as humane, historical and current context illuminates its coercive nature. IA contracting must be examined under classical contract theory and in light of the history of economic exploitation carried out by harnessing criminal enforcement power to contract, including in the racial peonage system under Jim Crow. This Article documents the systematic underregulation of IA contracting through the first empirical study of the legal regimes governing IA contracts. To the extent that the theoretical limits of contract are not presently reflected in the common law of contract, regulatory reforms that better regulate seller and government practices might reduce the risk of exploitation.
Volume 92.4, June 2025
The “public” is everywhere and nowhere in contemporary public law. Everywhere, in that the term is constantly invoked to justify and explain existing arrangements. Nowhere, in that serious attempts to identify a relevant public and elicit its input are few and far between. Scholars and officials depict the American public as playing myriad roles in governance—checking, guiding, approving, repudiating—without offering an account of how public preferences are formed or how they exercise influence on the questions of interest. This Article seeks to identify and call attention to the foundational dilemmas underlying this disconnect, to clarify their normative contours and intellectual history, and to propose a pragmatic response—grounded in the recovery of the public’s role as an author and not just a monitor of public law.
Illinois’s Biometric Information Privacy Act (BIPA) is the country’s most powerful law governing biometric data—data generated from an individual’s biological characteristics, like fingerprints and voiceprints. Over the past decade, BIPA garnered a reputation as an exceptionally plaintiff-friendly statute. But from 2023–2024, the Illinois legislature, Illinois Supreme Court, and Ninth Circuit Court of Appeals all sided with BIPA defendants for the first time. Most significantly, in Zellmer v. Meta Platforms, Inc., the Ninth Circuit dismissed the plaintiff’s BIPA claim because the face scan collected by the defendant could not be used to identify him.
It is unclear whether these developments represent a trend or an exception to BIPA’s plaintiff-friendliness. Which path is charted will largely turn on how courts interpret Zellmer: While Zellmer established that a biometric identifier must be able to identify an individual, lower courts have construed its holding narrowly, requiring that the entity collecting the biometric data itself be capable of identifying the individual, rather than treating it as sufficient that any entity could do so. Reading BIPA this narrowly would significantly weaken the statute’s protections.
After detailing how employer and consumer cases catalyzed this recent defendant-friendly shift, this Comment proposes a two-step framework for determining whether a biometric identifier is able to identify an individual and thus falls within BIPA’s reach. Given BIPA’s broad influence, where courts ultimately land on this question will be crucial to the protection of biometric data nationwide.
Recently, many states have reacted to the growing data economy by passing data privacy statutes. These follow the “interaction model”: they allow consumers to exercise privacy rights against firms by directly interacting with them. But data brokers—firms that buy and sell data about consumers with whom they do not directly interact—are key players in the data economy. How is a consumer meant to exercise their rights against a broker when an “interaction gap” lies between them?
A handful of states have tried to soften the interaction gap by enacting data-broker-specific legislation under the “transparency model.” These laws, among other things, require brokers to publicly disclose themselves in state registries. The theory is that consumers would exercise their rights against brokers if they knew of the brokers’ existence. California recently went further with the Delete Act, providing consumers data-broker-specific privacy rights.
Assembling brokers’ reported privacy request metrics, this Comment performs an empirical analysis of the transparency model’s efficacy. The findings demonstrate that the transparency model does not effectively enable consumers to follow through on their expected privacy preferences or to meaningfully affect brokers. Regulators should therefore follow in the footsteps of the Delete Act and move beyond the transparency model.
The latest development in class action litigation is the “future stakes settlement.” Under this novel mechanism, unveiled in the proposal to settle a privacy class action against the startup Clearview AI, a defendant grants a privately traded equity stake to the class in exchange for a release of all claims.
Future stakes settlements, though similar to existing mechanisms in class action and bankruptcy law, offer distinct benefits and costs. Through a future stakes settlement, the class may recover against a cashless defendant and receive a larger payout than would be possible through a traditional cash damages fund. But this recovery is uncertain, as the value of a future stake can fluctuate. Furthermore, by transforming injured parties into shareholders, future stakes settlements pose serious moral quandaries.
Existing guidance for settlement agreements under Federal Rule of Civil Procedure 23(e) is insufficient to handle the high degree of risk associated with future stakes settlements. This Comment recommends additional standards that courts should apply when evaluating these settlements. Through these additions, courts can prevent defendant gamesmanship, ensure future stakes settlements are fair to the class, and fulfill the dual purposes of compensation and regulation in class actions.