The modern authoritarian practice of “flood[ing] the zone with shit” clearly illustrates the dangers posed by bullshitters, i.e., those who produce plausible-sounding speech with no regard for accuracy. Consequently, the broad-based concern over the rise of algorithmic bullshit is both understandable and warranted. LLMs like those powering ChatGPT, which complete text by predicting subsequent words based on patterns in their training data, are, if not the embodiment of such bullshitters, tools ripe for use by such actors. They are by design fixated on producing plausible-sounding text, and since they lack understanding of their output, they cannot help but be unconcerned with accuracy. Couple this with the fact that their training texts encode the biases of their authors, and one can end up with what some have called mansplaining as a service. Because of these risks, the Legal Innovation and Technology Lab at Suffolk has created FindMyCite.org, a sandbox where legal educators can explore the responsible use of Large Language Models (LLMs) for both scholarship and instruction.
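To make the "predicting subsequent words" mechanism concrete, here is a minimal sketch using a toy bigram model. The tiny corpus and the `predict_next` helper are illustrative assumptions, not anything from FindMyCite or GPT-3; real LLMs use vastly larger models, but the indifference to accuracy is the same in kind.

```python
from collections import Counter, defaultdict

# Toy "training data": the model can only echo patterns found here.
corpus = "the court held that the statute was void the court held otherwise".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# The prediction is plausible given the corpus, but the model has no idea
# whether "held" is accurate -- it is pattern-matching, not fact-checking.
print(predict_next("court"))  # -> "held"
```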

The site presents tools that ask an LLM (GPT-3) to reorder the contents of a user-defined Zotero library, providing context that mitigates the problem of algorithmic BS. It also couples its output to real citations, allowing users to check its work; a sketch of this general pattern appears below. This session will suggest exercises aimed at orienting students and helping them understand the use of such tools. After all, practical experience working with a tool leads to understanding, and understanding is power. That power is something our students can leverage for the benefit of both their future clients and their practice. We don't expect students to become coders, but we'd like them to learn enough to use these tools effectively and to call BS when needed.
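The sketch below shows the general pattern described above, not FindMyCite's actual code: it pulls entries from a Zotero library with the pyzotero client, asks GPT-3 (via the legacy openai<1.0 completion endpoint) to reorder them by relevance to a question, and constrains the answer to numbered sources so every line can be checked against a real citation. The credentials, prompt wording, and `rank_library` helper are all assumptions for illustration.

```python
import openai
from pyzotero import zotero

# Placeholders -- supply your own credentials.
ZOTERO_USER_ID = "1234567"   # hypothetical user ID
ZOTERO_API_KEY = "..."       # hypothetical key
openai.api_key = "..."       # hypothetical key

def rank_library(question: str, limit: int = 10) -> str:
    """Ask GPT-3 to reorder a Zotero library's items by relevance to `question`.

    Because the prompt is built from real library entries, each line of the
    model's answer traces back to an actual citation users can check.
    """
    zot = zotero.Zotero(ZOTERO_USER_ID, "user", ZOTERO_API_KEY)
    items = zot.top(limit=limit)  # top-level items in the library

    # Number each source so the model can only answer in terms of them.
    sources = "\n".join(
        f"[{i}] {item['data'].get('title', 'untitled')}"
        for i, item in enumerate(items, start=1)
    )
    prompt = (
        f"Here are numbered sources from a research library:\n{sources}\n\n"
        f"Reorder the source numbers from most to least relevant to the "
        f"question below, one per line, citing only the numbers above.\n"
        f"Question: {question}\n"
    )

    # Legacy GPT-3 completion call (openai<1.0 style).
    response = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=200, temperature=0
    )
    return response["choices"][0]["text"]

print(rank_library("What are the limits of qualified immunity?"))
```

The design choice worth noting is the numbering: because the model is told to cite only entries drawn from the user's own library, its output stays tethered to sources the user can verify, which is exactly the "check its work" affordance described above.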
