Monday, April 17, 2023

We need to tell people ChatGPT will lie to them, not debate linguistics

New York Magazine – The Intelligencer: “And a linguist named Emily M. Bender is very worried about what will happen when we forget this… A handful of companies control what PricewaterhouseCoopers called a ‘$15.7 trillion game changer of an industry.’ Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, ‘Wait, why are these companies blurring the distinction between what is human and what’s a language model? Is this what we want?’



Bender is out there asking questions, megaphone in hand. She buys lunch at the UW student-union salad bar. When she turned down an Amazon recruiter, Bender told me, he said, “You’re not even going to ask how much?” She’s careful by nature. She’s also confident and strong-willed. “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical AI development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.” In other words, chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society…”


Simon Willison, We accidentally invented computers that can lie to us: “ChatGPT lies to people. This is a serious bug that has so far resisted all attempts at a fix. We need to prioritize helping people understand this, not debating the most precise terminology to use to describe it. I tweeted (and tooted) this:

‘We accidentally invented computers that can lie to us and we can’t figure out how to make them stop.’ – Simon Willison (@simonw), April 5, 2023

Mainly I was trying to be pithy and amusing, but this thought was inspired by reading Sam Bowman’s excellent review of the field, Eight Things to Know about Large Language Models. In particular this:

More capable models can better recognize the specific circumstances under which they are trained. Because of this, they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated…”


Pearce, Russell G. and Lochan, Hema, “Legal Education and Technology: The Potential to Democratize Legal Knowledge and Power” (March 13, 2023). Latin American Law Review n.º 10 (2023): 63-79; Fordham Law Legal Studies Research Paper No. 4387616. Available at SSRN: https://ssrn.com/abstract=4387616

“The current technological transformation of legal education, including computer-based, interactive, and online modes of instruction, represents “one of the most dramatic technological revolutions in history, if not the most dramatic.” 

As the AI-based technological revolution accelerated dramatically in the 1990s, many commentators responded to the “commercial spread of the Internet” with utopian faith in its potential to equalize and democratize knowledge and power. 

This faith gave way to a second wave of comments criticizing the “damages… to historically subservient groups”, the threat of “disinformation” and polarization of democracy, the consolidation of power in Big Tech and authoritarian governments, and the threat to privacy in general. Today’s commentators are challenged to determine if and how to address these harms while realizing the potential benefits of AI-powered technology, especially given the impact and use of technology during the forced experimentation that took place during the COVID-19 pandemic. In assessing the potential impact of technology on legal education, this paper focuses primarily on legal education in the United States, although we will include some comparative ideas. 

Part I provides the context for our analysis – how legal education functions today to maintain hierarchy and inequality regardless of any specific reliance on technology.

Part II examines the way law schools currently use online legal education, and its minimal impact on democratizing legal education. Part III explores the potential of technology to improve legal education, including democratizing legal knowledge and power.”