Tuesday, April 25, 2023

You Are Not a Parrot. And a chatbot is not a human.

 “The liberty of the press is a blessing when we are inclined to write against others, and a calamity when we find ourselves overborne by the multitude of our assailants.”

 Samuel Johnson


25 December marks the 84th anniversary of the death of the prolific Czech writer, Karel Čapek. He coined the word ‘robot,’ at his brother’s suggestion, 100 years ago. It is based on the name for serf labour in Czech.

‘Robot’ went global thanks to its creator’s play, R.U.R – Rossum’s Universal Robots. It was staged in Broadway in October 1922, and in London six months later. The humanoids on stage look faintly ridiculous by today’s standards, but the dilemmas posed by Čapek’s characters are as current today as when he penned them.

The play provoked a lively debate in London in 1923. It fascinated and annoyed some of the most prominent British writers of the time, including G.B Shaw. Karel Čapek defended himself in a London newspaper: “A product of the human brain has at last escaped from the control of human hands. That is the comedy of science.”

Chapek - Are we ready for the coming AI revolution?


Meet the workers using ChatGPT to take on multiple full-time jobs


Stephen Wolfram (March 2023), “ChatGPT Gets Its ‘Wolfram Superpowers’: Early in January I wrote about the possibility of connecting ChatGPT to Wolfram|Alpha. And today—just two and a half months later—I’m excited to announce that it’s happened! Thanks to some heroic software engineering by our team and by OpenAI, ChatGPT can now call on Wolfram|Alpha—and Wolfram Language as well—to give it what we might think of as “computational superpowers”. It’s still very early days for all of this, but it’s already very impressive—and one can begin to see how amazingly powerful (and perhaps even revolutionary) what we can call “ChatGPT + Wolfram” can be. 



You Are Not a Parrot


And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this. New York Magazine – The Intelligencer: “And a linguist named Emily M. Bender is very worried what will happen when we forget this…A handful of companies control what PricewaterhouseCoopers 

called a “$15.7 trillion game changer of an industry.” Those companies employ or finance the work of a huge chunk of the academics who understand how to make LLMs. This leaves few people with the expertise and authority to say, “Wait, why are these companies blurring the distinction between what is human and what’s a language model? 

Is this what we want?” Bender is out there asking questions, megaphone in hand. She buys lunch at the UW student-union salad bar. When she turned down an Amazon recruiter, Bender told me, he said, “You’re not even going to ask how much?” She’s careful by nature. She’s also confident and strong willed.

 “We call on the field to recognize that applications that aim to believably mimic humans bring risk of extreme harms,” she co-wrote in 2021. “Work on synthetic human behavior is a bright line in ethical Al development, where downstream effects need to be understood and modeled in order to block foreseeable harm to society and different social groups.” In other words, chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society…”


Simon Willison: ChatGPT lies to people. “This is a serious bug that has so far resisted all attempts at a fix. We need to prioritize helping people understand this, not debating the most precise terminology to use to describe it. We accidentally invented computers that can lie to us I tweeted (and tooted) this: 

We accidentally invented computers that can lie to us and we can’t figure out how to make them stop – Simon Willison (@simonw) April 5, 2023 Mainly I was trying to be pithy and amusing, but this thought was inspired by reading Sam Bowman’s excellent review of the field, Eight Things to Know about Large Language Models. In particular this:

More capable models can better recognize the specific circumstances under which they are trained. Because of this, they are more likely to learn to act as expected in precisely those circumstances while behaving competently but unexpectedly in others. This can surface in the form of problems that Perez et al. (2022) call sycophancy, where a model answers subjective questions in a way that flatters their user’s stated beliefs, and sandbagging, where models are more likely to endorse common misconceptions when their user appears to be less educated…”


 Pearce, Russell G. and Lochan, Hema, Legal Education and Technology: The Potential to Democratize Legal Knowledge and Power (March 13, 2023). Latin American Law Review n.º 10 (2023): 63-79, Fordham Law Legal Studies Research Paper No. 4387616, Available at SSRN: https://ssrn.com/abstract=4387616

“The current technological transformation of legal education, including computer-based, interactive, and online modes of instruction, represents “one of the most dramatic technological revolutions in history, if not the most dramatic.” 

As the AI-based technological revolution accelerated dramatically in the 1990s, many commentators responded to the “commercial spread of the Internet” with utopian faith in its potential to equalize and democratize knowledge and power. 

This faith gave way to a second wave of comments criticizing the “damages… to historically subservient groups”, the threat of “disinformation” and polarization of democracy, the consolidation of power in Big Tech and authoritarian governments, and the threat to privacy in general. Today’s commentators are challenged to determine if and how to address these harms while realizing the potential benefits of AI-powered technology, especially given the impact and use of technology during the forced experimentation that took place during the COVID-19 pandemic. In assessing the potential impact of technology on legal education, this paper focuses primarily on legal education in the United States, although we will include some comparative ideas. 

Part I provides the context for our analysis – how legal education functions today to maintain hier- archy and inequality regardless of any specific reliance on technology. 

Part II examines the way law schools currently use online legal education, and its minimal impact on democratizing legal education. Part III will explore the potential of technology to improve legal education, including democratizing legal knowledge and power.”


Richard Murphy: Do We Trash the Planet or Have Things We Value, Like the NHS? That Is the Question

An examination of the economic law that says as the private sector improves productivity by destroying the planet and creating awful jobs, what’s left of the public sector must do the same.



RIPS Law Librarian / Laura Scott: “I chuckled when I recently happened upon the term “technostress” during some otherwise depressing research on librarians and burnout. Like “cyberspace” or “computer-assisted legal research,” technostress struck me as something I would need a flux capacitor and some legwarmers to experience fully. 

After all, technology is now an essential part of our daily lives at work and at home, and we librarians are often early and enthusiastic tech adopters. 

However, the coincidence of encountering this delightfully quaint-sounding word while (separately) being knee-deep in the flood of interesting articles and blog posts on ChatGPT mandated further investigation….”