Wednesday, February 22, 2023

RIPS Law Librarian – Lawyers Fail to Serve the Public and Themselves: ChatGPT to the Rescue to Placate the Jealous Mistress? by Sarah Gotschall: “As law librarians and legal research professors, we have witnessed firsthand how law students struggle with their time and labor-intensive legal research and writing assignments. And of course, the many of us who are lawyers ourselves know the struggle is real and is nothing new. It is an often repeated maxim that the “Law is a jealous mistress” which “requires long and constant courtship. It is not to be won by trifling favors, but by a lavish homage.” Though coined by Justice Joseph Story in 1829, the words remain true today. The law is a perennial and needy attention hog, always wanting more and more from its practitioners. Though legal technology has made many aspects of legal practice easier – word processing, online researching, electronic filing – the real work of lawyers – reading, thinking, analyzing, and writing – is just as time and energy consuming as ever.

Perhaps it is more arduous now due to the nature of our common law system. With an ever-increasing number of published opinions, it becomes more time-consuming to locate and analyze cases to determine the most relevant legal principles for a specific situation. This can prove challenging for legal practitioners, who attempt to stay abreast of the latest case law and developments in their areas of expertise…This seems like an industry ripe for disruption! Can AI language generators like ChatGPT come to the rescue, making legal services more affordable to the public and law practice more enjoyable/less stressful for lawyers? 

It could be said that, for hundreds of years now, lawyers have abused their monopoly on the provision of legal services by failing to serve the majority of the public, despite sometimes sacrificing their own health and sanity along the way. Is it time to step aside and let the robots try!?”


Stephen Wolfram: “It’s Just Adding One Word at a Time. That ChatGPT can automatically generate something that reads even superficially like human-written text is remarkable, and unexpected. But how does it do it? And why does it work? My purpose here is to give a rough outline of what’s going on inside ChatGPT—and then to explore why it is that it can do so well in producing what we might consider to be meaningful text. I should say at the outset that I’m going to focus on the big picture of what’s going on—and while I’ll mention some engineering details, I won’t get deeply into them. (And the essence of what I’ll say applies just as well to other current “large language models” [LLMs] as to ChatGPT.)

The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.” So let’s say we’ve got the text “The best thing about AI is its ability to”. Imagine scanning billions of pages of human-written text (say on the web and in digitized books) and finding all instances of this text—then seeing what word comes next what fraction of the time. ChatGPT effectively does something like this, except that (as I’ll explain) it doesn’t look at literal text; it looks for things that in a certain sense “match in meaning”. But the end result is that it produces a ranked list of words that might follow, together with “probabilities”…
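[Editor’s note: as a concrete illustration of that “ranked list of words… together with probabilities,” here is a minimal sketch, not from Wolfram’s essay, that uses the openly released GPT-2 model via the Hugging Face transformers library as a small stand-in for a ChatGPT-scale model and prints the top candidate next tokens for the example prompt.]

```python
# Minimal sketch: rank candidate next tokens for a prompt with GPT-2,
# a small open stand-in for a ChatGPT-scale model.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The best thing about AI is its ability to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, sequence_length, vocab_size)

# The scores at the last position cover every possible next token;
# softmax turns them into the "probabilities" the essay mentions.
probs = torch.softmax(logits[0, -1, :], dim=-1)

# Print the five highest-ranked continuations and their probabilities.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item()):>10}  {p.item():.3f}")
```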

And the remarkable thing is that when ChatGPT does something like write an essay what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word. (More precisely, as I’ll explain, it’s adding a “token”, which could be just a part of a word, which is why it can sometimes “make up new words”.)”
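[Editor’s note: the essay-writing loop Wolfram describes, asking “what should the next word be?”, appending the answer, and asking again, can be sketched in a few lines on top of the model and tokenizer loaded in the sketch above. This toy version always takes the single top-ranked token; real systems usually sample from the ranked list instead, which is one reason their output varies from run to run.]

```python
# Toy version of "adding one word (token) at a time": repeatedly score the
# text so far, append the most probable next token, and repeat.
# Reuses `model` and `tokenizer` from the sketch above.
def continue_text(prompt: str, steps: int = 20) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(steps):
        with torch.no_grad():
            logits = model(ids).logits
        next_id = torch.argmax(logits[0, -1, :])            # top-ranked next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)   # append it and go again
    return tokenizer.decode(ids[0])

print(continue_text("The best thing about AI is its ability to"))
```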