Sunday, September 25, 2022

A Human Being Wrote This Law Review Article

 Kindness is in our power, even when fondness is not.
— Samuel Johnson, born in 1709

The guys at Axios are divulging the secrets of "smart brevity." One problem: Smart brevity isn't all that smart - Dumber & Dumber


Beckett, Nabokov, and Conrad abandoned their native languages to write in another. Jhumpa Lahiri is the latest to embark on that journey.

Census: “Between 2019 and 2021, the number of people primarily working from home tripled from 5.7% (roughly 9 million people) to 17.9% (27.6 million people), according to new 2021 American Community Survey (ACS) 1-year estimates released today by the U.S. Census Bureau.”

The New York Times, Benjamin Dreyer: “…Perfection, I’ve found, is an often elusive but not unattainable goal, and any number of the books I’ve worked on over the past three decades have made it to print without a single discernible error. And yet. In my early days, I would sulk in my office with the door closed if I found out that one of my books included a typo. A sentence referring to “geneology” once sent me into a blue funk for hours. As time passed I took these errata slightly less personally, but the sting lingered, if not for so long as it had at the start. For me, there is a real thrill in the great scavenger hunt of rooting out errors, whether it’s a simple “lead” where “led” is meant (that messed-up verb is, I’d say, the commonest typo to get into print) or something grander…”

Updike saved almost everything. His papers, stored at Harvard, include his golf scorecards, legal and business records, fan mail, videotapes, photographs, drawings, and rejection letters. Was saving and preserving the past done so that we could remember him, and so that he could better remember himself and try again?

Pete Recommends – Weekly highlights on cyber security issues, September 10, 2022 – Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness. Four highlights from this week: U.S. bank regulator warns of crisis risk from fintech proliferation; Supply chain risk is a top security priority as confidence in partners wanes; FBI Warns Individuals Employed in the Healthcare Industry of the Ongoing Scam Involving the Impersonation of Law Enforcement and Government; and IST to launch new guidance on security risks of telehealth and smart home integration.


Cyphert, Amy, A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law (November 1, 2021). UC Davis Law Review, Volume 55, Issue 1; WVU College of Law Research Paper No. 2022-02. Available at SSRN: https://ssrn.com/abstract=3973961

“Artificial intelligence tools can now “write” in such a sophisticated manner that they fool people into believing that a human wrote the text. None are better at writing than GPT-3, released in 2020 for beta testing and coming to commercial markets in 2021. GPT-3 was trained on a massive dataset that included scrapes of language from sources ranging from the NYTimes to Reddit boards. And so, it comes as no surprise that researchers have already documented incidences of bias where GPT-3 spews toxic language. 

But because GPT-3 is so good at “writing,” and can be easily trained to write in a specific voice — from classic Shakespeare to Taylor Swift — it is poised for wide adoption in the field of law. This Article explores the ethical considerations that will follow from GPT-3’s introduction into lawyers’ practices. GPT-3 is new, but the use of AI in the field of law is not. AI has already thoroughly suffused the practice of law. GPT-3 is likely to take hold as well, generating some early excitement that it and other AI tools could help close the access to justice gap. That excitement should nevertheless be tempered with a realistic assessment of GPT-3’s tendency to produce biased outputs.

 As amended, the Model Rules of Professional Conduct acknowledge the impact of technology on the profession and provide some guard rails for its use by lawyers. This Article is the first to apply the current guidance to GPT-3, concluding that it is inadequate. I examine three specific Model Rules — Rule 1.1 (Competence), Rule 5.3 (Supervision of Nonlawyer Assistance), and Rule 8.4(g) (Bias) — and propose amendments that focus lawyers on their duties and require them to regularly educate themselves about pros and cons of using AI to ensure the ethical use of this emerging technology.”


Google DeepMind Researcher Co-Authors Paper Saying AI Will Eliminate Humanity

Vice: “After years of development, AI is now driving cars on public roads, making life-changing assessments for people in correctional settings, and generating award-winning art. A longstanding question in the field is whether a superintelligent AI could break bad and take out humanity, and researchers from the University of Oxford and affiliated with Google DeepMind have now concluded that it’s “likely” in new research. The paper, published last month in the peer-reviewed AI Magazine, is a fascinating one that tries to think through how artificial intelligence could pose an existential risk to humanity by looking at how reward systems might be artificially constructed…

The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. “Losing this game would be fatal,” the paper says. These possibilities, however theoretical, mean we should be progressing slowly—if at all—toward the goal of more powerful AI…”


Grigoleit, Hans Christoph, Blackboxing Law by Algorithm (June 16, 2022). Speech delivered at the Oxford Business Law Blog Annual Conference, June 16, 2022.

“This post is part of a special series including contributions to the OBLB Annual Conference 2022 on ‘Personalized Law—Law by Algorithm’, held in Oxford on 16 June 2022. This post comes from Hans Christoph Grigoleit, who participated on the panel on ‘Law by Algorithm’.

 “Adapting a line by the ingenious pop-lyricist Paul Simon, there are probably 50 ways to leave the traditional paths of legal problem solving by making use of algorithms. However, it seems that the law lags behind other fields of society in realizing synergies resulting from the use of algorithms. In their book ‘Law by Algorithm’, Horst Eidenmüller and Gerhard Wagner accentuate this hesitance in a paradigmatic way: while the chapter on ‘Arbitration’ is optimistic regarding the use of algorithms in law (‘… nothing that fundamentally requires human control …’), the authors’ view turns much more pessimistic when trying to specify the perspective of the ‘digital judge’. Following up on this ambivalence, I would like to share some observations on where and why it is not so simple to bring together algorithms and legal problem solving.”