The guys at Axios are divulging the secrets of "smart brevity." One problem: smart brevity isn't all that smart - Dumber & Dumber
Updike saved almost everything. His papers, stored at Harvard, include his golf scorecards, legal and business records, fan mail, videotapes, photographs, drawings, and rejection letters. Did he save and preserve the past so that we could remember him, and so that he could better remember himself, and try again?
Cyphert, Amy, A Human Being Wrote This Law Review Article: GPT-3 and the Practice of Law (November 1, 2021). UC Davis Law Review, Vol. 55, Issue 1; WVU College of Law Research Paper No. 2022-02. Available at SSRN: https://ssrn.com/abstract=3973961
“Artificial intelligence tools can now ‘write’ in such a sophisticated manner that they fool people into believing that a human wrote the text. None are better at writing than GPT-3, released in 2020 for beta testing and coming to commercial markets in 2021. GPT-3 was trained on a massive dataset that included scrapes of language from sources ranging from the New York Times to Reddit boards. And so, it comes as no surprise that researchers have already documented instances of bias in which GPT-3 spews toxic language.
But because GPT-3 is so good at ‘writing,’ and can be easily trained to write in a specific voice — from classic Shakespeare to Taylor Swift — it is poised for wide adoption in the field of law. This Article explores the ethical considerations that will follow from GPT-3’s introduction into lawyers’ practices. GPT-3 is new, but the use of AI in the field of law is not. AI has already thoroughly suffused the practice of law. GPT-3 is likely to take hold as well, generating some early excitement that it and other AI tools could help close the access-to-justice gap. That excitement should nevertheless be tempered with a realistic assessment of GPT-3’s tendency to produce biased outputs.
As amended, the Model Rules of Professional Conduct acknowledge the impact of technology on the profession and provide some guard rails for its use by lawyers. This Article is the first to apply the current guidance to GPT-3, concluding that it is inadequate. I examine three specific Model Rules — Rule 1.1 (Competence), Rule 5.3 (Supervision of Nonlawyer Assistance), and Rule 8.4(g) (Bias) — and propose amendments that focus lawyers on their duties and require them to regularly educate themselves about pros and cons of using AI to ensure the ethical use of this emerging technology.”
Google Deepmind Researcher Co-Authors Paper Saying AI Will Eliminate Humanity
Vice: “After years of development, AI is now driving cars on public roads, making life-changing assessments for people in correctional settings, and generating award-winning art. A longstanding question in the field is whether a superintelligent AI could break bad and take out humanity, and researchers from the University of Oxford and affiliated with Google DeepMind have now concluded that it’s ‘likely’ in new research. The paper, published last month in the peer-reviewed AI Magazine, is a fascinating one that tries to think through how artificial intelligence could pose an existential risk to humanity by looking at how reward systems might be artificially constructed…
The paper envisions life on Earth turning into a zero-sum game between humanity, with its needs to grow food and keep the lights on, and the super-advanced machine, which would try and harness all available resources to secure its reward and protect against our escalating attempts to stop it. “Losing this game would be fatal,” the paper says. These possibilities, however theoretical, mean we should be progressing slowly—if at all—toward the goal of more powerful AI…”
Grigoleit, Hans Christoph, Blackboxing Law by Algorithm (June 16, 2022). Speech delivered at Oxford Business Law Blog Annual Conference on June 16, 2022
“This post is part of a special series including contributions to the OBLB Annual Conference 2022 on ‘Personalized Law—Law by Algorithm’, held in Oxford on 16 June 2022. This post comes from Hans Christoph Grigoleit, who participated on the panel on ‘Law by Algorithm’.
“Adapting a line by the ingenious pop-lyricist Paul Simon, there are probably 50 ways to leave the traditional paths of legal problem solving by making use of algorithms. However, it seems that the law lags behind other fields of society in realizing synergies resulting from the use of algorithms. In their book ‘Law by Algorithm’, Horst Eidenmüller and Gerhard Wagner accentuate this hesitance in a paradigmatic way: while the chapter on ‘Arbitration’ is optimistic regarding the use of algorithms in law (‘… nothing that fundamentally requires human control …’), the authors’ view turns much more pessimistic when trying to specify the perspective of the ‘digital judge’. Following up on this ambivalence, I would like to share some observations on where and why it is not so simple to bring together algorithms and legal problem solving.”