“AI will replace all the jobs!” Is Just Tech Execs Doing Marketing
SparkToro: “…
- Folks who claim AI will destroy the labor market have claimed this radical change is “only a few years away,” “on the immediate horizon,” or “imminent,” for the last 5 years, yet we’re at historically low unemployment (yes, even accounting for underemployment and the way the BLS counts employment). The US labor market is within a single percentage point of its post-war unemployment low, measured in 1953 at 3.4%.
- If AI is killing jobs, it’s doing so at an imperceptibly slow rate; why could that be? Is it still too early? Did other technologies take a long time to show their impacts on labor markets?
- The broad consensus from rational industry observers, analysts, economists, and even AI-hyped technologists is that the end of cheap money (i.e. higher US interest rates) has driven most of the lower-than-pre-pandemic-demand for entry-level talent (just as it has in times of inflation-fighting interest hikes of the past).
- Machine learning, the technology underpinning AI, has been around for decades, with widespread adoption in tech companies between 2006-2013. The current generative-AI era, based on the transformer architecture, kicked off in 2017, with significant public examples and tech adoption from 2018-2020. Most of the current, press-driven AI hype cycle, however, skyrocketed in late 2021 with OpenAI’s release of GPT-3 (longtime readers here will recall that Britney Muller showed off techniques extremely similar to what’s now associated with modern LLMs back in July 2018).
- We’ve had 15-20 years of robust machine learning development and adoption, and another 5-10 years of broad LLM/generative AI adoption, improvement, and usage, yet labor market fluctuation has been far more dependent on other factors: the Covid pandemic itself, the post-pandemic surge and decline in tech hiring, inflation-fighting tactics by government banks, and (most recently) a renewal of early-20th-century-style tariffs and trade wars. When controlling for these events, little remaining labor market movement can be attributed to AI.
- The effects of previous technological advancements also took time, but the most salient examples (of farm equipment in the 1910-1920 era and the personal computer in the 1980s) showed millions of displaced workers within 5 years. AI’s slower changes bode poorly for the argument that it will have a larger impact than those events.
- Even if one assumes that AI was the only contributor to labor market changes between 2021-2025, the change has been incredibly slight, *even* in the software engineering market where it supposedly has the greatest impact. The percentage loss of software engineering jobs between 2019-2021 was greater (nearly 1.5X as large) than the loss from 2021-2025.
- I found it particularly revealing that one of the most commonly cited examples of AI killing labor needs in the software field is the death of StackOverflow, and yet, a robust analysis of that site’s usage from 2008-2020 shows that “What really happened is a parable of human community and experiments in self-governance gone bizarrely wrong.”…
- Leaders of AI companies, and some AI proponents, marketers, journalists, and even critics have found that when they make scary predictions about their field destroying the job market, press and media eat it up. This media coverage, because it’s scary and the AI hype cycle is in full swing, draws clicks. Those clicks lead to employees, managers, and leaders at other businesses being scared into learning and adopting AI in their businesses….”
- “By seeing knowledge as mere facts to be distilled without the struggle that leads to the ecstasy of enlightenment, my students are depriving themselves of one of the most profound delights of humanity” — Steven Gimbel on the thinker’s high
- Are recent episodes of “wild” scientific speculation the product of “badly digested versions of the work of two twentieth-century philosophers”? — Carlo Rovelli thinks so
- “The aim of the philosopher… is to tell men what they ought to think, rather than what they do think” — Henry Sidgwick is “interviewed” by Richard Marshall
- “I am not a national security threat; I’m a philosopher-citizen who desires to make sure that human creative capacities aren’t imprisoned” — George Yancy responds to the Trump administration’s banning of his books from the Naval Academy
- “The best you’re ever going to get is a catalog of possibilities, none reliable and all in some way vexed” — it is difficult to write wisely about death, but damn can Amy Olberding write about that difficulty
- There’s “what one says,” which is not the same as expressions of “what I think.” ChatGPT can handle the former, but only we are capable of the latter — Chad Engelland on the implications of this distinction
- “They should be leading from the positions of incredible safety they occupy” — a different take on the departure of fascism scholars from the US to Canada