Populism and State Power
A hard look at how the notion of populism is used in recent political discourse, starting with the curious lack of self-professed populists.
TechCrunch: “Microsoft has reaffirmed its ban on U.S. police departments from using generative AI for facial recognition through Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI tech. Language added Wednesday to the terms of service for Azure OpenAI Service more obviously prohibits integrations with Azure OpenAI Service from being used “by or for” police departments for facial recognition in the U.S., including integrations with OpenAI’s current — and perhaps future — image-analyzing models.
A separate new bullet point covers “any law enforcement globally,” and explicitly bars the use of “real-time facial recognition technology” on mobile cameras, like body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.
The changes in policy come a week after Axon, a maker of tech and weapons products for military and law enforcement, announced a new product that leverages OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out the potential pitfalls, like hallucinations (even the best generative AI models today invent facts) and racial biases introduced from the training data (which is especially concerning given that people of color are far more likely to be stopped by police than their white peers)…”
Stack Overflow bans users en masse for rebelling against OpenAI partnership — users banned for deleting answers to prevent them being used to train ChatGPT Tom’s Hardware. Open theft. As usual. Just like Reddit. Training sets = looting.
Writers and publishers in Singapore reject a government plan to train AI on their work Rest of World
Slop is the new name for unwanted AI-generated content Simon Willison’s Weblog
* * *

Big brains divided over training AI with more AI: Is model collapse inevitable? The Register. The deck: “Gosh, here’s us thinking recursion was a solved problem.”
AI, Reducing Internalities and Externalities Cass Sunstein, SSRN. “AI-powered Choice Engines might also take account of externalities, and they might nudge or require consumers to do so as well. Different consumers care about different things, of course, which is a reason to insist on a high degree of freedom of choice, even in the presence of internalities and (to some extent) externalities. But it is important to emphasize that AI might be enlisted by insufficiently informed or self-interested actors, who might exploit inadequate information or behavioral biases, and thus reduce consumer welfare.” It’s a phishing equilibrium, so not “might” but “will,” indeed “already are.”
OpenAI offers a peek behind the curtain of its AI’s secret instructions TechCrunch