The Untold Story of the Boldest Supply-Chain Hack Ever Wired
This is not a drill: IBM to replace 7,800 jobs with AI Interesting Engineering
They should consider themselves lucky:
The AI revolution is leaving Arabic speakers behind Middle East Eye
Cynthia Rudin Builds AI That Humans Can Understand Quanta Magazine
The completely correct guide to organizing your phone screen
Washington Post: “‘Our phone screens, like our homes, are stages where our lives play out. As such, they deserve our care and respect,’ Amanda Dodson, a TikTok creator and organization coach, said. ‘My philosophy is that these silly little tasks that we have to do in our lives — like clean up your phone screen or do the dishes every single day until you die — feel kind of pointless in the arc of a life, but those are a life,’ she said. We asked Washington Post readers to send us screenshots of their home screens, and most hadn’t taken any obvious steps to organize. But the ones who did came up with weird and wonderful ways to make their phone screens reflect their priorities. Here are their best practices for a better screen.”
The Luring Test: AI and the engineering of consumer trust
“In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person’s emotions, and, oops, that’s what it did. While the scenario is pure speculative fiction, companies are always looking for new ways – such as the use of generative AI tools – to better persuade people and change their behavior.
When that conduct is commercial in nature, we’re in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers. In previous blog posts, we’ve focused on AI-related deception, both in terms of exaggerated and unsubstantiated claims for AI products and the use of generative AI for fraud. Design or use of a product can also violate the FTC Act if it is unfair – something that we’ve shown in several cases and discussed in terms of AI tools with biased or discriminatory results.
Under the FTC Act, a practice is unfair if it causes more harm than good. To be more specific, it’s unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.
As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people’s beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional.
A tendency to trust the output of these tools also comes in part from “automation bias,” whereby people may be unduly trusting of answers from machines, which may seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed, say, to use personal pronouns and emojis. People could easily be led to think that they’re conversing with something that understands them and is on their side…”