Friday, April 03, 2026

US government ‘chipping away’ at press freedom DW

Inheritance disputes surge to record levels as heirs fight for spoils Financial Times


Was this what became of socialism? Internationalen via machine translation


Protecting the press: How Section 702 of FISA must be reformed Freedom of the Press Foundation



China Is Rapidly Overtaking the United States as the World’s Scientific Superpower Futurism


Shaun Rein: “The Longer Iran War Lasts The More China Wins” The Singju Post


China Is Planning Decades Ahead on Clean Energy. The U.S. Has Other Priorities. Council on Foreign Relations


Orban’s remarks that ‘China is simply unbeatable’ in interview draw attention on the Chinese internet Global Times


Jaguar Land Rover halts production at its biggest car factory for a fortnight due to parts supply issue as wider UK vehicle output hits the rocks Daily Mail


UK ‘weeks away’ from medicine shortages if Iran war continues, experts say The Guardian


Facial Recognition Is Spreading Everywhere

IEEE Spectrum – “Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into new, and menacing, territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life. Yet the story those photos can tell inevitably has errors. FRT makers, like those of any diagnostic technology, must balance two types of errors: false positives and false negatives.

There are three possible outcomes. In best-case scenarios—such as comparing someone’s passport photo to a photo taken by a border agent—false-negative rates are around two in 1,000 and false positives are less than one in 1 million. In the rare event you’re one of those false negatives, a border agent might ask you to show your passport and take a second look at your face. But as people ask more of the technology, more ambitious applications could lead to more catastrophic errors. 

Let’s say that police are searching for a suspect, and they’re comparing an image taken with a security camera with a previous “mug shot” of the suspect. Training-data composition, differences in how sensors detect faces, and intrinsic differences between groups, such as age, all affect an algorithm’s performance. 

The United Kingdom estimated that its FRT exposed some groups, such as women and darker-skinned people, to risks of misidentification as high as two orders of magnitude greater than it did to others. What happens with photos of people who aren’t cooperating, or vendors that train algorithms on biased datasets, or field agents who demand a swift match from a huge dataset? Here, things get murky…”
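
The arithmetic behind that last question is worth spelling out. Below is a minimal Python sketch, not from the article, that applies the best-case error rates quoted above to one-to-many searches. The gallery sizes and the hundredfold disparity factor are illustrative assumptions drawn from the excerpt’s figures, not real deployment numbers.

# A minimal sketch of the base-rate problem in one-to-many face search.
# The error rates come from the IEEE Spectrum excerpt above; the gallery
# sizes are hypothetical, chosen only to illustrate the scaling.

FALSE_NEGATIVE_RATE = 2 / 1_000      # best case quoted above: ~2 in 1,000
FALSE_POSITIVE_RATE = 1 / 1_000_000  # best case quoted above: <1 in 1 million

def expected_false_matches(gallery_size: int, fpr: float = FALSE_POSITIVE_RATE) -> float:
    """Expected number of wrong 'hits' when one probe photo is compared
    against every face in a gallery of the given size."""
    return gallery_size * fpr

# One-to-one verification (a border check) is a single comparison, so a
# false positive is roughly a one-in-a-million event:
print(expected_false_matches(1))  # 1e-06

# One-to-many identification (searching a probe against a watchlist or a
# driver's-license database) multiplies that rate by the gallery size:
for gallery in (100_000, 1_000_000, 50_000_000):
    print(f"gallery of {gallery:>10,}: ~{expected_false_matches(gallery):.2f} expected false matches")

# The UK estimate quoted above puts misidentification risk for some groups
# up to two orders of magnitude higher. Applying that factor:
print(expected_false_matches(1_000_000, FALSE_POSITIVE_RATE * 100))  # ~100.0

Even at the excerpt’s best-case false-positive rate, a single search against a 50-million-face gallery yields an expected 50 false matches, and a hundredfold group disparity turns a one-in-a-million comparison into a near-certain misidentification at scale, which is exactly where, as the article puts it, things get murky.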