
Tuesday, May 28, 2024

It is dangerously easy to hack the world’s phones

 George Miller talks movies, silent movies, Mad Max, and Furiosa (New Yorker)

Economist – A system at the heart of global telecommunications is woefully insecure [no paywall] – “For years security experts have warned that a technology at the heart of global communications is dangerously insecure. Now there is proof that it has been used to snoop on people in America. Kevin Briggs, an official at America’s Cybersecurity and Infrastructure Security Agency, told the Federal Communications Commission (FCC), a regulator, earlier this year that there had been “numerous incidents of successful, unauthorised attempts” not only to steal location data and monitor voice and text messages in America, but also to deliver spyware (software that can take over a phone) and influence American voters from abroad via text messages. The comments were first reported recently by 404 Media, a website that covers technology…”

During the 1990s, North Korean leader Kim Jong-il and his son and future leader Kim Jong-un used fake Brazilian passports to travel to Disneyland. The passports were obtained at an embassy in Prague, which allowed the family to travel without suspicion. Brazilian passports were ideal because, during the 19th and 20th centuries, thousands of Asian immigrants had settled in South America, so a Brazilian of Asian descent drew no particular attention.



See also Amnesty International report – A Web of Surveillance: Unravelling a murky network of spyware exports to Indonesia – “Highly invasive spyware and other rights-threatening surveillance technologies have been used to target human rights defenders, journalists and other members of civil society worldwide, as documented by an ever-growing body of research. Unfortunately, technical obstacles inherent in forensic investigations and a culture of secrecy surrounding the sale and transfer of surveillance tools keep civil society and human rights defenders in the dark about the full extent of their deployment or use.

This research provides a case study on how one country, Indonesia, is relying on a murky ecosystem of surveillance suppliers, brokers and resellers that obscures the sale and transfer of surveillance technology. 

The investigation also showcases the continued failure of multiple countries to regulate and provide transparency on the exports of dual-use technologies, such as spyware, and the non-dual-use hardware that hosts the spyware or surveillance technology, which pose serious human rights risks…”


AI report: OECD (2024), “Defining AI incidents and related terms”, OECD Artificial Intelligence Papers, No. 16, OECD Publishing, Paris, https://doi.org/10.1787/d1a8d965-en. It is a must-read for everyone working in AI. Important information:

  • An AI incident is defined as: “an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms: ➵ injury or harm to the health of a person or groups of people; ➵ disruption of the management and operation of critical infrastructure; ➵ violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights; ➵ harm to property, communities or the environment.”
  • An AI hazard is defined as: “an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident, i.e., any of the following harms: ➵ injury or harm to the health of a person or groups of people; ➵ disruption of the management and operation of critical infrastructure; ➵ violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights; ➵ harm to property, communities or the environment.”
  • Types of harm listed by the report: ➵ Physical harm ➵ Environmental harm ➵ Economic or financial harm, including harm to property ➵ Reputational harm ➵ Harm to public interest ➵ Harm to human rights and to fundamental rights ➵ Psychological harm. The report states: “A further step would be to establish clear taxonomies to categorise incidents for each dimension of harm. Assessing the “seriousness” of an AI incident, harm, damage, or disruption (e.g., to determine whether an event is classified as an incident or a serious incident) is context-dependent and is also left for further discussion.” These definitions translate naturally into a simple data model, sketched below.
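To make the incident/hazard distinction concrete, here is a minimal sketch of the OECD definitions as a data model. It is illustrative only: the HarmType and AIEvent names, fields and logic are this sketch's own assumptions (an event counts as an incident once a listed harm has occurred, and as a hazard while such harm is merely plausible); they are not an OECD schema.

    # Illustrative only: a minimal data model for the OECD definitions quoted above.
    # The names HarmType and AIEvent, and the incident/hazard logic, are this
    # sketch's own assumptions, not an OECD schema.
    from dataclasses import dataclass, field
    from enum import Enum, auto

    class HarmType(Enum):
        """The dimensions of harm listed in the OECD paper."""
        PHYSICAL = auto()
        ENVIRONMENTAL = auto()
        ECONOMIC_OR_FINANCIAL = auto()      # includes harm to property
        REPUTATIONAL = auto()
        PUBLIC_INTEREST = auto()
        HUMAN_AND_FUNDAMENTAL_RIGHTS = auto()
        PSYCHOLOGICAL = auto()

    @dataclass
    class AIEvent:
        """An event arising from the development, use or malfunction of an AI system."""
        description: str
        harms_occurred: list[HarmType] = field(default_factory=list)   # harms that did happen
        harms_plausible: list[HarmType] = field(default_factory=list)  # harms that plausibly could

        @property
        def is_incident(self) -> bool:
            # Incident: the event directly or indirectly led to at least one listed harm.
            return bool(self.harms_occurred)

        @property
        def is_hazard(self) -> bool:
            # Hazard: no harm yet, but the event could plausibly lead to an incident.
            return not self.harms_occurred and bool(self.harms_plausible)

Under this reading, the only difference between an incident and a hazard is whether a listed harm has actually materialised, which is exactly the distinction the two OECD definitions draw.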

ROI-nj.com – Data brokers are undermining country’s safety, privacy and security: “In Jersey and beyond, our law enforcement, judges and elected officials are putting both their privacy and lives on the line to serve. We must take steps in Congress and beyond to protect the well-being of those who choose to work for the people.

New Jersey saw the acute need for privacy for our public officials in 2020, after witnessing the senseless killing of 20-year-old Daniel Anderl. Daniel’s killer targeted his mother, U.S. District Judge Esther Salas, and located her family because their address was publicly available. Tragically, she became a target because of her public service. In response, the state of New Jersey took action and passed “Daniel’s Law,” which created protections to ensure the sensitive data of public servants and their families would not be publicly accessible. The law also targeted third-party data brokers who make billions of dollars each year by selling people’s personal information, including public officials’.

Now, four years later, Daniel’s Law is recklessly being undermined by data brokers yet again — this time, endangering our law enforcement. Recently, more than 18,000 New Jersey law enforcement personnel filed a class action lawsuit against LexisNexis Risk Data Management, one arm of a sprawling empire. These officers claim that LexisNexis retaliated against them after they exercised their right under Daniel’s Law to have personal identifying information removed. LexisNexis allegedly froze their credit, falsely claiming the officers were identity theft victims, and seriously hurt their credit histories. This case illustrates a fundamental problem. Data brokers’ sprawling influence over the lives of American consumers undermines our safety, privacy and security.

These brokers influence Americans’ credit history, which in turn impacts their ability to access credit, insurance services, mortgages and health care. Individuals who want to shield their data from brokers, including the law enforcement personnel from New Jersey, have a lot to lose if something goes wrong. And their only recourse is the courts, which are both expensive and slow. We need stronger industry guardrails to protect consumers and ensure due process…”

See also Data Brokers and the Sale of Data on U.S. Military Personnel – “The data brokerage ecosystem is a multi-billion-dollar industry comprised of companies gathering, inferring, aggregating, and then selling, licensing, and sharing data on Americans as well as providing technological services based on that data. 

After previously discovering that data brokers were advertising data about current and former U.S. military personnel, this study sought to understand (a) what kinds of data these brokers were gathering and selling about military servicemembers and (b) the risk that a foreign actor, such as a foreign adversary government, could acquire the data to undermine U.S. national security.

This study involved scraping hundreds of data broker websites to look for terms like “military” and “veteran,” contacting U.S. data brokers from a U.S. domain to inquire about and purchase data on the U.S. military, and contacting U.S. data brokers from a .asia domain to inquire about and purchase the same. 
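The first step of that methodology, scanning broker sites for keywords, can be illustrated with a short sketch. Everything named below (the example URL, the keyword tuple, the keyword_hits helper) is an assumption made for illustration, not the study's actual site list or tooling.

    # Minimal sketch of the keyword-scan step described above. The broker URL,
    # keyword list and helper name are placeholders for illustration, not the
    # study's actual site list or tooling.
    import urllib.request

    KEYWORDS = ("military", "veteran")            # terms the study searched for
    SITES = ["https://example-data-broker.com"]   # hypothetical broker sites

    def keyword_hits(url: str, keywords=KEYWORDS) -> list[str]:
        """Fetch a page and return the keywords that appear in its HTML."""
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace").lower()
        return [kw for kw in keywords if kw in html]

    if __name__ == "__main__":
        for site in SITES:
            try:
                hits = keyword_hits(site)
            except OSError as exc:                # network errors, blocked requests, etc.
                print(f"{site}: fetch failed ({exc})")
                continue
            if hits:
                print(f"{site}: mentions {', '.join(hits)}")
            else:
                print(f"{site}: no keyword matches")

A plain substring match like this only flags pages that mention the terms; the study's further steps (contacting brokers and purchasing data from U.S. and .asia domains) were manual inquiries rather than automated scraping.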

It concludes with a discussion of the risks to U.S. military servicemembers and U.S. national security, paired with policy recommendations for the federal government to address the risks at hand.” 
