Monday, January 15, 2024

Bestselling in 2023 in ... the US

At Publishers Weekly, Jim Milliot reports -- with numbers! -- on the twenty-five bestselling titles in the US in 2023, in Women Ruled the 2023 Bestseller List. Eight titles sold over 1,000,000 copies, with two Colleen Hoover titles topping the list. I haven't seen any of the top 25 titles.




Pete Recommends – Weekly highlights on cyber security issues, January 6, 2024

Pete Recommends – Weekly highlights on cyber security issues, January 6, 2024 – Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on the increasingly complex and wide-ranging ways technology is used to compromise and diminish our privacy and online security, often without our situational awareness.

Four highlights from this week: Delete your digital history from dozens of companies with this app; How hackers can ‘poison’ AI; Meet ‘Link History,’ Facebook’s New Way to Track the Websites You Visit; and Google Groups is ending support for Usenet to combat spam.


NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems

NIST: “Adversaries can deliberately confuse or even “poison” artificial intelligence (AI) systems to make them malfunction — and there’s no foolproof defense that their developers can employ. Computer scientists from the National Institute of Standards and Technology (NIST) and their collaborators identify these and other vulnerabilities of AI and machine learning (ML) in a new publication. 

Their work, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), is part of NIST’s broader effort to support the development of trustworthy AI, and it can help put NIST’s AI Risk Management Framework into practice. The publication, a collaboration among government, academia and industry, is intended to help AI developers and users get a handle on the types of attacks they might expect along with approaches to mitigate them — with the understanding that there is no silver bullet.

“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.””
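To make the notion of “poisoning” concrete, here is a minimal illustrative sketch, not taken from the NIST publication, using Python and scikit-learn: an attacker who can flip the labels on a fraction of the training data degrades the model trained on it. The dataset, classifier, and 30 percent flip rate are arbitrary choices for demonstration only.

```python
# Illustrative sketch only: label-flipping "data poisoning" against a toy
# classifier. Dataset, model, and flip rate are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of 30% of the training examples.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
flip_idx = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
poisoned_y[flip_idx] = 1 - poisoned_y[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Real poisoning attacks catalogued in the taxonomy are typically subtler, targeting specific inputs or behaviors rather than overall accuracy, which is part of why the report stresses that current defenses lack strong guarantees.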


Deloitte rolls out artificial intelligence chatbot to employees

FT.com – read free: “Deloitte is rolling out a generative artificial intelligence chatbot to 75,000 employees across Europe and the Middle East to create PowerPoint presentations and write emails and code in an attempt to boost productivity.

The Big Four accounting and consulting firm first launched the internal tool, called “PairD”, in the UK in October, in the latest sign of professional services firms rushing to adopt AI.

However, in a sign that the fledgling technology remains a work in progress, staff were cautioned that the new tool may produce inaccurate information about people, places and facts. Users have been told to perform their own due diligence and quality assurance to validate the “accuracy and completeness” of the chatbot’s output before using it for work, said a person familiar with the matter. 

Unlike rival firms, which have teamed up with major market players such as ChatGPT maker OpenAI and Harvey, Deloitte’s AI chatbot was developed internally by the firm’s AI institute. The roll out highlights how the professional services industry is increasingly adopting generative AI to automate tasks…”


Here’s what you’re really swallowing when you drink bottled water

Washington Post [read free]: “People are swallowing hundreds of thousands of microscopic pieces of plastic each time they drink a liter of bottled water, scientists have shown — a revelation that could have profound implications for human health.

A new paper released Monday in the Proceedings of the National Academy of Sciences found about 240,000 particles in the average liter of bottled water, most of which were “nanoplastics” — particles measuring less than one micrometer (less than one-seventieth the width of a human hair). For the past several years, scientists have been looking for “microplastics,” or pieces of plastic that range from one micrometer to half a centimeter in length, and found them almost everywhere. The tiny shards of plastic have been uncovered in the deepest depths of the ocean, in the frigid recesses of Antarctic sea ice and in the human placenta.

They spill out of laundry machines and hide in soils and wildlife. Microplastics are also in the food we eat and the water we drink: In 2018, scientists discovered that a single bottle of water contained, on average, 325 pieces of microplastics. But researchers at Columbia University have now identified the extent to which nanoplastics also pose a threat.”