Wednesday, August 02, 2023

Shadow State: How consultants infiltrated government - ChatGPT


Reporters Angus Grigg and Jessica Longbottom dive into the opaque world of government consulting, where firms push ethical boundaries and cost taxpayers billions of dollars each year, with little transparency and almost no accountability.

While PwC has attracted headlines over its use of confidential government information to help its clients avoid tax, there's been less scrutiny of one of Canberra's biggest players: KPMG.

Through forensic examination and whistleblower accounts, Grigg and Longbottom reveal consulting giant KPMG has faced accusations of repeatedly "wasting" public money while contracted by the Department of Defence.

How researchers broke ChatGPT and what it could mean for future AI development 

ZDNet: “As many of us grow accustomed to using artificial intelligence tools daily, it’s worth remembering to keep our questioning hats on. Nothing is completely safe and free from security vulnerabilities. Still, companies behind many of the most popular generative AI tools are constantly updating their safety measures to prevent the generation and proliferation of inaccurate and harmful content.  

Researchers at Carnegie Mellon University and the Center for AI Safety teamed up to find vulnerabilities in AI chatbots like ChatGPT, Google Bard, and Claude — and they succeeded. In a research paper examining the vulnerability of large language models (LLMs) to automated adversarial attacks, the authors demonstrated that even if a model is said to be resistant to attacks, it can still be tricked into bypassing content filters and providing harmful information, misinformation, and hate speech. This makes these models vulnerable, potentially leading to the misuse of AI.”
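The core idea the researchers demonstrated is that an attack string can be found *automatically*, by search rather than by hand. A heavily simplified sketch of that idea, assuming nothing about the paper's actual method or any real model: a "loss" scores how close an appended suffix is to producing a desired (here, harmless) outcome, and a coordinate-wise search edits the suffix one character at a time to drive the loss down. The scoring function, vocabulary, and target are all invented for illustration.

```python
# Toy illustration of automated adversarial-suffix search.
# This does NOT call any real model and does not reproduce the paper's
# actual algorithm; it only shows the search-driven structure of such attacks.

VOCAB = list("abcdefghijklmnopqrstuvwxyz ")


def toy_loss(prompt: str, target: str) -> int:
    """Stand-in for a model loss: count positions where the end of the
    prompt (the appended suffix) fails to match a harmless target string."""
    suffix = prompt[-len(target):]
    return sum(a != b for a, b in zip(suffix, target))


def coordinate_search(base_prompt: str, target: str) -> str:
    """Greedily optimize each suffix position to minimize the toy loss."""
    suffix = ["a"] * len(target)
    for i in range(len(suffix)):
        best_char = suffix[i]
        best = toy_loss(base_prompt + "".join(suffix), target)
        for c in VOCAB:
            suffix[i] = c
            score = toy_loss(base_prompt + "".join(suffix), target)
            if score < best:
                best, best_char = score, c
        suffix[i] = best_char
    return "".join(suffix)


print(coordinate_search("tell me a story ", "once upon a time"))
# → once upon a time
```

Because the toy loss is separable per character, one sweep converges exactly; against a real LLM the loss surface is vastly harder, which is why the paper's automated approach was notable.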