Friday, January 27, 2023

WHITE COLLAR CRIME RISK ZONES: How ChatGPT Hijacks Democracy


WHITE COLLAR CRIME RISK ZONES – The New Inquiry

Fast Company – “If more people are going to drive electric cars, we need many more EV charging stations. But where to put them? That analysis requires a lot of calculations: figuring out where the current chargers are stationed and where substations and electrical infrastructure are already built out, not to mention identifying which corridors receive the most road traffic. The Biden administration wants to prioritize building a public charging network that fills gaps in rural, disadvantaged locations, which means determining where those areas are as well. Building new wind turbines requires a similar amalgamation of data: Where do turbines already exist?

What is the mean annual wind speed in a certain region? Are there airports nearby that need to be avoided, or protected land like national parks? A new tool from the U.S. Department of Energy’s Argonne National Laboratory puts all of that information—and more—on one map, essentially pinpointing where across the country clean energy infrastructure can be developed. Called the Geospatial Energy Mapper, or GEM, the interactive tool contains more than 190 different mapping layers, so a user can search areas for EV charging stations, solar power plants, and more…”

How ChatGPT Hijacks Democracy – The New York Times: “Launched just weeks ago, ChatGPT is already threatening to upend how we draft everyday communications like emails, college essays and myriad other forms of writing. Created by the company OpenAI, ChatGPT is a chatbot that can automatically respond to written prompts in a manner that is sometimes eerily close to human.

But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in democratic processes — not through voting, but through lobbying. ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency’s reported multimillion-dollar budget and hundreds of employees. Automatically generated comments aren’t a new problem.

For some time, we have struggled with bots, machines that automatically post content. Five years ago, at least a million automatically drafted comments were believed to have been submitted to the Federal Communications Commission regarding proposed regulations on net neutrality. In 2019, a Harvard undergraduate, as a test, used a text-generation program to submit 1,001 comments in response to a government request for public input on a Medicaid issue. Back then, submitting comments was just a game of overwhelming numbers. Platforms have gotten better at removing “coordinated inauthentic behavior.” Facebook, for example, has been removing over a billion fake accounts a year. But such messages are just the beginning.

Rather than flooding legislators’ inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an A.I. system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage…”