
Thursday, April 27, 2023

Social Media as a Bank Run Catalyst

Social media fueled a bank run on Silicon Valley Bank (SVB), and the effects were felt broadly in the U.S. banking industry. We employ comprehensive Twitter data to show that preexisting exposure to social media predicts bank stock market losses in the run period even after controlling for bank characteristics related to run risk (i.e., mark-to-market losses and uninsured deposits). Moreover, we show that social media amplifies these bank run risk factors. During the run period, we find the intensity of Twitter conversation about a bank predicts stock market losses at the hourly frequency. This effect is stronger for banks with bank run risk factors. At even higher frequency, tweets in the run period with negative sentiment translate into immediate stock market losses. These high frequency effects are stronger when tweets are authored by members of the Twitter startup community (who are likely depositors) and contain keywords related to contagion. These results are consistent with depositors using Twitter to communicate in real time during the bank run.

That is from a new paper by J. Anthony Cookson et al. Via the excellent Kevin Lewis.



Brian Slesinsky on AI taxes (from my email)

My preferred AI tax would be a small tax on language model API calls, somewhat like a Tobin tax on currency transactions. This would discourage running language models in a loop or allowing them to “think” while idle.
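To make the mechanism concrete, here is a minimal sketch in Python of how such a levy might be metered at an API gateway. The wrapper class, the client interface, and the rate are all hypothetical illustrations, not any real provider's API.

    TAX_PER_CALL = 0.001  # hypothetical flat levy, in dollars, per API call

    class TaxedModelClient:
        """Wrap any client exposing complete(prompt) and meter a flat per-call tax."""

        def __init__(self, client):
            self.client = client
            self.calls = 0
            self.tax_owed = 0.0

        def complete(self, prompt):
            # Every call, useful or idle, accrues the same small levy,
            # so running a model in a loop has a visible cumulative cost.
            self.calls += 1
            self.tax_owed += TAX_PER_CALL
            return self.client.complete(prompt)

The arithmetic is the point: an agent that loops ten thousand times while "thinking" idly would owe $10 in tax alone, negligible for supervised chat but material for high-volume automated use.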

For now, we mostly use large language models under human supervision, such as with AI chat. This is relatively safe because the AI is frozen most of the time [1]. It means you get as much time as you like to think about your next move, and the AI doesn’t get the same advantage. If you don’t like what the AI is saying, you can simply close the chat and walk away.

Under such conditions, a sorcerer’s apprentice shouldn’t be able to start anything they can’t stop. But many people are experimenting with running AI in fully automatic mode, and that seems much more dangerous. It’s not yet as dangerous as experimenting with computer viruses, but that could change.

Such a tax doesn’t seem necessary today because the best language models are very expensive [2]. But making and implementing tax policy takes time, and we should be concerned about what happens when costs drop.

Another limit that would discourage dangerous experiments is a minimum reaction time. Today, language models are slow; using one reminds me of using a dial-up modem in the old days. But we should be concerned about what happens when AIs start reacting to events much more quickly than people do.

Different language models quickly reacting to each other in a marketplace or forum could cause cascading effects, similar to a “flash crash” in a financial market. On social networks, the volume is already far higher than we can keep up with, but it could get worse when conversations between AIs start running at superhuman speeds.

Financial markets don’t have limits on reaction time, but there are trading hours and circuit breakers that give investors time to think about what’s happening in unusual situations. Social networks sometimes have rate limits too, but limiting latency at the language model API seems more comprehensive.
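As a sketch of what a latency floor could look like in practice, the Python below pads every response up to a minimum wall-clock delay at the API layer. The two-second floor and the handler interface are illustrative assumptions, not a description of any existing service.

    import time

    MIN_RESPONSE_SECONDS = 2.0  # illustrative floor, not a real policy value

    def with_latency_floor(handler):
        """Make each call take at least MIN_RESPONSE_SECONDS of wall-clock time."""
        def wrapped(prompt):
            start = time.monotonic()
            result = handler(prompt)
            elapsed = time.monotonic() - start
            if elapsed < MIN_RESPONSE_SECONDS:
                time.sleep(MIN_RESPONSE_SECONDS - elapsed)
            return result
        return wrapped

A floor like this costs a human chatting at human speed nothing, but it caps the round-trip rate of two models talking to each other, much as a circuit breaker slows a cascade without shutting the market.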

Limits on transaction costs and latency won’t make AI safe, but they should reduce some risks more effectively than attempting to keep AIs from getting smarter. Machine intelligence isn’t defined well enough to regulate directly: there are many benchmarks, and it seems unlikely that researchers will agree on a one-dimensional measure, like IQ in humans.

[1] https://skybrian.substack.com/p/ai-chats-are-turn-based-games

[2] Each API call to GPT-4 costs several cents, depending on how much input you give it. Running a smaller language model on your own computer is cheaper, but the quality is lower, and it has opportunity costs since it keeps the computer busy.