Monday, December 15, 2025

Old Teslas Are Falling Apart

Old Teslas Are Falling Apart Futurism


Nordic people know how to beat the winter blues. Here’s how to find light in the darkest months


FOR BEST RESULTS, TAKE IT WITH COFFEE AND RED WINE WHILE SITTING IN THE SUN:  Dark Chocolate Compound Linked To Slower Aging


Book Review: Harnessing the Power of Dreams and Nightmares

In “Nightmare Obscura,” scientist Michelle Carr argues that our dreams are essential pillars of who we are.


Artificial Intelligence and the Future of Work

National Academies of Sciences, Engineering, and Medicine. 2025. Artificial Intelligence and the Future of Work. Washington, DC: The National Academies Press. Advances in artificial intelligence (AI) promise to improve productivity significantly, but there are many questions about how AI could affect jobs and workers. 

Recent technical innovations have driven the rapid development of generative AI systems, which produce text, images, or other content based on user requests – advances which have the potential to complement or replace human labor in specific tasks, and to reshape demand for certain types of expertise in the labor market. 

Artificial Intelligence and the Future of Work evaluates recent advances in AI technology and their implications for economic productivity, the workforce, and education in the United States. 

The report notes that AI is a tool with the potential to enhance human labor and create new forms of valuable work – but this is not an inevitable outcome. Tracking progress in AI and its impacts on the workforce will be critical to helping inform and equip workers and policymakers to flexibly respond to AI developments.



Every Legal Team Needs to See This LLM Leak

Brainyacts: 1. A user pulled out an internal company document just by prompting. Let that sink in. A determined user was able to extract, through the chat interface, an internal memo that was never meant to be disclosed. Not “state secret” level, but still: a company document that describes how the model is trained and how it should behave came out the front door via text. No hacking. Just prompting. If you’re not at least a little freaked out by that, you’re not thinking hard enough about what your own deployments might be leaking or exposing over time.

2. Your system prompt is part of your compliance architecture, whether you like it or not.

  • Quick reminder: the system prompt is the invisible layer that sits between your users and the model.
  • Your employees type a prompt → it passes through the system prompt → the model’s answer passes back through that same system prompt.
  • That hidden layer is where you (or your vendor) control things like tone, friendliness, how robust the response is, what’s off-limits, and a lot of other behavioral nuance.
  • If your org has written or tweaked that system prompt, congratulations: you’ve just created a new surface area of liability and governance. That text is now part of your internal control stack. My guess? Most teams haven’t treated it that way yet.

3. Prompting isn’t “asking questions.” It’s steering the engine. Every prompt nudges the model’s reasoning path and risk tolerance. There are deeper levels of prompting: framing, context-setting, role instructions that can materially change what the model will and won’t do. Any user in your org can do this, often without realizing how much they’re steering. That’s power, but it’s also a governance problem.

4. The model infers intent and identity, and that cuts both ways. Claude doesn’t actually know who’s on the other side. It guesses based on what’s written. That means an employee can “speak as” a colleague, a client, a regulator, or a fictional role and the model will adjust its behavior accordingly. There’s value in that (testing scenarios, simulating counterparties), but there’s also obvious room for mischief, misrepresentation, and internal confusion if you don’t put rails around it.

5. The real risk isn’t just what the AI might do. It’s how you deploy it. The big frontier here isn’t “rogue AI.” It’s:

  • what data you’re feeding these models,
  • how your system prompts are written,
  • how third-party models are wired into your stack, and
  • how little formal oversight exists at that deployment layer.

This is way bigger than having a polite “AI usage policy” on your intranet. This is infrastructure-level compliance and governance. And it’s coming for everyone. To make this as practical as possible, I also created a one-page AI Deployment Risk Playbook that you can download and share with your leadership team. It’s a concise PDF designed for GCs, CISOs, CTOs, KM leaders, and anyone responsible for governing AI inside their organization. Download the one-page guide here