Wednesday, November 30, 2022

Czech Museum of Literature

Knowledge is having the right answers. Intelligence is asking the right questions. Wisdom is knowing when to ask the right questions.


Chalmers: “Dolly Parton wrote ‘Jolene’ and ‘I Will Always Love You’ in one day. It took the shadow treasurer 19 days to write that question.”


For almost seven decades it was the National Literature Memorial; it has now moved and recently re-opened as the Czech Muzeum literatury. At Radio Prague International, Ruth Fraňková reports on it in Museum of Literature opens in Prague.

  1. “Utilitarian longtermism is objectionable. Longtermism sans consequentialism is another matter” — Elliott R. Crozat (Purdue Global) considers deontological longtermism
  2. “Most of Earth thus mobilized toward figuring out what is widely thought to be the easiest problem of the three: the line between Anna and not Anna” — a story by Patrick House about how to delineate the boundaries of consciousness
  3. “Aesthetic value makes the world worthwhile, and… a good life is lived in pursuit and reflection of that aesthetic value” but “evil forces a significant qualification to aestheticism” — Tom Cochrane (Flinders) defends aestheticism but lets some moralizing in
  4. “Your love of pleasure, Callicles / Is like a jar that always leaks / Like a jar that leaks and then gets filled again. / Leaking, filling, running wild / Like a tyrant, like a child / Ceaseless wanting is, in fact, a kind of pain” — a song by Luisa Cichowski about the dispute in Plato’s Gorgias between Socrates & Callicles over the place of pleasure in the good life
  5. “The last unit we cover is on ‘The Ethics of Horror,’ and we discuss whether there is something morally dubious about watching and enjoying horror” — Kenneth L. Brewer (UT Dallas) describes his course on the philosophy of horror films


  1. Feel like you’re not good enough to be an academic? Turns out it’s because your parents weren’t good enough at encouraging you — a new study finds that “the less encouragement a doctoral student received from their parents in childhood and adolescence, the more likely they were to suffer impostor feelings”
  2. “It might sound strange, or even offensive, to suggest that writing about threats to free speech could make people afraid of speaking. The thing is, we know this is how behavior works in other domains” — Eve Fairbanks on the gap between talk of cancel culture and its reality

THIS IS BOTH IMPRESSIVE AND DISTURBING: Meta researchers create AI that masters Diplomacy, tricking human players.

I’ve been playing Diplomacy since middle school. There are no random elements like dice or cards; the game is all human interaction.

Even before Deep Blue beat Garry Kasparov at chess in 1997, board games were a useful measure of AI achievement. In 2016, another barrier fell when AlphaGo defeated Go master Lee Sedol. Both of those games follow a relatively clear set of analytical rules (although Go’s rules are typically simplified for computer AI).

But with Diplomacy, a large portion of the gameplay involves social skills. Players must show empathy, use natural language, and build relationships to win—a difficult task for a computer player. With this in mind, Meta asked, “Can we build more effective and flexible agents that can use language to negotiate, persuade, and work with people to achieve strategic goals similar to the way humans do?”

The resulting model mastered the intricacies of a complex game. “Cicero can deduce, for example, that later in the game it will need the support of one particular player,” says Meta, “and then craft a strategy to win that person’s favor—and even recognize the risks and opportunities that that player sees from their particular point of view.”
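
Meta’s description boils down to a loop: model what the other players intend, plan your own orders around those intents, then generate messages that support the plan. Below is a minimal Python sketch of that division of labor. To be clear, every name in it (GameState, predict_intents, plan_actions, generate_messages) is an illustrative assumption, not Meta’s actual API, and the stubs stand in for the learned models Cicero actually uses.

```python
# Minimal sketch of a Cicero-style turn, based on Meta's published
# description: intent modeling -> planning -> plan-conditioned dialogue.
# All names and stubs here are illustrative assumptions, not Meta's API.
from dataclasses import dataclass, field

@dataclass
class GameState:
    """What the agent can see: each player's units, plus the dialogue so far."""
    units: dict                                   # player name -> list of units
    messages: list = field(default_factory=list)  # negotiation history

def predict_intents(state: GameState, me: str) -> dict:
    """Guess what each rival intends this turn, from the board and dialogue.
    Stub: the real agent conditions a learned model on both; we guess 'hold'."""
    return {p: "hold" for p in state.units if p != me}

def plan_actions(state: GameState, me: str, intents: dict) -> list:
    """Choose our orders given the predicted intents.
    Stub: a real planner searches over joint actions; we issue trivial orders."""
    return [f"{unit} hold" for unit in state.units[me]]

def generate_messages(plan: list) -> list:
    """Draft negotiation messages consistent with the chosen plan.
    Stub: the real system conditions a language model on the plan,
    which is what keeps its talk tied to its actual moves."""
    return [f"I intend to play: {order}" for order in plan]

def take_turn(state: GameState, me: str) -> tuple:
    intents = predict_intents(state, me)      # model the other players
    plan = plan_actions(state, me, intents)   # plan our orders around them
    outgoing = generate_messages(plan)        # negotiate in support of the plan
    return plan, outgoing

# Toy example: a two-player position.
state = GameState(units={"FRANCE": ["A PAR", "F BRE"], "GERMANY": ["A MUN"]})
print(take_turn(state, "FRANCE"))
```

The interesting design choice is the last step: dialogue is generated downstream of the plan, so the agent’s messages track what it actually intends to do, which is precisely what lets it “win that person’s favor” rather than just chat.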

At the same time, this technology could be used to manipulate humans by impersonating people and tricking them in potentially dangerous ways, depending on the context. Along those lines, Meta hopes other researchers can build on its code “in a responsible manner.”

Meta claims to have taken steps to prevent Cicero from being abused (or from becoming abusive, I suppose), but it has also released the source code on GitHub.

From the comments at Ars: “Unbelievable. Training a non-conscious AI to ruthlessly deceive, manipulate and enlist humans towards the achievement of an arbitrary value function no matter the cost, as one of the first AI functions to develop. It’s like these guys want a paperclip-optimising singularity apocalypse.”

And yet: META WORKS ON AN AI, INSTEAD PRODUCES a “random bullshit generator.”