Michel Houellebecq, the bestselling French novelist and provocateur, has a knack for predicting disasters. His sex-tourism novel Plateforme (2001) featured a terrorist incident at a resort in Thailand that was eerily similar to the 2002 Bali bombings. Soumission (2015) was released on the day of the al-Qaeda-linked Charlie Hebdo massacre in Paris; the novel’s subject (an Islamist takeover of France) made the coincidence distinctly uncomfortable. Now Houellebecq’s most recent book, Sérotonine (2019), appears to have foreseen the ‘gilet jaune’ (‘yellow vest’) protests that have rocked France since November.
Houellebecq, prophet or troll?
What Literary Agents Want Now
“On the hunt for a less problematic Simone de Beauvoir who can speak for literally all women at once and also not be so serious. This seems like it should be simple?” (Yes, this is a satirical list … kind of.) – The New York Times
Five lessons from Notre Dame
By now, most people who operate in the world of misinformation are well aware of the conspiracy theories associated with the fire at Paris’ Cathedral of Notre Dame this week.
The hoaxes came swiftly and unabashedly: assertions that the fire was deliberately started, that there were chants of “Allahu Akbar” outside the church and that a Yellow Vest protester could be seen in a tower during the fire (he was a firefighter).
The good news is that fact-checkers and reporters who cover online misinformation were all over this story. FactCheckEU worked to debunk a number of falsehoods, as did BuzzFeed News in the United States. The Verge’s Casey Newton noted the speed with which the misinformation spread, but said none of it “truly went viral.” Other coverage came from NBC, The Washington Post, Politico Morning Tech and CNN.
(Poynter-owned) PolitiFact and other fact-checkers did a solid job explaining how the “lone man” in the tower was really a firefighter. PolitiFact also flagged a doctored photo on Facebook of Muslims laughing in front of Notre Dame. (We do question, however, the use of the “Pants-on-Fire” rating in both cases, given the event at hand.)
These kinds of hoaxes spread after almost every big breaking news story. So what were the lessons learned this time?
First, Twitter continues to be ground zero for misinformation — and the company is doing very little to try to stop it. Daniel reported on how, while fact-checkers were able to outscale several of the Notre Dame hoaxes on Facebook — with which many fact-checkers partner to reduce the spread of false posts — Twitter fakes dominated the conversation. And that’s unlikely to change unless the company tackles misinformation more aggressively.
Speaking of Twitter, it failed to enforce its own policies about imposter accounts in time to stop the spread of false claims. A fake CNN account claimed that it had “confirmed” the fire was an act of terrorism. That’s bogus, as is another claim from a fake Fox News account about U.S. Rep. Ilhan Omar (D-Minn.), but it still took Twitter a while to take both accounts down.
Second, misinformers don’t need to create a complete fabrication to go viral online. As noted by several fact-checkers covering the Notre Dame fire, real news articles were posted out of context to try to spread false, Islamophobic narratives about the event. In one example, Twitter users shared a real 2016 story from El Mundo about four people being detained near Notre Dame as if it had happened this week.
Third, the Notre Dame fire illustrated how misinformation doesn’t operate in a vacuum — it often jumps between platforms and languages, becoming more mainstream in the process. BuzzFeed did a great job of outlining this in a timeline of the misinformation, in which it reported how hoaxes about the origin of the fire eventually made their way into talking points on mainstream American news shows.
Fourth, platforms’ systems aimed at contextualizing conspiracies with truthful content can backfire. Many outlets, including The New York Times, covered the fact that a YouTube algorithm mistakenly displayed information about the Sept. 11, 2001, terrorist attacks alongside livestreams of the fire. After ample reporting on the bug, YouTube removed the recommendations. But Nieman Lab’s Joshua Benton asked a very good question: “Could those information boxes under YouTube conspiracy videos add legitimacy instead of reduce it?”
Finally, here was a case where misinformation could be quickly debunked because authorities were quick to suggest the fire was likely accidental, and possibly related to a refurbishment project in the cathedral. The Paris public prosecutor, Rémy Heitz, said “nothing indicates” that the fire was started on purpose. But it’s a double-edged sword; in the absence of an official cause of the fire, conspiracy theories filled in the gaps.
Authorities have said they expect a long and detailed investigation to find out what really happened, which means conspiracy theories will probably continue to spread online. But with the sheer amount of early and aggressive debunking, they may have been at least somewhat contained.
. . . technology
- During elections in Indonesia and India, “the problem of disinformation is growing much faster than our ability to combat it,” Axios reported. And Quartz reported that even TikTok, a social media platform known for its silly video compilations, is warning users about a potential onslaught of misinformation.
- Wired covered a TED Talk by Twitter co-founder Jack Dorsey in Vancouver. It’s worth reading, though Dorsey’s quotes on plans to fix problems with the platform are frustratingly vague. An example: “We could do a bunch of superficial things to address what you’re talking about, but we need to go deep.” Fast Company has a meaty write-up as well.
- The Washington Post launched a WhatsApp channel to update readers on the Indian election. That strategy seems to borrow from what fact-checkers have been doing for two years.
. . . politics
- Plenty of work has been done to uncover how Russian state operatives used fake social media accounts to try to sway American voters in favor of Donald Trump during the 2016 election. But The Washington Post looked into how those same tactics were used to target Bernie Sanders supporters after he lost the Democratic nomination to Hillary Clinton.
- Earlier this month, The New York Times reported that Russian agents were using real people’s Facebook accounts to publish political ads or spread false stories. Now, The Atlantic describes Ukraine’s presidential election this weekend as not just a laboratory for Russian information warfare tactics, but “also a proving ground for possible solutions.”
- Yet another study elucidates the effect of fact checks on people who receive misinformation. This research shows that supporters of politicians spreading misinformation became more skeptical once the information was corrected, but that having the facts didn’t necessarily change how they felt about the politicians.
. . . the future of news
- Discussions around the world about whether governments should more closely regulate harmful content have “concerning implications” for free speech, Jessica Brandt, the head of policy and research for the German Marshall Fund's Alliance for Securing Democracy, wrote in Axios. Related: Ipsos MORI found that three-quarters of Britons think fake news should be a crime — but not everyone thinks it’s a good idea.
- Speaking of regulation, House Speaker Nancy Pelosi (D-Calif.) told Recode’s Kara Swisher this week that Big Tech’s self-regulating days “probably should be” over and suggested Congress could remove the platforms’ legal liability protections for third-party speech, which is the source of much misinformation.
- In Spain, 16 media outlets have united to fact-check the general election later this month. The collaboration, which was created in partnership with First Draft, the organization behind similar projects like CrossCheck, Comprova and CrossCheck Nigeria, even created a code of principles guiding the fact-checkers’ coverage.
Each week, we analyze five of the top-performing fact checks on Facebook to see how their reach compared to the hoaxes they debunked. Read more about this week’s numbers, and how fact-checkers tried to quell the tide of misinformation about the Notre Dame fire, here. (A minimal sketch of this fact-to-fake comparison appears after the list below.)
1. Les Décodeurs: "Fake news on the origin of the fire of Notre-Dame de Paris" (Fact: 7.2K engagements // Fake: 3.2K engagements)
2. Factcheck.org: "A Phantom Ocasio-Cortez Quote on Gun Ownership" (Fact: 3.2K engagements // Fake: 1.4K engagements)
3. 20 Minutes: "Fire in Notre-Dame-de-Paris: Disappearance of the rosettes, origin of the fire" (Fact: 1.4K engagements // Fake: 8.4K engagements)
4. Full Fact: "Lemon juice, coconut oil and stopping all sugar intake won’t cure cancer" (Fact: 1.4K engagements // Fake: 1.4K engagements)
5. Maldito Bulo: "Hoaxes and misinformation about the fire of the Notre Dame de Paris cathedral" (Fact: 1.3K engagements // Fake: 663 engagements)
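For readers curious about how the comparison works, here is a minimal Python sketch — purely illustrative, not part of our actual analysis pipeline — that computes the ratio of fact-check engagements to hoax engagements from the figures listed above. A ratio above 1 means the debunk out-traveled the hoax.

```python
# Illustrative sketch only: ratio of fact-check engagements to hoax engagements,
# using the figures from the list above (the "K" values expanded to raw counts).
checks = [
    ("Les Décodeurs", 7_200, 3_200),
    ("Factcheck.org", 3_200, 1_400),
    ("20 Minutes", 1_400, 8_400),
    ("Full Fact", 1_400, 1_400),
    ("Maldito Bulo", 1_300, 663),
]

for outlet, fact, fake in checks:
    ratio = fact / fake  # > 1 means the fact check reached more people than the hoax
    print(f"{outlet}: fact {fact:,} vs. fake {fake:,} -> ratio {ratio:.2f}")
```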
When it comes to misinformation surrounding India’s election, we wonder how fact-checkers even decide what to check. The country is awash in fakery and disinformation spread on WhatsApp, as well as doctored or selectively edited videos on social media.
This week’s choice is an example of the latter.
“No, the head of India’s opposition Congress party did not promise to provide farmland on the moon,” Agence France-Presse wrote in a fact check. The 24-second clip posted to Facebook purports to show Indian National Congress party president Rahul Gandhi at a public rally telling the crowd that he will give farmers fields on the moon to grow crops.
Actually, AFP reported, the video is an edited version of an old speech in which Gandhi is ridiculing incumbent Prime Minister Narendra Modi for making impossible pledges to the nation’s farmers; he uses moon farming as an example of the kind of unrealistic promise he would expect Modi to make.
What we liked: Out-of-context video clips are a staple of political deception. They’re fairly easy to make and they seem so real, given that they show real people and events. AFP dug this hoax out, as well as the original video, and showed how they were actually the same event — just selectively edited.
1. Factcheck.org published this article with the help of Tech & Check alerts, an automated fact-checking system developed by the Duke Reporters’ Lab.
2. Why are we susceptible to “fake news”? A cognitive scientist explained in Nieman Lab.
3. Canada’s election this fall will be vulnerable to foreign disinformation campaigns, a professor and surveillance expert wrote in the Globe and Mail this week. He questions whether the country is prepared for it. Also in Canada: No, Prime Minister Justin Trudeau did not ask Nigeria to send a million immigrants.
4. Voters in Australia are concerned, too, about misinformation surrounding their candidates, according to a News Corp. poll.
5. Reuters profiled Indonesian fact-checkers ahead of this week’s presidential election.
6. Writer Anna Merlan has a new book about conspiracy theories in America. Mother Jones interviewed her this week.
7. Arabic-speaking journalists: This video will show you how to use Mapchecking.com to fact-check statements about crowd sizes. (A rough sketch of the arithmetic behind such tools appears after this list.)
8. CJR checked in with a fake news writer in Macedonia: “Facebook and Google have made it impossible to run and monetize fake news operations from pages in Macedonia or from the Balkans and beyond.”
9. Swag news: Snopes now has a gift shop!
10. A QAnon banner was recently seen at, of all places, a 5K race in Jacksonville Beach, Florida.
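For context on item 7: crowd-size tools like Mapchecking.com work by multiplying the area a crowd occupies by an assumed density of people per square meter. The short Python sketch below illustrates that arithmetic; the function name and density figures are our own illustrative assumptions, not the tool’s actual code or presets.

```python
# Rough illustration of the crowd-size arithmetic behind mapping tools:
# estimated headcount = occupied area (m^2) * assumed density (people per m^2).
# The density values below are illustrative assumptions, not Mapchecking.com's presets.

def estimate_crowd(area_m2: float, people_per_m2: float) -> int:
    """Return a rough headcount for a given area and crowd density."""
    return round(area_m2 * people_per_m2)

# The same 10,000 m^2 plaza yields very different claims depending on density:
print(estimate_crowd(10_000, 1.0))  # loose crowd  -> 10000
print(estimate_crowd(10_000, 2.5))  # packed crowd -> 25000
```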
via Daniel and Susan