MARK BUCKLEY. Some Home Truths
There are some things which are true, and some which are not.
There are many things which are debatable, or contentious, or even
undecided. But the true things will always be true. Our media habitually
believes that stupid, nonsensical, or just plain wrong opinions deserve
to be treated with the same weight as those things which are true.
New Survey on Technology Use by Law Firms: How Does Your Firm Compare? – Nicole L. Black recommends that firms conduct a technology audit to review the need for software updates, to identify and replace outdated technology and applications, and to plan and implement the migration of operations such as document management and time-and-billing systems to cloud computing.
Sharia law is already here — the IRS must respond (The Hill)
Chinese Queensland Danger-Rescue Neighbourhood Watch.
TikTok, a.k.a. Douyin in Communist China
TikTok is an iOS and Android social media video app for creating and sharing short lip-sync, comedy, and talent videos. ByteDance launched the app in 2017 for markets outside of China, having previously launched Douyin (Chinese: 抖音) for the Chinese market in September 2016.
Australia is facing a looming cyber emergency, and we don’t have the high-tech workforce to counter it. Australia’s social scientists and intelligence agencies may need a more tech-savvy workforce to protect the nation.
Peel back the layers of Twitter with this freshly updated tool - Poynter – A person or organization’s social media account reveals two layers of information – “The first is, obviously, the contents of the posts themselves. The second — much harder to obtain at a glance — is the structure and patterns contained within those messages. The latter can often enhance or, more common than you’d think, negate the former. A football star, for example, may say that he’s working hard and sleeping well and focusing on getting prepped for The Big Game. But a quick analysis of his Twitter account may reveal that he’s tweeting at all hours of the day and night (including during practice) about a variety of topics that have nothing to do with his team. Normally, this bird’s eye view (I know, cliché, but I can’t resist a Twitter pun) of information is interred in Twitter’s API, available only to those adept at coding and analysis, or to those willing to navigate the myriad tools that claim to be the best at extracting useful information from the site. But data analyst Luca Hammer just relaunched his Account Analysis tool, a longtime favorite of mine, to make it even easier to explore Twitter accounts on macro and micro scales.”
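The posting-pattern "layer" described above can be approximated even without a dedicated tool. As a rough sketch (the timestamps below are invented for illustration, not pulled from any real account), bucketing an account's tweet times by hour of day already surfaces when someone is posting at 2 a.m.:

```python
from collections import Counter
from datetime import datetime

# Hypothetical sample of tweet timestamps (ISO 8601), standing in for
# data an analyst would pull from the Twitter API.
timestamps = [
    "2019-10-14T02:13:00", "2019-10-14T03:45:00", "2019-10-14T14:02:00",
    "2019-10-15T02:58:00", "2019-10-15T23:40:00", "2019-10-16T03:05:00",
]

# Bucket posts by hour of day: the structure-and-patterns layer the
# article describes, as opposed to the content of the posts themselves.
by_hour = Counter(datetime.fromisoformat(ts).hour for ts in timestamps)

for hour in sorted(by_hour):
    print(f"{hour:02d}:00  {'#' * by_hour[hour]}")
```

Tools like Account Analysis do far more than this, but the principle is the same: metadata, not message content, carries the second layer of information.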
An AI that writes convincing prose risks mass-producing fake news MIT Technology Review – Fed with billions of words, this algorithm creates convincing articles and shows how AI could be used to fool people on a mass scale – “…The researchers set out to develop a general-purpose language algorithm, trained on a vast amount of text from the web, that would be capable of translating text, answering questions, and performing other useful tasks. But they soon grew concerned about the potential for abuse. “We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” says Jack Clark, policy director at OpenAI. Clark says the program hints at how AI might be used to automate the generation of convincing fake news, social-media posts, or other text content. Such a tool could spew out climate-denying news reports or scandalous exposés during an election. Fake news is already a problem, but if it were automated, it might be harder to tune out. Perhaps it could be optimized for particular demographics—or even individuals…”
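The model described above is far beyond anything sketchable here, but the underlying idea, generating text by repeatedly predicting a plausible next token from the preceding context, can be illustrated with a toy word-level Markov chain. The corpus and every name below are illustrative only; this is not OpenAI's method, just the generation principle in miniature:

```python
import random
from collections import defaultdict

# Toy word-level Markov chain trained on one sentence. Real language
# models learn from billions of words; this learns from fourteen, but
# the loop is the same in spirit: pick the next word given the last one.
corpus = ("fake news is already a problem but automated fake news "
          "might be harder to tune out").split()

model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

random.seed(0)  # deterministic output for the sketch
word = "fake"
generated = [word]
for _ in range(8):
    choices = model.get(word)
    if not choices:  # reached a word with no observed successor
        break
    word = random.choice(choices)
    generated.append(word)
print(" ".join(generated))
```

The output is fluent-looking recombination of the training text, which is exactly why scale matters: with billions of words of training data instead of one sentence, such recombination starts to read like original prose.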
Newsrooms gear up to cover 2020 misinformation
With a little more than a year to go before the 2020 election, U.S. newsrooms are gearing up for what they expect will be a deluge of misinformation aimed at influencing, dividing and confusing voters.
The efforts fall, roughly, into two categories: covering misinformation as a beat to alert readers to hoaxes and trends in false information, and learning or improving verification skills to ensure that news stories don’t reproduce or amplify falsehoods.
In an example of the former, The Washington Post last month announced that it had assigned reporter Isaac Stanley-Becker to what it called a new “digital democracy” beat focused on “the largely unregulated and increasingly dominant role of the internet in driving U.S. politics.” One of his early pieces was a smart take on how Republicans who express concern about President Donald Trump’s interactions with Ukrainian President Volodymyr Zelensky are becoming targets of disinformation campaigns.
This summer, The New York Times, in a piece laying out its fact-checking operation and its plans to cover online disinformation, said that it will again create a “tip line” readers can use to flag material they think is intended to mislead. A similar venture during the midterms resulted in about 4,000 submissions, it said.
And, of course, our newsletter co-author Daniel Funke recently launched a new misinformation beat for (Poynter-owned) PolitiFact, focusing on falsehoods from and about the 2020 candidates’ campaigns.
We saw similar efforts to bolster coverage of misinformation before the 2018 midterms, but the presidential cycle is expected to bring new methods and intensity to the manipulation attempts.
In the verification space, the nonprofit First Draft has launched a program it calls “Together Now” to train newsrooms across the country in responsible reporting on misinformation.
“We see examples every day of problematic content winding its way into news reports,” said Aimee Rinehart, First Draft’s director of partnerships and development. She said it’s no longer sufficient for newsrooms to have one specialized forensics team to spot fakes — all journalists have to be trained in skills like reverse-image searches.
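Reverse-image search, one of the skills Rinehart mentions, rests on image fingerprinting: reducing a picture to a compact signature that survives recompression and resizing. A toy sketch of one common fingerprint, the average hash, on made-up grayscale pixel grids (no real search engine works this simply, and the pixel values are invented):

```python
# Toy "average hash": pixels brighter than the image's mean brightness
# become 1-bits, darker pixels become 0-bits. Two images are likely the
# same picture when their bit patterns differ in only a few positions.
def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    # Count the bit positions where the two fingerprints disagree.
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 200, 30], [30, 200, 30], [30, 30, 200]]
# A re-uploaded copy with slightly shifted brightness values.
recompressed = [[190, 210, 40], [25, 195, 35], [20, 40, 190]]

dist = hamming(average_hash(original), average_hash(recompressed))
print("bit distance:", dist)
```

Because the hash depends on relative brightness rather than exact pixel values, the recompressed copy still matches, which is the property that lets a journalist trace a viral photo back to its earlier appearances.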
“Disinformation actors are working double time to fool the night and weekend crews,” she said. “Newsrooms can no longer rely on a 9-to-5 news cycle.”
Storyful, a social media intelligence and news agency, is doing work in both categories – helping its newsroom partners report on misinformation and identify manipulations.
The company, owned by News Corp, recently launched a unit called Investigations by Storyful that will partner with news organizations to use its social media analysis – data about what people are doing and saying online – to identify trends, including problematic ones.
An example, said Darren Davidson, Storyful’s editor-in-chief, was a recent Wall Street Journal story exposing how people are getting around Facebook’s ban on selling guns in its marketplaces by pretending to sell just the gun cases. But the gun cases are posted at inflated prices, an indication that the postings have become “code” for the sale of actual guns. After the Journal’s story, 15 senators called on Facebook to halt the practice.
Storyful will also work with newsrooms to help identify fake content or verify that photos and videos on social media are legitimate, which will be a particular need in the 2020 campaign, Davidson said in a phone interview.
“There’s a lot of concern in newsrooms about being gamed or conned as the election cycle ramps up,” he said.
Have you heard of other news organizations staffing up their misinformation teams? Let us know at factually@poynter.org.
. . . technology
· Facebook has a new problem in Brazil: false job offers. For a month, Agência Lupa (in Portuguese only) monitored 35 posts that carried fake hiring promises and had more than 107,000 interactions. Those posts promised users a job but required them to leave a comment to get to “the promised offer.” After a while, a bot would pop up and contact users, sending them an external link. Those who clicked on the link were asked to re-enter their logins and passwords, not noticing they were on a fake Facebook login page. Through this fraud, a great deal of personal data was stolen and some profiles were hijacked.
· Anti-vaccination billboards full of misinformation are being funded through Facebook fundraisers, despite a crackdown by the platform on such practices, NBC News reported. Facebook closed one fundraiser after NBC asked about the practice, but several remained active, the network’s story said.
· More on Facebook: BuzzFeed News’ Craig Silverman published an investigation into how a company in the U.S. scammed users by selling them bogus free-trial offers for low-quality products and paying them to rent out their accounts to post ads.
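The Brazilian phishing scheme above turns on a link that looks like Facebook but isn't. A minimal, hypothetical sketch of the kind of hostname check that catches such lookalikes (all URLs below are invented for illustration):

```python
from urllib.parse import urlparse

def belongs_to(url, domain):
    # A link genuinely belongs to a domain only if its hostname IS that
    # domain or ends with "." + domain. A lookalike such as
    # "facebook.com.evil.example" fails this test because the real
    # registrable domain is the rightmost part of the hostname.
    host = urlparse(url).hostname or ""
    return host == domain or host.endswith("." + domain)

links = [
    "https://www.facebook.com/login",           # genuine subdomain
    "https://facebook.com.jobs-offer.example",  # lookalike prefix
    "http://faceb00k-login.example/entrar",     # typosquat
]
for link in links:
    verdict = "ok" if belongs_to(link, "facebook.com") else "SUSPECT"
    print(link, "->", verdict)
```

Real phishing defenses also check certificates, page content and known-bad lists, but the hostname test alone already flags both scam links in this toy set.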
. . . politics
· Democratic presidential candidate Elizabeth Warren ran a Facebook ad saying the company’s CEO, Mark Zuckerberg, had endorsed Donald Trump for re-election. He hadn’t, of course, but her point was to call Facebook’s bluff on its policy of not fact-checking political ads.
· Politico reported that the Democratic National Committee sent an urgent email to presidential campaigns the day before Tuesday’s debate warning them to stay on guard against foreign manipulation efforts and disinformation from Trump and his allies.
· Vice wrote about how the U.S. Census Bureau is working with technology companies to limit the spread of misinformation about the 2020 census. But some false reports have already gained traction online.
. . . the future of news
· ABC News mistakenly ran a video from a 2017 shooting event at a gun range in Kentucky with a story about Turkish attacks in northern Syria. The network apologized for the report. The New York Times quoted First Draft’s Claire Wardle as saying such mistakes are “relatively easy” to avoid.
· If you’re counting on machine learning to help people automatically identify misinformation, we have some bad news. Two papers from MIT researchers found that, while machines are good at detecting when content is created by other machines, they’re pretty bad at determining whether something is true or false.
· The Globe and Mail in Toronto talked to several experts about why people fall for false information and share it. One researcher at New York University is conducting a brain imaging study to investigate why people older than 65 are six to seven times more likely to share false news than their younger counterparts.
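The MIT finding above, that detectors can spot machine-generated text but not falsehood, can be illustrated with a deliberately crude statistic. The type-token ratio below (distinct words divided by total words) is only a toy stand-in for the statistical fingerprints real detectors use, and the sample sentences are invented; note that it says nothing about whether either sentence is true:

```python
def type_token_ratio(text):
    # Distinct words / total words: crude repetitive-text detector.
    # Low-quality generated text often loops, driving this ratio down.
    words = text.lower().split()
    return len(set(words)) / len(words)

human = "newsrooms are bracing for a wave of fabricated stories before the vote"
loopy = "fake news fake news fake news fake news fake news fake news"

print(round(type_token_ratio(human), 2))
print(round(type_token_ratio(loopy), 2))
```

A statistic like this can separate repetitive machine output from varied human prose, but a fluent lie scores exactly like a fluent truth, which is the limitation the researchers describe.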
Last Monday, when the Spanish Supreme Court sentenced former leaders of the Catalan independence movement to lengthy prison terms, the streets of Barcelona became the stage for violent protests. Inevitably, social media was rife with false information.
In about 24 hours, Maldita.es and Newtral, two fact-checking organizations based in Madrid, caught and managed to debunk at least eight pieces of misleading content that had gone viral.
It was false, for example, that a 5-year-old boy and a man who had a heart attack at El Prat Airport died because protesters wouldn't let an ambulance get to them. It was also false that shares in the Spanish stock market fell as a result of the ruling, and that business owners threatened employees who thought about going on strike.
Spanish fact-checkers also pointed out a series of unproven claims attributed to presidential candidates. In a few weeks, the country will hold its second election in six months.
What we liked: Maldita.es and Newtral not only worked quickly but also showed they are capable of fact-checking content in different contexts, using different sources: the public health system, the stock market and the presidential campaigns. Maldita even managed to put all its fact checks on one page, which made it easier to read and distribute.
1. The first samba about fake news will be sung in February, during Rio de Janeiro's widely known Carnival, by São Clemente Samba School participants.
2. The Washington Post Fact Checker has updated its tally of false or misleading claims from Trump: 13,435 over 993 days.
3. First Draft has a new guide for verifying online information.
4. As pro-democracy protests in Hong Kong continue, scholars warn that disinformation is worsening.
5. Bellingcat wrote about a coordinated social media campaign aimed at distorting events in the Indonesian province of Papua.
6. Writing for Poynter.org, Josie Hollingsworth of PolitiFact reported on how Spanish fact-checkers are preparing for the fourth national election in four years.
7. Witness Media Lab held a workshop on deepfake videos in Brazil. Here are some takeaways.
8. Could hiding likes on Facebook reduce the spread of misinformation? Research suggests it’s possible.
9. Nearly two months after Daniel reported on how mass shooting threats were spreading in private messaging apps, Reuters wrote about how a similar rumor sparked fear in an Indiana community.
10. Daniela Flamini of Poynter wrote about what misinformation researchers may find in the 32 million URLs Facebook has shared with them.
Correction from last week: RealClear Media (not RealClearPolitics) runs a Facebook page that publishes far-right memes and Islamophobic smears. The former is the company that owns the latter. We regret the error.
via Daniel, Susan and Cristina