Date: 2021-02-21
From: Ruben Safir
Subject: [Hangout - NYLXS] We are on the tipping point...
wsj.com
Online Speech Is Now an Existential Question for Tech
Christopher Mims
Every public communication platform you can name—from Facebook, Twitter
and YouTube to Parler, Pinterest and Discord—is wrestling
with the same two questions:
How do we make sure we’re not facilitating misinformation, violence,
fraud or hate speech? And how do we do that without over-moderating and
censoring legitimate speech?
The more they moderate content, the more criticism they experience from
those who think they’re over-moderating. At the same time, any statement
on a fresh round of moderation provokes some to point out objectionable
content that remains. Like any question of editorial or legal judgment,
the results are guaranteed to displease someone, somewhere—including
Congress, which this week called the chief executives of Facebook,
Google and Twitter to a hearing on March 25 to discuss misinformation on
their platforms.
For many services, this has gone beyond a matter of user experience, or
growth rates, or even ad revenue. It’s become an existential crisis.
While dialing up moderation won’t solve all of a platform’s problems, a
look at the current winners and losers suggests that not moderating
enough is a recipe for extinction.
Facebook is currently wrestling with whether it will continue its ban of
former president Donald Trump. Pew Research says 78% of Republicans
opposed the ban, which has contributed to the view of many in Congress
that Facebook’s censorship of conservative speech justifies breaking up
the company—something a decade of privacy scandals couldn’t do.
Parler, a haven for right-wing users who feel alienated by mainstream
social media, was taken down by its cloud service provider, Amazon Web
Services, after some of its users live-streamed the riot at the U.S.
Capitol on Jan. 6. Amazon cited Parler’s apparent inability to police
content that incites violence. While Parler is back online with a new
service provider, it’s unclear if it has the infrastructure to serve a
large audience.
During the weeks Parler was offline, the company implemented algorithmic
filtering for a few content types, including threats and incitement,
says a company spokesman. The company also has an automatic filter for
“trolling” that detects such content, but it’s up to users whether to
turn it on or not. In addition, those who choose to troll on Parler are
not penalized in Parler’s algorithms for doing so, “in the spirit of the
First Amendment,” say the company’s guidelines for enforcement of its
content moderation policies. Parler recently fired its CEO, who said he
experienced resistance to his vision for the service, including how it
should be moderated.
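The opt-in arrangement described above amounts to central labeling with
a per-user display choice. Here is a minimal sketch of that pattern in
Python (all names are invented for illustration; this is not Parler’s
implementation):

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        labeled_trolling: bool  # set centrally by the platform's detector

    def visible_posts(posts, filter_on):
        """Hide labeled posts only for users who enabled the filter."""
        if not filter_on:
            return posts
        return [p for p in posts if not p.labeled_trolling]

    feed = [Post("hello", False), Post("you are wrong!!!", True)]
    print(len(visible_posts(feed, filter_on=True)))   # 1
    print(len(visible_posts(feed, filter_on=False)))  # 2

The platform still classifies every post; the difference is that the
consequence of the label is left to each reader.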
[Photo: A scene from the riot at the U.S. Capitol on Jan. 6. Some users
of Parler live-streamed the event. Credit: Olivier Douliery/AFP/Getty
Images]
Now, just about every site that hosts user-generated content is
carefully weighing the costs and benefits of updating its content
moderation systems, using a mix of human professionals, algorithms and
users. Some are even building rules into their services to pre-empt the
need for increasingly costly moderation.
The saga of gaming-focused messaging app Discord is instructive: In
2017, the service, which is aimed at children and young adults, was one
of those used to plan the Charlottesville riots. A year later, the site
was still taking what appeared to be a deliberately laissez-faire
approach to content moderation.
By this January, however, spurred by reports of hate speech and lurking
child predators, Discord had done a complete 180. It now has a team of
machine-learning engineers building systems to scan the service for
unacceptable uses, and has assigned 15% of its overall staff to trust
and safety issues.
This newfound attention to content moderation helped keep Discord away
from the controversy surrounding the Capitol riot, and caused it to
briefly ban a chat group associated with WallStreetBets during the
GameStop stock run-up. Discord’s valuation doubled to $7 billion over
roughly the same period, a sign that investors have confidence in its
moderation strategy.
The prevalence problem
The challenge successful platforms face is moderating content “at
scale,” across millions or billions of pieces of shared content.
Before any action can be taken, services must decide what should be
taken down, an often slow and deliberative process.
Imagine, for example, that a grass-roots movement gains momentum in a
country, and begins espousing extreme and potentially dangerous ideas on
social media. While some language might be caught by algorithms
immediately, a decision about whether discussion of a particular
movement, like QAnon, should be banned completely could take months on a
service such as YouTube, says a Google spokesman.
One reason it can take so long is the global nature of these platforms.
Google’s policy team might consult with experts in order to consider
regional sensitivities before making a decision. After a policy decision
is made, the platform has to train AI and write rules for human
moderators to enforce it—then make sure both are carrying out the
policies as intended, he adds.
While AI systems can be trained to catch individual pieces of
problematic content, they’re often blind to the broader meaning of a
body of posts, says Tracy Chou, founder of content-moderation startup
Block Party and former tech lead at Pinterest.
Take the case of the “Stop the Steal” protest, which led to the deadly
attack on the U.S. Capitol. Individual messages used to plan the attack,
like “Let’s meet at location X,” would probably look innocent to a
machine-learning system, says Ms. Chou, but “the context is what’s key.”
Facebook banned all content mentioning “Stop the Steal” after the riot.
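To make Ms. Chou’s point concrete, here is a toy sketch in Python of why
a per-message classifier passes messages that group context would catch.
The term list and group metadata are invented for illustration; no
platform works exactly this way:

    FLAGGED_TERMS = {"attack", "weapons", "storm"}  # invented example list

    def flag_message(text):
        """Flag a single message if it contains a known bad term."""
        words = {w.strip(".,!?").lower() for w in text.split()}
        return bool(words & FLAGGED_TERMS)

    messages = [
        "Let's meet at location X at noon.",  # innocent in isolation
        "Bring everyone you can.",
        "Gate 3 is unguarded.",
    ]
    print([flag_message(m) for m in messages])  # [False, False, False]

    # The same messages, with group-level context a policy team has acted on:
    BANNED_TAGS = {"#StopTheSteal"}  # decided slowly, at the policy level

    def flag_in_context(text, group_tags):
        return flag_message(text) or bool(group_tags & BANNED_TAGS)

    print([flag_in_context(m, {"#StopTheSteal"}) for m in messages])
    # [True, True, True] -- context, not wording, is what changes the call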
Even after Facebook has identified a particular type of content as
harmful, why does it seem constitutionally unable to keep it off its
platform?
It’s the “prevalence problem.” On a truly gigantic service, even if only
a tiny fraction of content is problematic, it can still reach millions
of people. Facebook has started publishing a quarterly report on its
community standards enforcement. During the last quarter of 2020,
Facebook says users saw seven or eight pieces of hate speech out of
every 10,000 views of content. That’s down from 10 or 11 pieces the
previous quarter. The company said it will begin allowing third-party
audits of these claims this year.
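Those figures are a rate, and scale is what makes the rate alarming. A
back-of-the-envelope calculation (the daily view count below is an
invented round number, not a Facebook statistic):

    # The reported rate: 7-8 views of hate speech per 10,000 content views.
    rate = 7.5 / 10_000              # midpoint, as a fraction of all views
    daily_views = 1_000_000_000      # invented round number for illustration

    print(f"{daily_views * rate:,.0f} hate-speech views per day at that rate")
    # -> 750,000 -- a "tiny fraction" of a gigantic service is still huge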
While Facebook has been leaning heavily on AI to moderate content,
especially during the pandemic, it currently has about 15,000 human
moderators. And since every new moderator comes with a fixed additional
cost, the company has been seeking more efficient ways for its AI and
existing humans to work together.
In the past, human moderators reviewed content flagged by machine
learning algorithms in more or less chronological order. Content is now
sorted by a number of factors, including how quickly it’s spreading on
the site, says a Facebook spokesman. If the goal is to reduce the number
of times people see harmful content, the most viral stuff should be top
priority.
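In other words, the review queue becomes a priority queue keyed on
predicted reach rather than age. A minimal sketch of that scheduling
idea (illustrative numbers; not Facebook’s code):

    import heapq

    # (spread rate in views/hour, post id); heapq is a min-heap, so we
    # negate the rate to pop the fastest-spreading post first.
    flagged = [(120.0, "post-a"), (50_000.0, "post-b"), (900.0, "post-c")]

    queue = []
    for views_per_hour, post_id in flagged:
        heapq.heappush(queue, (-views_per_hour, post_id))

    while queue:
        neg_rate, post_id = heapq.heappop(queue)
        print(f"review {post_id}: {-neg_rate:,.0f} views/hour")
    # post-b, then post-c, then post-a: the most viral content gets human
    # eyes first, minimizing views of harmful posts before a decision.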
A content moderator in every pot
Companies that aren’t Facebook or Google often lack the resources to
field their own teams of moderators and machine-learning engineers. They
have to consider what fits their budget, which can mean outsourcing the
technical parts of content moderation to companies such as San
Francisco-based startup Spectrum Labs.
Through its cloud-based service, Spectrum Labs shares insights it
gathers from any one of its clients with all of them—which include
Pinterest and Riot Games, maker of League of Legends—in order to filter
everything from bad words and human trafficking to hate speech and
harassment, says CEO Justin Davis.
Mr. Davis says Spectrum Labs doesn’t say what clients should and
shouldn’t ban. Beyond illegal content, every company decides for itself
what it deems acceptable, he adds.
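For a small platform, such an integration might look something like the
following sketch. The endpoint, fields and labels here are invented for
illustration and are not Spectrum Labs’ actual API:

    import requests  # third-party HTTP library

    MODERATION_URL = "https://moderation.example.com/v1/classify"  # invented

    def check_content(text, api_key):
        """Send user content to a hosted classifier; return label scores."""
        resp = requests.post(
            MODERATION_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            json={"text": text, "labels": ["hate_speech", "harassment"]},
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. {"hate_speech": 0.02, "harassment": 0.91}

    # The vendor scores; the platform sets policy. Each client picks its
    # own thresholds for what gets hidden or queued for human review:
    scores = {"hate_speech": 0.02, "harassment": 0.91}  # sample response
    if any(score > 0.8 for score in scores.values()):
        print("queue for human review")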
Pinterest, for example, has a mission rooted in “inspiration,” which
helps it take a clear stance in prohibiting harmful or objectionable
content that violates its policies or doesn’t fit its mission, says a
company spokeswoman.
Services are also attempting to reduce the content-moderation load by
reducing the incentives or opportunity for bad behavior. Pinterest, for
example, has from its earliest days minimized the size and significance
of comments, says Ms. Chou, the former Pinterest engineer, in part by
putting them in a smaller typeface and making them harder to find. This
made comments less appealing to trolls and spammers, she adds.
The dating app Bumble only allows women to reach out to men. Flipping
the script of a typical dating app has arguably made Bumble more
welcoming for women, says Mr. Davis, of Spectrum Labs. Bumble has other
features designed to pre-emptively reduce or eliminate harassment, says
Chief Product Officer Miles Norris, including a “super block” feature
that builds a comprehensive digital dossier on banned users. This means
that if, for example, banned users attempt to create a new account with
a fresh email address, they can be detected and blocked based on other
identifying features.
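A hedged sketch of the idea behind such a feature: key the ban to hashed
identifying signals rather than to the account itself, so a fresh email
address alone doesn’t evade it. The signal names below are assumptions
for illustration, not Bumble’s implementation:

    import hashlib

    def fingerprint(signals):
        """Hash each identifying signal (device id, phone, photo hash...)."""
        return {
            hashlib.sha256(f"{k}:{v}".encode()).hexdigest()
            for k, v in signals.items() if v
        }

    banned_prints = set()

    def ban(signals):
        banned_prints.update(fingerprint(signals))

    def looks_like_banned_user(signals):
        # Any overlap with a banned user's stored signals triggers review.
        return bool(fingerprint(signals) & banned_prints)

    ban({"device_id": "abc123", "phone": "+15550100", "email": "x@a.com"})

    # Same device, new email address: still detected.
    print(looks_like_banned_user({"device_id": "abc123", "email": "y@b.com"}))
    # -> True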
The ‘supreme court of content’
Facebook CEO Mark Zuckerberg recently described Facebook as something
between a newspaper and a telecommunications company. For it to continue
being a global town square, it doesn’t have the luxury of narrowly
defining the kinds of content and interactions it will allow. For its
toughest content moderation decisions, it has created a higher power—a
financially independent “oversight board” that includes a retired U.S.
federal judge, a former prime minister of Denmark and a Nobel Peace
Prize laureate.
In its first decision, the board overturned four of the five bans
Facebook brought before it.
Facebook has said that it intends the decisions made by its “supreme
court of content” to become part of how it makes everyday decisions
about what to allow on the site. That is, even though the board will
make only a handful of decisions a year, these rulings will also apply
when the same content is shared in a similar way. Even with that
mechanism in place, it’s hard to imagine the board getting to more than
a tiny fraction of the situations that content moderators and their AI
assistants must rule on every day.
But the oversight board might accomplish the goal of shifting the blame
for Facebook’s most momentous moderation decisions. For example, if the
board rules to reinstate the account of former president Trump, Facebook
could deflect criticism by noting that the decision was made
independently of the company’s internal politics.
Meanwhile, Parler is back up, but it’s still banned from the Apple and
Google app stores. Without those essential routes to users—and without
web services as reliable as its former provider, Amazon—it seems
unlikely that Parler can grow anywhere close to the rate it otherwise
might have. It’s not clear yet whether Parler’s new content filtering
algorithms will satisfy Google and Apple. How the company balances its
enhanced moderation with its stated mission of being a “viewpoint
neutral” service will determine whether it grows to be a viable
alternative to Twitter and Facebook or remains a shadow of what it could
be with such moderation.
Write to Christopher Mims at christopher.mims-at-wsj.com
Copyright ©2020 Dow Jones & Company, Inc. All Rights Reserved.
--
So many immigrant groups have swept through our town
that Brooklyn, like Atlantis, reaches mythological
proportions in the mind of the world - RI Safir 1998
http://www.mrbrklyn.com
DRM is THEFT - We are the STAKEHOLDERS - RI Safir 2002
http://www.nylxs.com - Leadership Development in Free Software
http://www.brooklyn-living.com
Being so tracked is for FARM ANIMALS and extermination camps,
but incompatible with living as a free human being. -RI Safir 2013
_______________________________________________
Hangout mailing list
Hangout-at-nylxs.com
http://lists.mrbrklyn.com/mailman/listinfo/hangout