Facebook, Twitter, and
YouTube were pressed in Congress Wednesday over their reliance on artificial
intelligence and algorithms to keep their powerful platforms clear of violent
extremist posts.
In a Senate Commerce Committee hearing,
executives of the world’s top social media companies were praised for their
efforts so far to eliminate Islamic State, Al-Qaeda and other jihadist content
from the internet.
But critics say that extremist groups continue
to get their propaganda out to followers via those platforms, and call for
tougher action.
Another concern is that the continued ability to use anonymous accounts, while
benefiting pro-democracy activists battling repressive governments, will also
continue to empower extremists.
The companies' current efforts to remove
content, and to cooperate with one another in doing so, are strong but “not
enough,” Senator Bill Nelson said.
YouTube's algorithms automatically remove 98 percent
of videos promoting violent extremism, said Public Policy
Director Juniper Downs.
But Senator John Thune, chairman of the
Commerce Committee, asked Downs why a video
showing the man who bombed the Manchester Arena in June 2017 how to build
his bomb has been repeatedly re-uploaded each time YouTube deletes
it, as recently as this month.
Carlos Monje, director of Public Policy and
Philanthropy for Twitter, said that even with all their efforts to fight
terror- and hate-related content, “It is a cat-and-mouse game and we are
constantly evolving to face the challenge.”
“Social media companies continue to get beat
in part because they rely too heavily on technologists and technical detection
to catch bad actors,” said Clint Watts of the Foreign Policy
Research Institute, an expert on terror groups' use of the internet.
Last year Google, Facebook, Twitter and
Microsoft banded together to share information on groups and posts related to
violent extremism, to help keep it off their sites.