Facebook recently shared a series of updates on its progress in combating white
supremacists, violent extremists, hate organizations, and terrorist groups on
its platform. As part of this effort, Facebook is working with the UK's
Metropolitan Police to detect the live streaming of terrorist or violent videos
on the platform, which could help alert police well before an attack actually
takes place.

Facebook says it's working with government and law enforcement officials in the
US and the UK to train its computer vision algorithms on footage from firearms
training programs in the future. According to the Financial Times
[https://www.ft.com/content/40a5cd30-d961-11e9-8f9b-77216ebe1f17], Facebook will
also provide body cameras to the UK's Metropolitan Police for free and will in
turn have access to video footage shared with the UK Home Office.

"Some of these changes predate the tragic terrorist attack in Christchurch, New
Zealand, but that attack, and the global response to it in the form of the
Christchurch Call to Action, has strongly influenced the recent updates to our
policies and their enforcement," mentioned Facebook in a blog post
[https://newsroom.fb.com/news/2019/09/combating-hate-and-extremism/]. The
changes largely affect Facebook's Dangerous Individuals and Organizations
Policy, which is designed to keep people safe and prevent real-world harm.
Facebook says the Christchurch attack highlighted where it needed to improve its
detection and enforcement against violent extremist content.

Facebook also co-developed a nine-point industry plan in partnership with
Microsoft, Twitter, Google, and Amazon. This plan highlights the steps these
companies are taking to stop the abuse of their technologies and the spread of
terrorist content on their platforms. Moreover, with the help of its automated
techniques, Facebook has identified a wide range of groups as terrorist
organizations based on their behavior, not just their ideologies, and does not
allow these groups to operate on its platform. The company's earlier techniques
mainly focused on terrorist groups like ISIS and Al-Qaeda, and it managed to
remove over 26 million pieces of content related to these groups. After
expanding these techniques to a wider range of dangerous organizations, Facebook
has banned over 200 white supremacist organizations from its platform, using a
combination of AI and human expertise.

Facebook has also expanded its team, which now includes 350 people with
expertise in law enforcement, national security, counter-terrorism intelligence,
and more. "We are committed to being transparent about our efforts to combat
hate... we know that bad actors will continue to attempt to skirt our detection
with more sophisticated efforts and we are committed to advancing our work and
sharing our progress," states Facebook in the blog post.
