Five global technology companies have pledged to limit terrorist material online.
Facebook, Amazon, Google, Twitter and Microsoft will develop shared tools to detect and remove terrorist or extremist content among other measures.
The pledge was made at a Paris summit, which was called after the terror attack in Christchurch, New Zealand left 51 people dead.
The March attack was live-streamed on social media.
French President Emmanuel Macron hosted the event with New Zealand Prime Minister Jacinda Ardern, who has been urging technology executives to sign the pledge, known as the “Christchurch Call”.
What was pledged?
The companies will develop crisis protocols to respond to emerging or active events, such as a terror attack.
The companies said they would also commit to publishing “transparency reports” on the detection and removal of terror or violent extremist content.
Before the event, Facebook announced curbs on its live-streaming feature.
The tech giant said there would be a “one-strike policy” banning those who violate new Facebook Live rules.
In a statement, Facebook said that anyone sharing “violating content”, such as a statement from a terrorist group posted without context, would be blocked from using Facebook Live for a set period, such as 30 days.
Ms Ardern called the measures a “good first step”.