Why Facebook Did Not Recognize the Christchurch Terror Video

Terror Attack in New Zealand

The artificial intelligence would need to be fed more data, the social network says in its defense.

Facebook’s artificial-intelligence software, which is supposed to detect violence in live streams on the platform, did not react to the video of the Christchurch massacre. “To do that, we need to supply our systems with large amounts of data of precisely this kind of content,” the online network said on Thursday.

That is difficult, “because such events are thankfully rare,” it said. Another challenge for the software is distinguishing real violence from broadcasts of video game footage. “For example, if our systems flagged thousands of hours of live-streamed video games, our reviewers could miss the important real-world videos” where Facebook could alert first responders.
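To illustrate the trade-off Facebook describes, here is a back-of-envelope calculation with purely hypothetical numbers (the company has published no figures on its streaming volume or error rates):

```python
# Hypothetical illustration of why even a small false-alarm rate can
# overwhelm human reviewers. None of these numbers come from Facebook;
# they are assumptions chosen for the sake of the example.
daily_gaming_stream_hours = 1_000_000  # assumed live gaming volume per day
false_positive_rate = 0.005            # assumed 0.5% wrongly flagged as violence

flagged_hours = daily_gaming_stream_hours * false_positive_rate
print(f"Hours of harmless footage flagged per day: {flagged_hours:,.0f}")
# -> 5,000 hours a day for reviewers to sift through, in which a
#    single genuine attack video can easily be missed.
```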

The attacker, who killed 50 people in attacks on two mosques in Christchurch, New Zealand, last Friday, broadcast the assault in real time via the Facebook Live service. The company reiterated earlier figures: the 17-minute live stream was seen by fewer than 200 users, and the first user report reached the online network twelve minutes after the broadcast ended. After a live stream ends, a recording remains available.

It remains unclear how long the attacker’s original video was online before Facebook removed it. The online network explained that the report would have been processed faster if someone had flagged the video during the live stream itself. The original video was viewed around 4,000 times in total; it later served as the source for copies that several users uploaded to other services.

While Facebook’s software blocked 1.2 million attempts to re-upload the video in the first 24 hours, about 300,000 uploads still slipped through. Among other things, this was because more than 800 modified versions of the video were in circulation.
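The modified copies point to a general weakness of upload filters: an exact (cryptographic) hash changes completely after any re-encoding, cropping, or watermarking, so every altered version looks like a brand-new file. A minimal sketch of perceptual hashing shows the class of technique that tolerates such changes; this is an illustrative assumption about the general approach, not Facebook’s actual matching system:

```python
# A minimal sketch of perceptual (average) hashing, the general class of
# technique used to find near-duplicate images and video frames. This is
# an illustrative assumption, not Facebook's actual matching system.
from PIL import Image

def average_hash(image_path: str, hash_size: int = 8) -> str:
    """Shrink the frame to a tiny grayscale thumbnail and threshold each
    pixel against the mean brightness, yielding a 64-bit fingerprint."""
    img = Image.open(image_path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)

def hamming_distance(h1: str, h2: str) -> int:
    """Count differing bits; small distances indicate near-duplicates."""
    return sum(a != b for a, b in zip(h1, h2))

# Frames from a cropped or re-encoded copy still hash only a few bits
# apart, so a threshold (e.g. distance <= 10) catches them, whereas an
# exact hash such as SHA-256 would let the modified copy through.
```

Even with such fuzzy matching, heavily edited versions can fall outside the similarity threshold, which is consistent with hundreds of thousands of copies initially getting past the filter.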