Soon after the Christchurch terrorist attacks in New Zealand, during which the attacker streamed a live video of the shooting on Facebook, the social media giant has announced that it is tightening the rules for Live videos.
Before the new rules, if someone posted content that violated Facebook’s Community Standards — on Live or elsewhere — Facebook took down the post. If they continued to violate the standards, they were blocked from using Facebook for a set period of time, which also removed their ability to broadcast Live. In some cases, Facebook banned them from its services altogether, either because of repeated low-level violations or, in rare cases, because of a single egregious violation (for instance, using terror propaganda in a profile picture or sharing images of child exploitation).
The company today announced a ‘one strike’ policy for Live covering a broader range of offenses. From now on, anyone who violates its most serious policies will be restricted from using Live for a set period of time — for example, 30 days — starting with their first offense. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.
Tackling these threats also requires technical innovation to stay ahead of the kind of adversarial media manipulation that came to light after the Christchurch shooting, when some people modified the video to avoid detection and repost it after it had been taken down. To improve its image and video analysis technology, Facebook is also investing $7.5 million in new research partnerships with leading academics from three universities: the University of Maryland, Cornell University, and the University of California, Berkeley.
The universities and Facebook will work together to detect manipulated media across images, video, and audio, and to distinguish between unwitting posters and adversaries who intentionally manipulate videos and photographs.
According to Facebook, this collaboration will be critical for the company’s broader efforts against manipulated media, including deepfakes (videos intentionally manipulated to depict events that never occurred).