Facebook (FB - Free Report) recently announced it will ban misleading manipulated media, including edited images and deepfakes, ahead of the U.S. presidential elections in 2020.
Monika Bickert, Facebook’s vice president of global policy management, announced the policy ahead of her appearance on Facebook’s behalf at a House Energy and Commerce hearing on manipulated media scheduled for Jan 8.
Facebook’s Criteria for Deepfake Detection
Facebook’s approach to manipulated videos is critical to its efforts to reduce misinformation on the social network.
The company’s new policy will prohibit videos that have been edited or synthesized, beyond adjustments for clarity or quality, in ways that are not easily identifiable as fake.
Additionally, the new standards call for manipulated videos to be removed from the platform if they have been altered using AI or machine-learning techniques, rather than Photoshop or another standard video-editing program, in a way that makes them appear authentic.
Moreover, such videos including graphic violence, voter suppression and hate speech, which violate community standards laid down by Facebook, will be removed.
However, the new policy will not extend to videos edited for satire or parody, or to omit or change the order of words.
Notably, the deepfakes ban comes after Facebook was heavily criticized last year for refusing to remove an altered video of House Speaker Nancy Pelosi. The video wasn’t created with AI but was likely edited using readily available software to make her speech appear slurred.
Other platforms were also caught in the crossfire following the Pelosi video, including Twitter (TWTR - Free Report). In November 2019, Twitter began crafting its own deepfakes policy and requested feedback from users on the platform’s future rules. The company has yet to issue any new guidance on manipulated media.
Facebook’s Initiatives for 2020 Against Deepfakes
Facebook’s policy comes shortly after President Donald Trump signed the historic $738 billion National Defense Authorization Act (NDAA) for fiscal year 2020 on Dec 20, 2019. The NDAA contains provisions to foster research on deepfake detection technologies, in addition to comprehensive provisions addressing the threat deepfakes pose to national security.
Per the announcement, Facebook has collaborated with more than 50 global experts with technical, policy, media, legal, civic and academic backgrounds, along with fact checkers working in over 40 languages, to help detect deepfakes.
In September 2019, Facebook announced the Deepfake Detection Challenge (DFDC) to spur more research and open source tools for detecting deepfakes.
In the same month, Alphabet’s (GOOGL - Free Report) Google also released a large, free dataset of deepfake videos to help researchers develop new methods for detecting fake content.
Notably, Facebook also joined Amazon (AMZN - Free Report) and Microsoft to help researchers develop tools for better detection. These partners pledged $10 million and released 5,000 videos to help developers.
Amazon’s cloud computing division, Amazon Web Services (AWS), is working with DFDC partners to explore hosting large deepfake-detection datasets on its cloud service using Amazon S3’s scalable infrastructure.
Moreover, Facebook has also partnered with Reuters news agency to help newsrooms identify deepfakes and manipulated media through a free online training course.
However, the effectiveness of such policies and measures in curbing misinformation and manipulated media remains uncertain, given the sheer volume of posts uploaded to the platform each day, which is difficult to police.
Facebook has a Zacks Rank #4 (Sell).