Mahi Sall, Advisor, Fintech-Bank Partnerships, Payments and Financial Inclusivity
January 25th, 2023
Facebook announced late Monday that it would ban “deepfakes,” which are AI-manipulated videos that distort reality, often simulating real people in fake situations.
The social media giant announced the changes in a company executive blog post, saying it will remove deepfakes and other types of heavily manipulated media from its platform.
Specifically, the company laid out two main criteria for removing content under the new rules.
The first is that the company will remove content posted on Facebook if it has been edited in ways that would “likely mislead someone into thinking a subject of the video said words that they did not actually say,” according to the post written by Monika Bickert, Facebook’s vice president of global policy management. Second, the platform will ban media if it’s the product of AI or machine learning that “merges, replaces, or superimposes content onto a video, making it appear to be authentic.”
Facebook came under fire last year for allowing a manipulated video of Speaker Nancy Pelosi in which her speech was altered to sound slurred, making it appear as though she was drunk. At the time, Facebook said the video went through its fact-checking process, which does not require content to be true to be allowed on the platform. The company said it displayed a note with additional context about the video, telling users that it was false.
Under its new rules, a Facebook spokesperson told Recode the company still would not take down the Pelosi video, saying that it does not meet the standards of the new policy.
“Only videos generated by artificial intelligence to depict people saying fictional things will be taken down. Edited or clipped videos will continue to be subject to our fact-checking program. In the case of the Pelosi video, once it was rated false, we reduced its distribution,” the spokesperson told Recode.
Whether videos are deepfakes or not, they’re all subject to Facebook’s fact-checking system. If content is proven to be false, it can be flagged with a note labeling the content as such, and Facebook will deprioritize it in its News Feed.
In an email, Omer Ben-Ami, the co-founder of Canny AI (the Israeli advertising startup that last year helped artists produce a viral deepfake of Zuckerberg on Instagram, which Facebook opted to keep up) said Facebook’s new policy seemed “reasonable.” However, he cautioned that his company and others, “use this technology for legitimate reasons, mainly for personalization and localization of content.”
He said it was unclear why the policy only applies to content manipulated by artificial intelligence.
Overall, there are some exceptions to Facebook’s new rules: They don’t apply to videos that are parody or satire, nor do they ban videos edited “solely to omit or change the order of words” someone is saying.
The National Crowdfunding & Fintech Association (NCFA Canada) is a financial innovation ecosystem that provides education, market intelligence, industry stewardship, networking and funding opportunities and services to thousands of community members and works closely with industry, government, partners and affiliates to create a vibrant and innovative fintech and funding industry in Canada. Decentralized and distributed, NCFA is engaged with global stakeholders and helps incubate projects and investment in fintech, alternative finance, crowdfunding, peer-to-peer finance, payments, digital assets and tokens, blockchain, cryptocurrency, regtech, and insurtech sectors. Join Canada's Fintech & Funding Community today FREE! Or become a contributing member and get perks. For more information, please visit: www.ncfacanada.org
Support NCFA by following us on Twitter: @NCFACanada