Combating Digital Deception In 2024 And Beyond

Steven Smith, Head of Protocol at Tools For Humanity.

You might call 2024 the “year of the deepfake.” News cycles have been flooded with scams involving AI-manipulated videos, sounds and images, from the rise of fake celebrity endorsements to the dissemination of fake information. What was once a niche novelty has exploded into a pervasive and alarming threat.

As generative AI continues to accelerate, distinguishing between real and fake content online is becoming nearly impossible. The more these technologies advance, the harder deception will be to identify, posing serious risks to individuals, businesses and societies.

To safeguard against these growing risks and preserve public trust, we urgently need solutions to keep real internet users and companies safe, reduce fraud and stop the spread of misinformation.

A Growing Threat To Online Trust

According to the FBI, nearly 40% of online scam victims in 2023 were targeted with deepfake content. A 2024 Medius study found that over half of finance professionals in the U.S. and U.K. have been targets of a deepfake-powered financial scam, with 43% falling victim to such attacks. The cryptocurrency sector has been particularly affected, with deepfake-related incidents increasing by 654% from 2023 to 2024.

Traditional finance is no exception. In the summer of 2024, New York Attorney General Letitia James warned about investment scams using deepfake videos of celebrities like Warren Buffett and Elon Musk to lure investors. Similarly, a Hong Kong finance worker was swindled out of $25 million after deepfakes of a “chief financial officer” and other employees convinced him to transfer funds.

In a world where more aspects of our daily lives are moving online—from work meetings to telehealth appointments to banking and financial planning—the ability to trust that the people we interact with are who they claim to be has never been more crucial. The stakes are simply too high.

Current Approaches To The Deepfake Problem

Currently, the regulatory space surrounding deepfakes is fragmented at best. In the U.S., there is no federal law that comprehensively addresses the creation, dissemination and use of deepfakes. While some states like Florida, Texas and Washington have enacted their own legislation and Congress is currently considering regulations, these measures are still in their early stages.

Beyond regulations, a growing number of technological defenses are entering the market to help tackle this challenge. Google DeepMind recently made its AI text watermark tool open source, meaning anyone can use it. However, it’s not foolproof; it primarily identifies AI-generated text but does not yet extend to audio or video manipulations.

Facebook and Instagram are testing new facial recognition tools to quickly restore compromised accounts and identify fake celebrity endorsements. While promising, these efforts are still in the pilot phase and limited in scope. They can help spot deepfakes in certain situations—including synthetic media involving user-generated content, videos, livestreams and cross-platform sharing—but they do not offer a holistic solution.

McAfee has also launched a tool that helps users identify whether audio in videos on platforms like YouTube or X (formerly Twitter) is real or not. Similarly, a Google Chrome extension from Hiya uses AI to determine if the voice in on-screen video or audio is legitimate or fake. While these tools can be useful for detecting some audio-based deepfakes, they still only address a narrow subset of the problem. AI-manipulated videos and images, which make up a major portion of deepfakes, can sneak by undetected.

We need more advanced and ubiquitous tools to tackle this issue effectively.

Filling The Gap In Deepfake Detection Tools

We need more advanced solutions capable of quickly and accurately detecting deepfakes, especially as they become more sophisticated. These tools must be integrated into social media platforms, video hosting sites and financial systems to protect both consumers and businesses.

Governments, tech companies, financial institutions and law enforcement must also work together more effectively to combat deepfake fraud. This means creating standardized strategies and protocols for deepfake detection, sharing best practices and building stronger partnerships to mitigate the risks associated with this technology.

What’s Next?

Deepfakes represent one of the most significant threats to digital security and public trust today. However, no single industry can address this challenge alone. With the growing sophistication and reach of these technologies, urgent action and investment are needed from both the private and public sectors—including governments, tech companies and consumers.

One thing is certain: Proving the authenticity of individuals online in a way that is both privacy-preserving and accessible will be crucial for safeguarding all internet users. Without this ability, the digital landscape will remain increasingly vulnerable to manipulation and deceit.

The time to act is now, before this problem becomes insurmountable.