The Dark Side Of Deepfakes: Unveiling The Threats To Society
Professor Danielle Citron was alarmed when she first came across the deepfakes published on Reddit. She feared they could be used to destabilize democracy, and worried that a potentially dangerous genie had escaped the bottle.
The technology uses neural networks to create high-quality fake media that is difficult to distinguish from genuine recordings; it can swap faces or simulate a person's speech.
1. Cybercrime
Deepfakes can be used for fraud, revenge, or manipulation, particularly to influence or misinform a person's political views. Like other cybercriminal activities such as extortion, these attacks may be carried out by individuals or by organized groups.
A fake video showing a CEO criticizing their own firm could cause its stock to drop. Social engineering scams use the same technique: deepfakes can impersonate people in order to gain access to sensitive data or funds, as in the case of a U.K. energy firm that was defrauded by a voice-phishing scam impersonating its chief executive.
Cybercriminals also use the technology to cover up their crimes. Because convincing fakes exist, offenders can claim plausible deniability and dismiss genuine evidence of their actions as fabricated, however convincing it may be. The underlying crimes range from distributing child sexual abuse material and facilitating human trafficking to online stalking, the theft and sale of data, and denial-of-service attacks on websites.
2. Disinformation
Misinformation is nothing new, but disinformation is an even more serious threat: false information spread deliberately to mislead, or to promote an agenda or point of view. It can also be used to cover up malicious activity.
It can take almost any form, from a fabricated news article to a video showing a person doing something they never did, and it can be delivered as audio, video, or text messages.
For example, a text message that appears to come from a trusted contact and reads like a distress call could be used to trick a bank into releasing funds, and an audio recording of someone's voice can be used to steal their identity.
Perhaps the most personal threat is deepfake revenge porn, in which victims are placed in humiliating or compromising situations. Research shows that women are especially vulnerable to this form of sexual exploitation, but anyone can be targeted. Experts predict that DIY deepfake tools will become so sophisticated that it will be easy for anyone to produce realistic fake videos of anyone else.
3. Political Influence
The term deepfake refers both to the technology and to the fake videos, images, and audio it produces. The underlying artificial intelligence has already been put to malicious use, from revenge porn to other harmful ends.
In the past, altering video convincingly required time and skill, putting it out of reach of all but the most skilled and well-funded actors. Today, anyone with a computer and access to an online tutorial can create a convincing fake.
Such fakes can be used to harass, discredit, intimidate, or destabilize individuals, governments, and corporations. They could even trigger international incidents, such as a fabricated video of a world leader ordering a nuclear strike. A more immediate risk is voice fraud, in which criminals clone a person's voice to trick victims into handing over money or passwords over the telephone. In one such scam, a fraudster posing as a chief executive contacted the CEO of an energy company and asked him to transfer millions of euros to an offshore bank account.
4. Scams
Deepfakes enable deception on an enormous scale. They can be used to blackmail or embarrass celebrities, to extort victims, or to move stock prices with fabricated stories or fake videos of CEOs making damaging remarks.
Fake photos and videos can be created by altering existing material or by using AI software that makes people appear to do or say things they never did. Footage can also be manipulated or presented out of context, as with the doctored video of Nancy Pelosi that was edited to make her appear drunk.
Researchers are working on software to detect deepfakes and warn people before they believe them. Reality Defender, for example, works like a hybrid of antivirus and spam filtering: it scans incoming media, flags suspicious manipulations, and quarantines them, much as Gmail filters out spam. The company Deeptrace is building a similar detection tool that aims to vet content in the same way messages are authenticated before they are delivered.
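To make the spam-filter analogy concrete, here is a minimal sketch, not the actual code of Reality Defender or Deeptrace, of how such a pipeline could work. A hypothetical `score_media` function stands in for whatever trained detection model a real tool would use; anything scoring above a threshold is moved to a quarantine folder for human review rather than being delivered.

```python
import shutil
from pathlib import Path

# Hypothetical detection threshold: scores above this are treated as likely fakes.
FAKE_THRESHOLD = 0.8

def score_media(path: Path) -> float:
    """Stand-in for a real deepfake detector.

    A production tool would run a trained classifier over video frames or
    audio segments and return a manipulation probability between 0 and 1.
    As a placeholder, this simply flags files whose name hints they were edited.
    """
    return 0.9 if "edited" in path.stem.lower() else 0.1

def filter_incoming(inbox: Path, quarantine: Path) -> None:
    """Scan incoming media and quarantine anything that looks manipulated."""
    quarantine.mkdir(parents=True, exist_ok=True)
    for item in inbox.iterdir():
        if not item.is_file():
            continue
        score = score_media(item)
        if score >= FAKE_THRESHOLD:
            # Move the suspicious file aside instead of deleting it,
            # so a human can review the detector's decision.
            shutil.move(str(item), str(quarantine / item.name))
            print(f"quarantined {item.name} (score {score:.2f})")
        else:
            print(f"delivered {item.name} (score {score:.2f})")

if __name__ == "__main__":
    filter_incoming(Path("inbox"), Path("quarantine"))
```

The design mirrors spam filtering: suspicious items are diverted rather than destroyed, so false positives can be recovered and the detector's decisions audited.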