Imagine & Deepfake will do it for you

By Lokmat English Desk | Updated: June 7, 2024 21:35 IST

These are the days of influencers and influencer marketing. An influencer has a large, engaged social media following and can influence the opinions, behaviours, and purchase decisions of those followers; influencer marketing builds on this through endorsements and product placements from influencers, people and organizations regarded as experts in their field. Can influencers also influence negatively? Can influencer marketing hurt a product? Hard to believe? AI can do that.

Can disinformation on online platforms sway voting behaviour and party support? The US non-profit Freedom House says yes, adding that AI lets such disinformation be created and spread faster and more cheaply. The medium is the deepfake, once a niche area of computer science. Deepfake technology uses artificial intelligence (AI) and machine learning (ML) to create highly realistic and convincing fake images, videos, or audio recordings.

The core technology behind deepfakes is typically the Generative Adversarial Network (GAN), which consists of two neural networks: a generator and a discriminator. The generator creates fake data, while the discriminator tries to tell fake data from real. Over time, the generator gets better at producing realistic data as it learns to fool the discriminator. Deepfakes require extensive datasets to train the AI: to create a deepfake video of a person, thousands of images or video frames of that person are used to train the model, and the more data available, the more convincing the deepfake can be. Common applications include face swapping, where one person's face is superimposed onto another person's body in a video, and voice synthesis, where AI mimics a person's voice by analysing speech patterns and tones.
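
To make the generator–discriminator idea concrete, here is a minimal, illustrative sketch of a GAN training loop in PyTorch. The tiny fully connected networks and the random placeholder "real" data are assumptions made for brevity; actual deepfake systems train far larger convolutional models on huge datasets of faces or voices.

```python
# Minimal sketch of a GAN training loop in PyTorch (illustrative only).
# The small fully connected networks and random "real" data are stand-ins;
# real deepfake pipelines use large convolutional models and huge face datasets.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, data_dim) * 2 - 1        # placeholder for real samples
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator step: learn to label real data 1 and generated data 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to make the discriminator label fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```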

Such synthetic media are often indistinguishable from the real thing, making them a powerful tool for applications both benign and malicious. They are difficult to identify, and even when they are identified, levying penalties is hard because there are no guidelines. In the recently concluded elections, there were cases of microtargeting in which voters were fed disinformation in order to spread false narratives.

Midjourney is a generative AI programme and service created and hosted by the San Francisco–based independent research lab Midjourney, Inc. It generates images from natural language descriptions, called prompts, much like OpenAI's DALL-E and Stability AI's Stable Diffusion. With these tools openly available, how safe should people feel? How authentic is any of the information available in the public domain? If deepfake videos of female opposition politicians in bikinis emerge, or male opposition leaders appear in compromising positions, or deepfake voiceovers instigate violence, what should one do? DeepMedia, a company developing tools to detect synthetic media, estimates that at least 500,000 video and voice deepfakes were shared on social media sites globally in 2023.
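
To show how little effort prompt-to-image generation now takes, here is a brief sketch using OpenAI's Python SDK. It assumes an API key is available in the environment; the prompt, model name, and parameters are illustrative and may differ by provider and over time.

```python
# Illustrative sketch: generating an image from a text prompt with OpenAI's
# Python SDK. Assumes the OPENAI_API_KEY environment variable is set; the
# prompt and model choice here are examples only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolour painting of a lighthouse at sunrise",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image
```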

Deepfake technology offers immense opportunities. However, it is also a potential threat to democracy, peace, individual privacy, and the way the world functions. A manipulated video of the Russian President declaring full-scale war against Ukraine was aired on Russian television recently; some of the potential damage was thwarted when the Kremlin confirmed that the video was fake. Yet after fact-checkers labelled the video as “fake”, another deepfake of the leader appeared insisting it was real. The 2023 State of Deepfakes report says such videos have increased more than fivefold (550%) since 2019. About 98% of all deepfake videos are pornographic, and 99% of those target women. We are aware of the case involving actress Rashmika Mandanna, whose face was superimposed on a video of a Gujarati influencer. In similar incidents, deepfake explicit videos of actresses Alia Bhatt, Kajol, Aishwarya Rai, and Katrina Kaif raised alarms about the invasion of privacy.

What can be done? Are social media platforms liable for third-party content posted on their sites if they do not act? Can a watermark be added to show that a piece of content was made with AI? Should there be a global framework for regulating AI? Research into algorithms that detect deepfakes by analysing inconsistencies or artifacts in the media must be fast-tracked. And there is nothing more effective than public awareness.
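
The watermarking idea raised above can be illustrated with a toy sketch: hiding and reading back a "made with AI" marker in an image's least significant bits using Pillow and NumPy. This is only a demonstration of the concept; production provenance schemes (for example, C2PA-style signed metadata or robust invisible watermarks) are designed to survive compression and editing, which this toy version would not.

```python
# Toy sketch: hiding a "made with AI" marker in an image's least significant
# bits (LSB steganography) with Pillow and NumPy, and reading it back.
# Purely illustrative: this marker would not survive JPEG compression,
# resizing, or screenshots, unlike production watermarking/provenance schemes.
import numpy as np
from PIL import Image

MARKER = "AI-GENERATED"

def embed_marker(img: Image.Image, marker: str = MARKER) -> Image.Image:
    arr = np.array(img.convert("RGB"))
    flat = arr.flatten()
    bits = "".join(f"{byte:08b}" for byte in marker.encode("ascii"))
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)   # overwrite the lowest bit
    return Image.fromarray(flat.reshape(arr.shape))

def read_marker(img: Image.Image, length: int = len(MARKER)) -> str:
    flat = np.array(img.convert("RGB")).flatten()
    bits = "".join(str(flat[i] & 1) for i in range(length * 8))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii", errors="replace")

# Usage: tag a freshly generated image and verify the tag later.
generated = Image.new("RGB", (64, 64), color=(120, 90, 200))
tagged = embed_marker(generated)
print(read_marker(tagged))   # -> "AI-GENERATED"
```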
