
Rewind: Seeing isn’t believing

It is unclear exactly how deepfake technology will evolve, but visual manipulation is clearly here to stay, and there are no quick fixes to address the issue yet

By Telangana Today
Published Date - 8 June 2024, 11:55 PM

By Vishesh Khanna

Deepfake technology is a relatively new development in the fast-changing field of digital technology. It has attracted a great deal of interest and is being debated vigorously because of its serious implications for security and privacy. The term ‘deepfakes,’ a blend of ‘deep learning’ and ‘fake,’ describes extremely sophisticated artificial intelligence (AI)-generated content. These are usually images, videos or audio files manipulated to convincingly show people saying or doing things they never did.

Deepfake technology was initially created for creative and entertainment purposes, but it has advanced rapidly and become a serious concern because of its capacity to manipulate and deceive. One of the main worries is that deepfakes can erode credibility and trust by producing hyper-realistic fakes that are almost indistinguishable from genuine recordings. Such fabricated media can be used maliciously to spread false information on an unprecedented scale, discredit people and sway public opinion.

• The first known use of deepfakes in a military confrontation occurred during the Russia-Ukraine war. Examples included video game clips passed off as proof of the legendary fighter pilot ‘The Ghost of Kyiv,’ a fake video of Russian President Vladimir Putin declaring peace with Ukraine, and the hacking of a Ukrainian news site to display a fake message, purportedly from Volodymyr Zelenskyy, indicating surrender

Deepfakes have significant moral and social implications, especially regarding privacy invasion and security risks. The technology’s capacity to overlay one individual’s likeness onto another person’s body creates serious privacy problems: it blurs the line between fact and fiction and may result in the unauthorised use of private photos or videos. Deepfakes can also pose serious global security dangers, since they can be used to manipulate financial markets, impersonate public figures or instigate conflict by spreading false information.

The need to address the moral, legal and technological issues raised by deepfakes is growing as AI’s capabilities develop. Remedial measures that include improved detection techniques, legal structures and public education initiatives are essential for reducing the negative effects of deepfake technology on security, privacy and public confidence. Achieving a balance between innovation and precautions against potential misuse is still a crucial task when navigating the intricate world of deepfakes and related privacy and security issues.

• The number of deepfake videos on the internet doubled between 2018 and 2021, reaching 14,678. It was predicted that around 5 lakh voice and video deepfakes would flood social media in 2023

Deceptive Innovation
Deepfake technology opens up extensive potential for criminal activity. While the technology itself is not inherently dangerous, it is a tool capable of facilitating crimes that affect individuals and society alike. It can be used to carry out the following illegal acts:

• Stealing identities and fabricating virtual representations

Deepfakes enable identity theft and virtual forgery, serious crimes with potentially devastating consequences for victims and for society as a whole. Using deepfake technology to steal someone’s identity, create deceptive impressions of people or influence public opinion can seriously harm a person’s credibility and integrity while spreading misleading information. These offences can be prosecuted under Sections 66 and 66-C (penalties for identity theft) of the Information Technology Act, 2000. Additionally, Sections 420 and 468 of the Indian Penal Code, 1860, could be invoked to address these matters.

• Spreading misinformation against government

Deepfakes pose a serious threat to the community at large when employed to disseminate deceptive or inaccurate information, challenge the legitimacy of governments, or incite animosity and discontent toward them. The widespread dissemination of false or misleading information can sow doubt and undermine public trust, serving as a mechanism to shape public perceptions or sway political outcomes. These offences may be prosecuted under Section 66-F (cyber terrorism) of the Information Technology Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022. Additionally, Section 121 (waging war against the Government of India) and Section 124-A of the Indian Penal Code, 1860, could be invoked.

• Defamation and hate speech

Using deepfakes to disseminate hate speech or other harmful content can seriously damage people’s reputations and general well-being, creating a dangerous environment on the internet. These offences could be prosecuted under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022, as specified in the Information Technology Act, 2000. Moreover, Sections 153-A and 153-B (speech affecting public tranquillity) and Section 499 (defamation) of the Indian Penal Code, 1860, may also be invoked.

• Obscenity and pornography

With this technology, fake pictures or videos can be produced showing people doing or saying things that never happened. Deepfakes can also be employed maliciously for misinformation campaigns, political propaganda or non-consensual pornography. When used to disseminate misleading information or sway public perception, they can harm both society as a whole and the individuals whose photos or likenesses are used without authorisation. Such crimes can be prosecuted under Section 66-E (punishment for breach of privacy), Section 67 (punishment for publishing or transmitting obscene material in electronic form), Section 67-A (punishment for publishing or transmitting material containing sexually explicit acts in electronic form) and Section 67-B (punishment for publishing or transmitting electronic content depicting children in sexually explicit acts or pornography) of the Information Technology Act, 2000.

Furthermore, Sections 292 and 294 (punishment for sale, etc, of obscene material) of the Indian Penal Code, 1860, and Sections 13, 14 and 15 of the Protection of Children from Sexual Offences Act, 2012 (POCSO), could be invoked to safeguard the rights of women and children in this context.

• The absence of explicit regulations in the IT Act of 2000 regarding deepfakes, artificial intelligence, and machine learning poses challenges in effectively overseeing the implementation of these technologies

The existing legal framework in India concerning cybercrimes linked to deepfakes is insufficient to fully tackle the issue. The absence of explicit regulations in the IT Act regarding deepfakes, AI and machine learning makes it difficult to effectively oversee the deployment of these technologies. The IT Act may need to be amended to adequately control deepfake-related offences, with provisions that specifically address the use of deepfakes and outline the consequences of their misuse. This may entail harsher punishments for those who produce or distribute deepfakes with the intention of harming others, as well as stronger legal protections for those whose likenesses or photos are used without permission.

It is imperative to acknowledge that the development and application of deepfakes pose a global threat, requiring likely international collaboration and coordination to effectively manage their use and prevent privacy violations.

What the World’s Doing

On June 5, the Australian government introduced the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, which seeks to impose severe criminal penalties for distributing sexually explicit content without consent, including material generated or altered using artificial intelligence or other technologies. Sharing digitally created or modified sexually explicit material without consent is a harmful and profoundly distressing form of abuse. This malicious behaviour can degrade, humiliate and dehumanise victims. It disproportionately targets women and girls, reinforcing damaging gender stereotypes and contributing to gender-based violence. Offenders face stringent penalties, with up to six years of imprisonment for sharing non-consensual deepfake sexually explicit content. The Bill also includes two aggravated offences aimed at repeat offenders and the creators of such content, each carrying a potential sentence of up to seven years in prison.

• A significant step taken by the EU was the creation of an AI office on February 21 to facilitate the implementation of guidelines related to the detection of deepfake content

The European Union (EU) regulates deepfakes through its AI Act. Article 52(3) of the Act requires transparency from the creator: the creator of a deepfake must disclose its origin and the method used to create it. The aim is to inform consumers about the content they are viewing and make them less prone to deception. Recently, more than 25,000 tech enthusiasts from 145 countries deliberated on regulating deepfakes at the UN conference on Artificial Intelligence held in Geneva.
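The transparency obligation described above amounts to attaching a machine-readable disclosure to AI-generated media. The sketch below illustrates the idea with a hypothetical manifest format (the field names and structure are illustrative, not an official AI Act or industry schema): a label stating that the content is AI-generated, the method used, and a hash binding the label to the exact file it describes.

```python
import hashlib
import json

def make_disclosure_manifest(media_bytes: bytes, creator: str, method: str) -> str:
    """Build a transparency label bound to the media it describes.

    The manifest format is hypothetical -- a minimal illustration of the
    origin-and-method disclosure the AI Act's transparency obligation
    contemplates, not an official schema.
    """
    manifest = {
        "ai_generated": True,
        "creator": creator,
        "generation_method": method,
        # Hash ties the label to this exact file, so it cannot be
        # silently reused for different (e.g. tampered) content.
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

def verify_manifest(media_bytes: bytes, manifest_json: str) -> bool:
    """Check that a manifest actually refers to this media file."""
    manifest = json.loads(manifest_json)
    return manifest["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()
```

Because the hash changes with any edit to the file, a viewer (or platform) can detect when a disclosure label has been detached from its original content, which is the property a transparency rule needs to be enforceable.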

India currently does not have any statutes, regulations or policies that expressly govern the use of AI. However, with the rapid misuse of deepfake technology, the Ministry of Electronics and Information Technology (MeitY) intends to put strong controls in effect.

Recommendatory Framework

Firstly, regulate deepfake technology providers and hold them accountable for any misuse or harm caused by their platforms or software. This would compel providers to take necessary precautions against malicious usage and could make them legally liable for any damages resulting from their technology.

Secondly, treat malicious use of deepfakes as a criminal offence. India would need to enact specific laws or amend existing ones to explicitly classify the malicious creation, distribution or use of deepfakes as criminal offences, with clear criteria defining the illegal activities involved. Establishing appropriate penalties for individuals or entities engaged in malicious deepfake activities is crucial; these might include fines, imprisonment, or both, depending on the severity of the offence and the harm caused.

Thirdly, create a decentralised peer review network that uses community-driven verification and validation to curb the spread of deepfakes. Such a network could involve a community of users, experts and volunteers who contribute to the verification process, analysing and assessing content flagged as potentially manipulated or suspicious.
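The peer-review proposal above can be sketched as a simple quorum-and-majority scheme: reviewers submit independent verdicts on a flagged item, and once enough verdicts arrive, the majority decides whether it is marked as manipulated. All names and thresholds below are illustrative assumptions, not a specification from the article.

```python
from collections import defaultdict

class PeerReviewNetwork:
    """Toy sketch of community-driven content verification.

    Reviewers submit independent verdicts on a flagged item; once a
    quorum is reached, the majority verdict decides whether the item is
    marked as manipulated. Quorum size and statuses are illustrative.
    """

    def __init__(self, quorum: int = 3):
        self.quorum = quorum
        # content_id -> {reviewer: verdict}; one verdict per reviewer,
        # so a single participant cannot stuff the ballot.
        self.votes = defaultdict(dict)

    def submit_verdict(self, content_id: str, reviewer: str, manipulated: bool) -> None:
        # A later vote from the same reviewer overwrites their earlier one.
        self.votes[content_id][reviewer] = manipulated

    def status(self, content_id: str) -> str:
        verdicts = list(self.votes[content_id].values())
        if len(verdicts) < self.quorum:
            return "pending"
        flagged = sum(verdicts)  # True counts as 1
        return "flagged" if flagged > len(verdicts) / 2 else "cleared"
```

Keying votes by reviewer, rather than simply counting submissions, is the one design choice that matters here: it is what makes the review "peer" review instead of a raw tally that a single account could dominate.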

Lastly, build international consensus to ensure the ethical use of deepfakes. Geopolitical tensions increase when foreign governments, or organisations associated with foreign state institutions, deploy deepfakes and misinformation. Greater international cooperation and diplomatic effort are required to ease tensions and avoid conflict. Although agreements have been reached at the regional level, no legally binding global agreements yet address the problems of information conflict and deepfake-driven misinformation.

The rapid development of deepfake technology poses serious risks to the integrity of all video content. It presents numerous societal and economic hazards, including the manipulation of legal frameworks, economic systems, political processes and scientific integrity. The technology leaves people, especially women, more vulnerable to intimidation, extortion and defamation. These concerns are heightened by the fact that deepfakes have so far primarily involved superimposing victims’ faces onto performers in pornographic videos.

The increased likelihood of encountering manipulated content leads society to view all audiovisual media with greater scepticism. Audiovisual evidence will be scrutinised more closely and require more corroboration; as a result, it becomes harder to establish the credibility of authentic material. The spread of deepfakes thus accelerates a gradual erosion of trust. Deepfake technology is developing in a dynamic and unpredictable way. It is unclear exactly how it will evolve, but visual manipulation is clearly here to stay, and there are no quick fixes for the dangers it poses. Assessing these risks effectively requires continuous study, and the Indian government can take the lead in guiding this evolving process.

Allowing for AI innovation to happen in your country is going to be important, otherwise you risk getting left behind. So getting regulation in a way you can promote and you can embrace innovation while mitigating the harms is the balance that countries are grappling with

– Sundar Pichai, Google CEO


(The author is an undergraduate law student at National Law University, Odisha)
