
International Law

AI (Artificial Intelligence)

 14-Aug-2024

Source: The Hindu 

Introduction 

The year 2024 began with concerns about new security threats, particularly those arising from Artificial Intelligence in its various forms. Security specialists worldwide anticipated a wave of attacks, with the 33rd Summer Olympic Games in France seen as a prime target. The absence of major incidents during the Games was a relief and a triumph for security managers, but experts warn that it does not signal an end to the need for constant vigilance as new digital threats continue to emerge. 

The Rise of AI-Enabled Disinformation 

  • The year 2024 witnessed an unprecedented surge in disinformation campaigns, largely facilitated by advancements in Artificial Intelligence (AI).  
  • The Taiwan elections in January 2024 served as a stark example of an AI-enabled attack, with the political landscape inundated by fake posts and videos that caused widespread confusion among the electorate.  
  • While China was initially suspected as the source, the incident highlighted a crucial point: in today's digital age, the true origins of disinformation are often obscured. 

The Proliferation of Deepfakes 

  • Deepfakes, comprising digitally manipulated video, audio, or images, have become increasingly prevalent and sophisticated.  
  • These AI-generated forgeries have repeatedly made headlines, creating a miasma of disinformation that is often difficult to dispel.  
  • The damage caused by deepfakes is frequently irreversible, as the truth often emerges long after the false narrative has taken hold in the public consciousness. 

The Intersection of Cyber Attacks and AI 

  • The combination of cyber-attacks and AI-enabled disinformation has emerged as a potent threat to national security.  
  • The ongoing conflict in Ukraine serves as a case study in how opposing sides can employ these tactics to disrupt critical infrastructure, including telecommunications and power grids. 
  • This hybrid warfare approach has caused significant havoc and underscores the need for a comprehensive defense strategy that addresses both cyber and AI-enabled threats. 
  • The CrowdStrike Outage 
    • In July 2024, a faulty update to CrowdStrike's Falcon security software caused Microsoft Windows systems to crash, providing a chilling preview of the potential consequences of a large-scale cyber attack. 
    • The outage spread across the globe within hours, disrupting flight operations, air traffic control systems, and stock exchanges worldwide.  
    • The Indian Computer Emergency Response Team (CERT-IN) issued a critical severity rating for the incident, highlighting its far-reaching implications. While not a deliberate attack, this event demonstrated the vulnerability of our interconnected digital infrastructure. 
  • Other Past Cyber Attacks 
    • The 2017 WannaCry ransomware attack, which infected over 230,000 computers in 150 countries, causing billions of dollars in damages. 
    • The Shamoon computer virus, which first struck Saudi Aramco in 2012 and resurfaced in 2016-17, primarily targeted oil companies and was dubbed the "biggest hack in history" at the time. 
    • The 'Petya' Malware attack, also in 2017, which severely impacted banks, electricity grids, and numerous institutions across Europe, the United Kingdom, the United States, and Australia. 
    • The Stuxnet attack, discovered in 2010, which infected over 200,000 computers and physically degraded roughly 1,000 centrifuges in Iran's nuclear program, demonstrating the potential for state-sponsored cyber warfare. 

What are the AI Legislations Around the World? 

European Union: 

  • The European Union reached agreement on the EU AI Act on 9th December 2023, and the Act entered into force on 1st August 2024, establishing the first legally binding framework for AI regulation globally. 
  • The EU AI Act adopts a risk-based approach, categorizing AI systems based on their potential risk to humanity. 
  • Minimal-risk AI applications, such as recommendation systems, are exempt from mandatory rules. 
  • High-risk AI systems, including those used in medicine, education, and hiring, are subject to strict requirements including:  

a) Risk-mitigation systems

b) High-quality data sets 

c) Activity logging 

d) Detailed documentation 

e) Clear user information 

f) Human oversight 

g) Robust accuracy and cybersecurity measures

  • All AI systems must clearly disclose to users that they are AI-powered. 
  • AI systems deemed to present "unacceptable risk," such as social scoring and workplace emotion-sensing devices, are prohibited. 
  • The EU AI Act aims to balance regulation with fostering innovation and competitiveness in the AI sector. 
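The risk-based approach described above can be sketched as a simple classification scheme. The sketch below is purely illustrative: the use-case names and the tier mapping are assumptions for demonstration, not an implementation of the Act's actual legal tests.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely following the EU AI Act's categories."""
    MINIMAL = "minimal"            # e.g. recommendation systems: no mandatory rules
    HIGH = "high"                  # e.g. medicine, education, hiring: strict requirements
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: prohibited outright

# Hypothetical mapping of use cases to tiers, for demonstration only.
USE_CASE_TIERS = {
    "product_recommendation": RiskTier.MINIMAL,
    "medical_diagnosis": RiskTier.HIGH,
    "cv_screening": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_sensing": RiskTier.UNACCEPTABLE,
}

# The seven obligations listed above for high-risk systems.
HIGH_RISK_OBLIGATIONS = [
    "risk-mitigation system",
    "high-quality data sets",
    "activity logging",
    "detailed documentation",
    "clear user information",
    "human oversight",
    "robust accuracy and cybersecurity measures",
]

def compliance_requirements(use_case: str) -> list[str]:
    """Return the obligations that would apply to a given use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{use_case!r} is prohibited under the Act")
    # Every AI system must disclose to users that it is AI-powered;
    # high-risk systems additionally carry the full list of obligations.
    requirements = ["disclose AI use to users"]
    if tier is RiskTier.HIGH:
        requirements += HIGH_RISK_OBLIGATIONS
    return requirements
```

A compliance team could use a lookup of this shape to decide, per product, which obligations apply before deployment; the real Act, of course, defines its categories by legal criteria rather than a fixed table.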

United Kingdom: 

  • The UK has adopted a "pro-innovation" approach to AI regulation, eschewing new and complex guardrails. 
  • Existing regulators are tasked with interpreting and implementing the UK's core AI principles:  

a) Safety 

b) Transparency 

c) Fairness 

d) Accountability 

e) Contestability

  • The UK government has established "central functions" to create a common understanding of AI risks among regulators. 
  • The UK government has stated it will not pass any immediate laws on AI regulation. 
  • The UK hosted the AI Safety Summit at Bletchley Park in November 2023, resulting in the Bletchley Declaration signed by 28 countries. 
  • The Bletchley Declaration aims to establish a shared understanding of AI risks and emphasize international collaboration in risk management. 

China: 

  • Since 2021, China has issued a series of AI regulations, focusing primarily on recommendation algorithms, deep-synthesis content (deepfakes), and generative AI. 
  • Chinese regulations impose controls on AI-powered recommendation systems, including:  

a) A ban on exploitative dynamic pricing based on users' personal data 

b) Strict transparency requirements for users interacting with AI-generated content

  • China's AI governance approach aims to balance safety protections with innovation incentives. 
  • Chinese AI regulations are primarily directed towards private companies rather than state entities. 
  • China's AI rules emphasize maintaining social order and combating misinformation and disinformation. 
  • Chinese AI regulations require adherence to "socialist core values" as defined by the state. 

What are AI legislations in India? 

  • The Indian legal system currently lacks codified laws, statutory rules, or regulations specifically governing the use of Artificial Intelligence (AI). 
  • The establishment of a comprehensive regulatory framework for AI is deemed essential to guide stakeholders in the responsible development, deployment, and management of AI technologies in India. 
  • Certain sector-specific frameworks have been identified for the development and use of AI in India: 
    • In the financial sector, the Securities and Exchange Board of India (SEBI) issued a circular in January 2019 addressing reporting requirements for AI and Machine Learning (ML) applications and systems utilized by Stockbrokers, Depository Participants, Recognized Stock Exchanges, and Depositories. 
    • In the healthcare sector, the strategy for the National Digital Health Mission (NDHM) recognizes the necessity of developing guidance and standards to ensure the reliability of AI systems in health-related applications. 
  • On 9th June, 2023, the Ministry of Electronics and Information Technology (MeitY) proposed that AI may be subject to regulation in India in a manner similar to other emerging technologies, with the primary aim of protecting digital users from potential harm. 
  • MeitY has stated that the perceived threat of AI replacing human jobs is not imminent, citing the following reasons: 
    • Current AI systems are primarily task-oriented.  
    • Existing AI technologies lack the sophistication required to fully replace human labor.  
    • AI systems do not possess human reasoning and logic capabilities. 
  • The absence of a comprehensive AI-specific regulatory framework in India necessitates the development of guidelines and standards to ensure responsible AI practices across various sectors. 
  • The proposed regulation of AI in India is intended to align with the country's broader digital governance strategy and to address potential risks associated with AI technologies. 
  • The development of AI regulations in India is expected to involve consultation with relevant stakeholders, including industry experts, academics, and policymakers. 
  • Any future AI regulatory framework in India is likely to consider international best practices while addressing country-specific needs and challenges. 
  • The regulation of AI in India may require amendments to existing laws or the enactment of new legislation to effectively address the unique characteristics and implications of AI technologies. 

What are the Current Threats Posed by Cyber Fraud and Hacking? 

  • While AI disinformation poses a significant global threat, cyber threats remain an immediate and persistent danger to individuals. 
  • The incidence of cyber fraud and hacking has increased exponentially in recent years, affecting a growing number of victims. 
  • Fraudsters commonly pose as delivery company agents to obtain personal information for malicious purposes, presenting a threat to daily life. 
  • There is a notable increase in false credit card transactions, wherein perpetrators obtain personal information to defraud unsuspecting individuals. 
  • The compromise of business emails is becoming increasingly prevalent. 
  • Phishing, a widespread form of cyber fraud, involves the theft of personal information such as customer IDs, credit/debit card numbers, and PINs. 
  • Spamming, defined as the receipt of unsolicited commercial messages through electronic messaging systems, is a pervasive issue. 
  • Identity theft has emerged as one of the most serious and widespread cybersecurity threats. 
  • Democratic governments worldwide are endeavoring to implement systems to address digital threats. 
  • Industry and private institutions lag behind in cybersecurity measures, rendering them particularly vulnerable to digital attacks. 
  • The implementation of firewalls, anti-virus defenses, and backup/disaster recovery systems, while necessary, is insufficient to fully address cybersecurity risks. 
  • Many corporate CEOs lack adequate preparation to address digital threats effectively. 
  • The appointment of a Chief Information Security Officer (CISO) is advisable to assess and advise on organizational cybersecurity measures. 
  • Awareness of digital threats constitutes the initial step in combating cyber and AI-directed threats. 
  • The unauthorized use of Generative AI content has become a common tool in digital bullying. 
  • Prevention of digital threats requires substantial effort and appropriate budgetary allocations in both private and public sectors. 
  • Potentially dangerous digital technologies necessitate increased and specific attention from those in positions of authority, particularly in democratic societies. 
  • Awareness of digital bullying and other forms of manipulation is crucial to prevent the escalation of such situations. 
  • Addressing digital threats requires coordinated action among various stakeholders. 
  • Nations, especially democracies, face attacks from novel sources in the digital realm. 
  • There exists an urgent need to counter digital surveillance, disinformation, bullying, and manipulation to ensure societal resilience and survival. 
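Awareness, identified above as the first step, can be supported by simple automated screening. The sketch below scores URLs for phishing-like traits; the heuristics and weights are assumptions chosen for illustration, not a production filter, which would rely on threat-intelligence feeds and machine learning.

```python
import re
from urllib.parse import urlparse

# Hypothetical bait words often seen in phishing paths; illustrative only.
SUSPICIOUS_KEYWORDS = {"verify", "login", "update", "secure", "account"}

def phishing_score(url: str) -> int:
    """Return a crude suspicion score for a URL (higher = more suspicious)."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0
    if parsed.scheme != "https":
        score += 1                       # no TLS on a page asking for data
    if re.fullmatch(r"[\d.]+", host):
        score += 2                       # raw IP address instead of a domain
    if host.count("-") >= 2 or host.count(".") >= 3:
        score += 1                       # lookalike or deeply nested hostname
    path_words = set(re.findall(r"[a-z]+", parsed.path.lower()))
    score += len(path_words & SUSPICIOUS_KEYWORDS)  # bait keywords in the path
    return score
```

For example, a plain documentation link scores 0, while an HTTP link to a bare IP address with "verify" and "login" in its path accumulates several points and would merit a warning to the user.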

Conclusion  

In 2024, the rise of AI and cyber threats highlights the urgent need for strong security measures and clear regulations. While AI-driven disinformation and advanced cyber attacks are significant risks, it's crucial for governments and businesses to stay alert and proactive. To protect against these evolving threats, we need better cybersecurity practices and effective laws governing AI. Ongoing cooperation and adaptation will be key to keeping our systems secure and resilient.