
Editorial of the Day: The EU’s Artificial Intelligence Act (The Hindu)

Context: The article discusses the recent preliminary deal reached by members of the European Parliament on a new draft of the European Union’s Artificial Intelligence Act, which aims to regulate the use and development of general-purpose artificial intelligence systems such as OpenAI’s ChatGPT. The original draft of the Act was prepared two years ago; the new draft updates it to account for the latest developments in AI. The article also examines the stipulations of the new AI Act, the problem of “black boxes”, the risk categories into which AI systems are classified, and how the popularity of ChatGPT accelerated and reshaped the process of regulating artificial intelligence.

The EU’s Artificial Intelligence Act: Background

What is Artificial Intelligence?

  • Artificial Intelligence (AI) is a concept that refers to the ability of machines to accomplish tasks that historically required human intelligence.
  • It includes various technologies such as machine learning, pattern recognition, big data, neural networks, and self-learning algorithms.
  • AI involves feeding data into machines so that they can respond appropriately to different situations. The goal of AI is to create self-learning systems that can answer questions they have never encountered before, much as a human would.
  • Some examples of AI in action include:
    • Virtual personal assistants such as Siri, Alexa, and Google Assistant, which use natural language processing and machine learning to perform tasks such as setting reminders, playing music, and answering questions.
    • Recommendation systems used by online retailers and streaming platforms, which use machine learning algorithms to suggest products or content based on user preferences and behaviour (a minimal sketch of this idea follows this list).
    • Fraud detection systems used by banks and credit card companies, which use machine learning to analyse transaction data and identify patterns that may indicate fraudulent activity.
    • Autonomous vehicles, which use sensors, machine learning algorithms, and other AI technologies to navigate roads and avoid obstacles.
    • Smart home devices, such as thermostats and security systems, which use machine learning to learn user behaviour and preferences and adjust settings accordingly.
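To make the recommendation-system example concrete, here is a minimal, hypothetical sketch of item-based recommendation using cosine similarity over a small user-item rating matrix. The data and function names are invented purely for illustration; production recommenders use far richer signals and models.

```python
import numpy as np

# Hypothetical user-item rating matrix: rows = users, columns = items.
# 0 means the user has not rated the item.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two vectors (0 when either is all-zero)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user_idx, k=2):
    """Score each unrated item by its similarity to items the user rated."""
    scores = {}
    for item in range(ratings.shape[1]):
        if ratings[user_idx, item] > 0:
            continue  # already rated; nothing to recommend
        # Weight each of the user's ratings by item-item column similarity.
        sims = [
            cosine_similarity(ratings[:, item], ratings[:, other])
            for other in range(ratings.shape[1]) if ratings[user_idx, other] > 0
        ]
        rated = ratings[user_idx, ratings[user_idx] > 0]
        scores[item] = float(np.dot(sims, rated)) / (sum(sims) or 1.0)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(0))  # items user 0 has not rated, best first
```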

Evolution of AI: Timeline

The evolution of Artificial Intelligence (AI) can be traced back to the mid-20th century when computer scientists first started developing machines that could perform tasks requiring human intelligence.

  • The birth of AI: In 1956, the field of AI was born when John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon organized the Dartmouth Conference, which is considered the birthplace of AI.
  • Rule-based systems: In the 1960s and 1970s, AI research focused on developing rule-based systems that could mimic human reasoning and decision-making.
  • Expert systems: In the 1980s, expert systems were developed that could perform specialized tasks by emulating the decision-making processes of human experts.
  • Machine learning: In the 1990s, machine learning algorithms were developed that could learn from data and improve their performance over time.
  • Big data: In the 2000s, the rise of big data and cloud computing made it possible to process massive amounts of data and train more powerful AI models.
  • Deep learning: In the 2010s, deep learning algorithms were developed that could analyze large amounts of data and recognize patterns with unprecedented accuracy.
  • Neural networks: In recent years, continued advances in neural network architectures (which also underpin deep learning) have led to breakthroughs in natural language processing, image recognition, and other areas.

Branches of Artificial Intelligence:

  • Expert systems: AI systems that use knowledge and rules to make decisions.
  • Machine learning: A technique that enables machines to learn from data and improve their performance.
  • Deep learning: A subfield of machine learning that uses neural networks to analyze large amounts of data.
  • Natural language processing: A technique that enables machines to understand and process human language.
  • Computer vision: A technique that enables machines to interpret and understand visual information.
  • Robotics: A field that combines AI with engineering to create machines that can perform physical tasks in the real world.
  • Fuzzy logic: A technique that allows for imprecision and uncertainty in decision-making.
  • Evolutionary algorithms: A technique that uses evolutionary principles to optimize AI systems.
  • Swarm intelligence: A technique that models the collective behaviour of groups of animals to solve problems.
  • Artificial neural networks: A technique that uses interconnected nodes to process and analyze data.
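As a concrete illustration of the artificial neural networks entry above, the following is a minimal, hypothetical sketch of a single forward pass through a tiny two-layer network in NumPy. The weights here are random placeholders; a real network would learn them from data (for example, via backpropagation).

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer network: 3 inputs -> 4 hidden units -> 1 output.
# Random weights are for illustration only; real networks learn theirs.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """One forward pass: each layer is a weighted sum plus a nonlinearity."""
    hidden = relu(x @ W1 + b1)
    return sigmoid(hidden @ W2 + b2)

x = np.array([0.5, -1.2, 3.0])   # an example input vector
print(forward(x))                 # a value in (0, 1), e.g. a class score
```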

India and AI:

India is rapidly emerging as a major player in the field of Artificial Intelligence (AI). The country has a large pool of talented engineers, data scientists, and researchers who are working on cutting-edge AI projects.

  • As per the recent Global AI Index report, India is ranked 20th among 172 countries in terms of AI readiness and deployment. The report takes into account factors such as talent, infrastructure, policy environment, research and development, and commercial applications to assess the AI readiness of countries.
    • India’s ranking in AI has improved compared to the previous years.
    • The report highlights India’s strong talent pool and its potential to become a global leader in AI research and development.
    • However, it also notes that there is a need for more investment in AI infrastructure and policies to support the growth of AI in India.
  • Contribution to GDP: AI is estimated to add $957 billion to India’s GDP by 2035, boosting India’s annual growth rate by 1.3 percentage points.
  • Key developments related to AI in India:
    • National AI Strategy: In 2018, NITI Aayog published a draft national AI strategy with the aim of positioning the country as a global leader in AI research and development. The strategy focuses on five key areas: healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation.
    • AI Research Institutes: The Indian government has backed several research centres dedicated to AI, such as those at the Indian Institute of Technology (IIT) Hyderabad, the Centre for Artificial Intelligence and Robotics (CAIR), and the Centre of Excellence in Artificial Intelligence.
    • Startup Ecosystem: India has a vibrant startup ecosystem, and several AI startups have emerged in recent years. Some of the notable startups include Haptik, Niki.ai, and Mad Street Den.
    • Corporate Investments: Several major corporations have also invested in AI research and development in India. For example, Google has set up an AI lab in Bangalore, while Microsoft has established an AI research lab in Bangalore and an AI engineering hub in Hyderabad.
    • Regulatory Initiatives: The Reserve Bank of India (RBI) has issued guidelines for the use of AI in the banking sector, which include requirements for explainability and transparency in AI decision-making. The Securities and Exchange Board of India (SEBI) has also proposed guidelines for the use of AI in the capital markets, which include requirements for data quality, model validation, and explainability.
    • AI Applications: AI is being used across various sectors in India, such as healthcare, agriculture, finance, and education. For example, AI is being used to improve crop yields and predict weather patterns in agriculture, while it is being used to improve patient outcomes and reduce healthcare costs in healthcare.

Decoding the Editorial

The article discusses how the AI Act was formulated and why Artificial Intelligence needs to be regulated.

  • Formulation of the AI Act:
    • The AI Act was drafted by the European Union in 2021 through a lengthy and consultative process.
    • The process began in 2018 with the creation of the High-Level Expert Group on Artificial Intelligence (AI HLEG), which was tasked with advising the European Commission on ethical and legal issues related to AI.
    • The AI HLEG published its Ethics Guidelines for Trustworthy AI in 2019, which formed the basis for the development of the AI Act.
    • The European Commission then launched a public consultation on AI in 2020, which received over 1,200 responses from a wide range of stakeholders including businesses, academics, and civil society organizations.
    • Based on the feedback received, the European Commission drafted the AI Act in 2021, which was then subject to further consultation and negotiation with the European Parliament and Council.
  • Objectives of the Act:
    • The AI Act is intended to provide a comprehensive framework for regulating AI in the European Union.
    • It aims to ensure that AI is developed and used in a responsible and ethical manner.
    • The Act seeks to strike a balance between promoting the adoption of AI and mitigating the potential risks and harms associated with certain uses of the technology.
    • The goal is to create a framework that promotes transparency, trust, and accountability in AI, while also protecting the safety, health, fundamental rights, and democratic values of the European Union.
    • Similar to General Data Protection Regulation (GDPR), the aim of the AI Act is to establish Europe as a global leader in AI while ensuring that AI in Europe is developed and used in accordance with the values and rules of the European Union.
      • GDPR was a landmark data protection law passed by the European Union in 2018 that established strict rules on data protection and privacy.

Key Provisions of the Draft AI Act:

  • Definition of AI: The draft Act provides a broad definition of AI, which includes software that is developed using various techniques to generate outputs such as content, predictions, recommendations, or decisions that can influence the environments they interact with.
    • This definition encompasses a range of AI technologies, including those based on machine learning, deep learning, knowledge-based approaches, and statistical approaches.
  • Classifying AI Technologies: One of the key features of the draft Act is its approach to classifying AI technologies based on the level of risk they pose to the “health and safety or fundamental rights” of individuals.
    • The draft Act identifies four risk categories: unacceptable, high, limited, and minimal (a simple sketch of this tiering appears after this list).
    • The classification is based on the potential harm that the AI technology could cause and the level of trustworthiness and reliability required for the technology to be used safely and ethically.
  • Prohibition of Certain Technologies:
    • The Act prohibits the use of certain AI technologies categorized as “unacceptable risk” with very limited exceptions.
    • The prohibited technologies include real-time facial and biometric identification systems used in public spaces, systems of social scoring by governments that could lead to unjustified and disproportionate treatment of citizens, subliminal techniques to manipulate a person’s behavior, and technologies that could exploit vulnerabilities of specific groups such as the young, elderly, or persons with disabilities.
    • These technologies are considered to pose significant risks to the health and safety, as well as fundamental rights of individuals.
  • AI in the high-risk category: The AI Act focuses on AI systems in the high-risk category.
    • The Act establishes requirements for developers and users of such systems. High-risk systems include biometric identification and categorization of natural persons; AI used in healthcare, education, employment, law enforcement, and justice delivery systems; and tools that provide access to essential private and public services.
    • The Act aims to create an EU-wide database of high-risk AI systems and set parameters so that future technologies or those under development can be included if they meet the high-risk criteria.
    • Before high-risk AI systems can be introduced to the market, they will be subject to strict reviews known as conformity assessments, which analyze data sets fed to AI tools, biases, how users interact with the system, and the overall design and monitoring of system outputs.
    • The Act requires high-risk AI systems to be transparent, explainable, allow human oversight, and provide clear and adequate information to the user. Additionally, high-risk systems must comply with mandatory post-market monitoring obligations, such as logging performance data and maintaining continuous compliance, with special attention paid to how these programs change over time.
  • AI in the limited and minimal risk categories: AI systems in the limited and minimal risk categories, such as spam filters or video games, are allowed to be used with only a few requirements, such as transparency obligations.
  • Obligations on Manufacturers of Generative AI: The current draft does not provide clear obligations for manufacturers of generative AI tools, but there are debates on whether all forms of these tools should be classified as high-risk. The draft is still subject to amendments before it becomes law.
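As a rough illustration of the four-tier scheme referenced above, here is a hypothetical sketch that maps example use cases to risk tiers and the broad obligations the draft attaches to each. The tier assignments and obligation strings are simplified paraphrases for illustration, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (very limited exceptions)"
    HIGH = "conformity assessment, transparency, human oversight, post-market monitoring"
    LIMITED = "transparency obligations (e.g. disclose that the user is talking to an AI)"
    MINIMAL = "no specific obligations"

# Simplified, illustrative mapping of use cases to tiers; the real
# classification in the draft Act is far more detailed and contested.
EXAMPLE_CLASSIFICATION = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "AI in hiring and employment decisions": RiskTier.HIGH,
    "AI in healthcare diagnostics": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Look up the illustrative tier and return its broad obligations."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in EXAMPLE_CLASSIFICATION:
    print(obligations_for(case))
```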

Recent Proposal on General Purpose AI like ChatGPT

  • Until recently, the European Union’s proposal for regulating AI technologies did not address general-purpose AI such as ChatGPT, which is used for various tasks like summarizing information or generating news reports.
  • The proposal mentioned “chatbot” only once. However, in light of the growing interest in generative AI, the European Parliament has been working to update the regulation to address this type of technology, which has both impressed and worried observers since OpenAI unveiled ChatGPT.

Need for AI Act:

  • Increased Risks:
    • As artificial intelligence technologies become more advanced and ubiquitous, there are increased risks and uncertainties associated with them.
    • AI systems now perform a wide range of tasks, from simple ones such as voice assistance and music recommendation to complex, high-stakes decisions such as detecting cancer.
    • However, as AI systems become more complex and sophisticated, there is a growing need to regulate their use and development to ensure that they are safe, transparent, and accountable.
    • The risks associated with AI can include issues such as bias, privacy violations, and the potential for harm to individuals or society as a whole.
      • For example, an AI algorithm used for hiring could discriminate against certain groups based on factors such as gender or race (a simple check for this kind of bias is sketched after this section).
      • Similarly, an AI system used for surveillance could infringe on people’s privacy rights.
    • By regulating AI, policymakers aim to mitigate these risks and ensure that AI systems are developed and used in a responsible and ethical manner.
  • Complex AI Tools/Black Boxes: Some AI tools are so complex that their behaviour cannot be fully explained.
    • These AI tools, often referred to as “black boxes,” arrive at decisions or outputs in ways that even their developers find difficult to understand or explain.
    • This lack of transparency can lead to several issues, including wrongful arrests due to AI-enabled facial recognition, discrimination and societal biases seeping into AI outputs, and inaccurate or copyrighted material generated by chatbots based on large language models such as GPT-3 and GPT-4.
      • The example of wrongful arrests due to AI-enabled facial recognition refers to instances where facial recognition technology has incorrectly identified individuals as suspects in criminal investigations.
      • The potential for discrimination and societal biases to seep into AI outputs refers to the fact that AI systems can reflect the biases and prejudices of their creators or the data they are trained on.
      • Chatbots that generate inaccurate or copyrighted material may raise legal and ethical issues associated with AI-generated content.
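To make the hiring-bias example above concrete, here is a minimal, hypothetical sketch that computes one simple fairness statistic, the demographic parity difference (the gap in selection rates between two groups), over invented screening decisions. Real audits use many metrics, much larger datasets, and careful statistical analysis.

```python
# Invented AI screening decisions: (group label, 1 = shortlisted, 0 = rejected)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def selection_rate(group: str) -> float:
    """Fraction of applicants in a group that the system shortlisted."""
    outcomes = [y for g, y in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: a large gap between groups suggests the
# model's outputs may be biased and warrant closer investigation.
gap = selection_rate("group_a") - selection_rate("group_b")
print(f"selection rates: a={selection_rate('group_a'):.2f}, "
      f"b={selection_rate('group_b'):.2f}, gap={gap:.2f}")
```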

Current Stand of Global AI Governance:

Global AI governance is currently marked by diverging views and approaches to regulating AI technologies.

  • The US has not yet implemented comprehensive AI regulation and has taken a sector-specific approach, with the recently released Blueprint for an AI Bill of Rights outlining five principles for mitigating harms caused by AI.
  • On the other hand, China has enacted nationally binding regulations targeting specific types of algorithms and AI, such as a law to regulate recommendation algorithms and a piece of legislation targeting deep synthesis technology used to generate deepfakes.
    • China’s AI regulation authority has also created a registry or database of algorithms to promote transparency and understand how algorithms function.

Beyond the Editorial

Steps that India can take to regulate AI:

  • Develop a comprehensive national AI strategy: India can start by developing a comprehensive national AI strategy that takes into account the economic, social, and ethical implications of AI. This strategy should involve all stakeholders, including government, industry, academia, and civil society.
  • Establish an AI regulatory body: India can create a dedicated AI regulatory body to oversee the development and deployment of AI technologies. This body should be responsible for setting standards, monitoring compliance, and enforcing regulations related to AI.
  • Encourage responsible AI development: India can encourage responsible AI development by promoting ethical principles and best practices. This can be achieved through the development of codes of conduct and certification programs for AI developers.
  • Foster AI research and development: India can foster AI research and development by investing in AI education and training, supporting AI startups, and providing funding for AI research.
  • Collaborate with other countries: India can collaborate with other countries to develop international standards and guidelines for AI. This can help to ensure that AI technologies are developed and deployed in a responsible and ethical manner.
