
Editorial of the Day: Crafting safe Generative AI systems (The Hindu)


Context: The article discusses the emerging revolution in Generative AI, focusing on Large Language Models (LLMs) such as ChatGPT and their potential impact on society, including the economy and access to information. It cites estimates that LLMs could add between $2.6 trillion and $4.4 trillion in annual value to the global economy. While the article acknowledges that regulation is important for managing the risks of this technology, it argues that regulation alone is not sufficient and proposes a broader approach that engineers safety into these systems and accounts for their wider impact on society.

Background

About Generative AI

  • It refers to a category of artificial intelligence (AI) algorithms that generate new outputs based on the data they have been trained on.
  • Unlike traditional AI systems that are designed to recognize patterns and make predictions, generative AI creates new content in the form of images, text, audio, and more.
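To make the definition concrete, here is a minimal sketch of text generation with an open-source model. The Hugging Face transformers library and the small gpt2 model are illustrative choices not mentioned in the article; any comparable generative model would demonstrate the same idea.

```python
# Minimal sketch of generative AI: given a prompt, the model produces
# new text based on patterns learned from its training data.
from transformers import pipeline

# Load a small, publicly available text-generation model (illustrative choice).
generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI could change access to information by"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)

print(result[0]["generated_text"])
```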

Example of Generative AI

  • ChatGPT: ChatGPT is an AI-powered chatbot application built on OpenAI’s GPT-3.5.
    • OpenAI has provided a way to interact and fine-tune text responses via a chat interface with interactive feedback.
  • Dall-E: Dall-E is an example of a multimodal AI application that generates images from text prompts, having been trained to identify connections across media such as vision and text.
  • Google Bard: Google Bard is a chatbot tool designed to simulate conversation with a human, using a combination of natural language processing and machine learning to provide realistic, helpful responses to questions.
    • It uses LaMDA (Language Model for Dialogue Applications) technology.
    • It is built on top of Google’s Transformer neural network architecture, which was also the basis for other generative AI tools, such as the GPT models behind ChatGPT.


Decoding the Editorial

Case Study:

While discussing the impact of Large Language Models (LLMs) on the economy and society, the article cites a specific example.

  • This involves the pilot of the Jugalbandi Chatbot in rural India, which is powered by ChatGPT.
  • The Jugalbandi Chatbot aims to function as a universal translator: users ask questions in their local languages, answers are retrieved from English-language sources, and the information is presented back to users in their native language (a minimal sketch of this pipeline follows the list below).
  • This service has the potential to democratise access to information and positively impact the economic well-being of millions of people.
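The translate–retrieve–translate flow described above can be sketched as a simple pipeline. The translate and retrieve_answer functions below are hypothetical placeholders for a machine-translation service and an English-language question-answering backend; the article does not describe Jugalbandi’s actual implementation.

```python
# Hypothetical sketch of a Jugalbandi-style pipeline:
# local-language question -> English query -> English answer -> local-language answer.
# `translate` and `retrieve_answer` are placeholders, not a real API.

def translate(text: str, source: str, target: str) -> str:
    """Placeholder for a machine-translation service."""
    raise NotImplementedError

def retrieve_answer(query_en: str) -> str:
    """Placeholder for retrieval from English-language sources (e.g. an LLM with search)."""
    raise NotImplementedError

def answer_in_local_language(question: str, lang: str) -> str:
    query_en = translate(question, source=lang, target="en")   # step 1: to English
    answer_en = retrieve_answer(query_en)                      # step 2: fetch the answer
    return translate(answer_en, source="en", target=lang)      # step 3: back to the user's language
```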

Concerns and Risks associated with AI:

The article highlights several concerns and risks associated with the AI revolution, particularly with the use of Generative AI.

  • The main concerns revolve around the potential misuse of AI-powered tools by malicious actors, leading to harms such as misinformation and disinformation, security breaches, fraud, and hate speech.
  • Risks of Misuse: AI-powered tools can create artificial entities that are virtually indistinguishable from humans in speech, text, and video. Bad actors can deploy these entities to perpetrate harmful activities online.
    • Examples of Misuse: An AI-generated image of the Pentagon burning caused disruptions in equity markets; fake social media accounts contribute to polarised politics; AI-generated voices have bypassed bank voice authentication; an alleged suicide has been linked to conversations with an AI chatbot; and AI-generated deepfakes are affecting elections.
  • Growing Risk: With upcoming elections in various countries, the risk of bad actors utilising Generative AI for misinformation and election influence is increasing.

Increasing Accountability through holistic approach:

  • The risks outlined above underscore the urgent need for accountability mechanisms that go beyond regulation alone.
  • Policy Focus: The article mentions that a common regulatory proposal is to require digital assistants (bots) to self-identify as such and to criminalise fake media. While these measures might create some level of accountability, they may not completely solve the problem as bad actors could ignore these rules and exploit the trust generated by compliant entities.
  • Conservative Assurance Paradigm: The author suggests a more conservative assurance paradigm, which assumes that all digital entities are potentially AI bots or fraudulent entities unless proven otherwise. This approach would involve a higher level of scepticism and scrutiny towards digital interactions to ensure safety.
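As a rough illustration of this "untrusted until proven otherwise" posture, the sketch below inverts the usual default: every counterparty is treated as unverified unless it presents a credential that checks out. The Credential type and verify_credential routine are hypothetical stand-ins for whatever identity-assurance mechanism a platform adopts; the author does not prescribe a specific implementation.

```python
# Minimal sketch of the conservative assurance paradigm:
# default-deny, with trust granted only after verification.
# `Credential` and `verify_credential` are hypothetical placeholders.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Trust(Enum):
    UNVERIFIED = "unverified"        # the default for every digital entity
    VERIFIED_HUMAN = "verified_human"
    VERIFIED_BOT = "verified_bot"

@dataclass
class Credential:
    subject: str
    kind: str          # e.g. "human" or "bot"
    signature: bytes   # issued by a trusted identity provider

def verify_credential(cred: Credential) -> bool:
    """Placeholder: check the issuer's signature against trusted keys."""
    raise NotImplementedError

def classify(entity_credential: Optional[Credential]) -> Trust:
    # Assume every entity is a potential AI bot or fraudulent actor until proven otherwise.
    if entity_credential is None or not verify_credential(entity_credential):
        return Trust.UNVERIFIED
    return Trust.VERIFIED_BOT if entity_credential.kind == "bot" else Trust.VERIFIED_HUMAN
```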

Identity Assurance Framework:

The author suggests the implementation of an identity assurance framework as a broader approach to improving internet safety and integrity.

  • This framework aims to establish trust between entities interacting online by verifying the authenticity of their identities. This applies to humans, bots, and businesses.

Key Principles:

  • The proposed identity assurance framework should be open to various emerging credential types from around the world, not tied to any specific technology or standard, and should prioritise privacy protections.
    • Digital wallets are highlighted as important tools within this framework, enabling selective disclosure of information and protection against surveillance.
  • Global Initiatives: Over 50 countries are already working on initiatives to develop or issue digital identity credentials, which would serve as the foundation for the proposed identity assurance framework. India’s Aadhaar system is noted as a leader in this area, and the EU is also in the process of establishing a new identity standard to support online identity assurance.
  • Information Integrity: This involves ensuring the authenticity of content by verifying its source and integrity. It is a crucial aspect of online trust, resting on source validation, content integrity, and information validity (a minimal sketch of content-integrity checking follows this list).
  • Global Leadership and Responsibility: The author emphasises that global leaders have a responsibility to ensure the secure deployment of Generative AI. This requires reimagining safety assurance paradigms and building trust frameworks that encompass global identity assurance and information integrity. This responsibility goes beyond regulation and involves engineering online safety.
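To make the content-integrity pillar concrete, here is a minimal sketch of signing and verifying content with a digital signature, using the Python cryptography library. This is an illustrative example of the general technique, not a mechanism the editorial specifies.

```python
# Minimal sketch of content integrity: a publisher signs content,
# and anyone with the publisher's public key can verify that the
# content is authentic and unmodified. Illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a keypair and sign the content.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"Official statement: no incident occurred at the Pentagon today."
signature = private_key.sign(content)

# Consumer side: verify source and integrity before trusting the content.
try:
    public_key.verify(signature, content)   # raises InvalidSignature on tampering
    print("Content verified: authentic and unmodified.")
except InvalidSignature:
    print("Verification failed: content may be forged or altered.")
```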

