
AI will not replace you, but someone using AI will | Mr. Raul John Aju | TEDxMarwadi University
Raul John Aju, known as the "AI kid of India," argues that AI is growing faster than any previous technology, reaching a billion users in months rather than years. This pace of innovation, however, outstrips governance, creating a critical gap in which public trust is either built or lost. He illustrates the point with an incident from seven or eight years ago, before AI was mainstream, when a deepfake video of actor Aamir Khan appearing to endorse a political party went viral and was widely believed. Now that AI is cheap, accessible, and pervasive, he contends, the need for regulation is urgent.
India's role is crucial in this scenario. As one of the world's fastest-growing digital economies with a large young population and developer community, India is a hub of AI innovation across sectors like healthcare, law, logistics, agriculture, and defense. The question isn't whether India will adopt AI, but how it will build AI systems that scale across society while ensuring proper control and ethical use. This is essential for maintaining trust in markets and institutions.
AI is no longer just a tool; it is becoming fundamental infrastructure that influences critical decisions across the economy, from hiring and loan approvals to flagging illegal activity. Companies like Amazon and HP use AI to drastically cut hiring times, meaning algorithms are already making life-altering decisions for individuals. Regulating AI is therefore essential to ensure fairness and prevent misuse.
One major concern is algorithmic bias in hiring and lending. AI systems, trained on historical data, can perpetuate existing biases. For instance, if past engineering roles were predominantly filled by men, AI might disproportionately favor male candidates, denying equal opportunities based on gender, race, or other irrelevant factors. Similarly, banks use AI for loan approvals, considering not just credit scores but also bill payment histories, which could introduce biases. Given India's vast diversity, the risk of AI bias towards certain demographics or languages is even higher.
To address this, two key measures are proposed. First, a "right to explanation": if an individual is rejected for a job or denied a loan by an AI, they should receive a clear explanation of the decision, just as they would from a human. This transparency makes it possible to identify and correct discriminatory criteria. Second, keeping a "human in the loop" is crucial: companies should not rely solely on AI for critical decisions like hiring, and human oversight is needed to catch unintended consequences. Regular algorithm audits are also essential to verify compliance with ethical guidelines and to ensure models are not penalizing applicants based on protected characteristics inherited from historical data.
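One simple check such an audit might include, sketched here as an illustration rather than anything proposed in the talk, is the "four-fifths rule" for disparate impact: compare selection rates between two demographic groups and flag the model if the lower rate falls below 80% of the higher one. All names and data below are hypothetical.

```python
# Hypothetical algorithm-audit sketch: the "four-fifths rule"
# check for disparate impact in automated hiring decisions.

def selection_rate(decisions):
    """Fraction of candidates the model selected (True = shortlisted)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are commonly treated as evidence of
    adverse impact under the four-fifths rule."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    lower, higher = min(rate_a, rate_b), max(rate_a, rate_b)
    return lower / higher if higher > 0 else 1.0

# Illustrative data: the model shortlists 8 of 10 male applicants
# but only 4 of 10 female applicants.
male = [True] * 8 + [False] * 2
female = [True] * 4 + [False] * 6
ratio = disparate_impact_ratio(male, female)
print(round(ratio, 2))  # 0.5 -> well below 0.8, flag for human review
```

A real audit would use far richer statistics, but even this crude ratio shows how bias inherited from historical data can be surfaced automatically rather than discovered only after harm is done.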
Another significant problem is deepfakes and information integrity. The ease with which realistic fake videos, images, and voices can be generated poses a threat to elections, market manipulation, and individual reputations. Impersonating leaders or fabricating incidents could have severe societal consequences. To counter this, authenticity and traceability are key. Mandating labeling for all AI-generated media is proposed. This could involve embedding metadata into AI-generated content, similar to how dates and times are recorded for photographs, indicating its AI origin. Alternatively, watermarks could be used, though they are more easily removed. Platform responsibility is also vital, requiring social media companies to clearly label AI-generated content to help users differentiate between real and fake information.
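The metadata idea above can be sketched in a few lines. This is a loose, standard-library-only illustration inspired by content-credentials schemes such as C2PA, not the speaker's proposal: the provenance record binds an "AI-generated" label to a hash of the file, so stripping or editing the media breaks the match. The generator name and byte content are hypothetical.

```python
# Minimal sketch of a provenance record for AI-generated media,
# loosely inspired by content-credentials schemes like C2PA.
import hashlib
import json

def make_provenance(media_bytes, generator="example-model-v1"):
    """Build a record binding an AI-generated label to the file's hash.
    'example-model-v1' is an illustrative placeholder name."""
    return {
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }

def verify(media_bytes, record):
    """True if the media still matches the record it was labeled with."""
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]

fake_image = b"stand-in bytes for a generated image"
record = make_provenance(fake_image)
print(json.dumps(record, indent=2))

print(verify(fake_image, record))            # True: label intact
print(verify(fake_image + b"!", record))     # False: media was altered
```

Real provenance standards add cryptographic signatures and survive common edits; the point here is only that a hash-bound label is far harder to quietly remove than a visible watermark.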
Data privacy in the age of AI is a growing concern. AI models require massive amounts of personal data for training. Companies like Google and OpenAI actively seek data, even collaborating with platforms like Reddit. Users often unknowingly contribute to AI training through trends and online activities. A more alarming issue is the ability of AI models to infer information users have never explicitly shared, or even accidentally leak sensitive data. An incident where an Aadhaar card was leaked through an AI system highlights the severity of this risk. While India's Digital Personal Data Protection Act of 2023 is a positive step, it needs to be adapted to specifically address AI's data collection and usage. Transparency in training data sources and obtaining user consent for data usage are critical.
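One concrete mitigation for leaks like the one described, sketched here as an illustration and not as the talk's proposal, is scrubbing identifier-shaped strings from text before it enters a training corpus. The pattern below targets Aadhaar-like 12-digit numbers and is deliberately simplistic; a production system would need far broader PII detection.

```python
# Hedged sketch: redact Aadhaar-like 12-digit identifiers from text
# before it is used as AI training data. The pattern is illustrative
# and would miss many real-world formats.
import re

AADHAAR_LIKE = re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}\b")

def redact(text):
    """Replace any Aadhaar-like number with a placeholder token."""
    return AADHAAR_LIKE.sub("[REDACTED-ID]", text)

print(redact("My number is 1234 5678 9012, call me."))
# My number is [REDACTED-ID], call me.
```

Filtering at ingestion time is cheap compared with trying to make a trained model "forget" sensitive data later, which is part of why transparency about training sources matters.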
Finally, regarding AI and the future of work, AI is accelerating across all industries, from customer support and data analysis to content creation and logistics. India's large young workforce can leverage this by embracing AI. The key is to understand that AI won't necessarily take jobs, but someone using AI will. Therefore, AI education and widespread adoption of AI skills are crucial for individuals to remain competitive.
In conclusion, the presentation emphasizes three core principles: protecting basic rights to privacy, ensuring fair and ethical use of AI in daily life (like job opportunities and loans), and using AI responsibly. The focus is not on restricting innovation but on guiding its application to build a trustworthy and equitable future.