Anubha Gupta: A very warm good afternoon to everyone. Before we proceed, I kindly request you to put your phones on silent mode. Thank you. On behalf of the Indian Council of World Affairs, it is my pleasure to welcome you all to today's panel discussion on The AITrix: Discourses and Counter-Discourses on AI in Global Politics. Artificial intelligence is suddenly everywhere, and no conversation about education, health, economics, or global politics happens without AI.
In light of this, today's discussion seeks to critically analyze the role of AI in global politics. The order of events is as follows. We shall begin the program with the Acting Director General and Additional Secretary, ICWA, Ms. Nutan Kapoor Mahawar, delivering her welcome remarks. The panel discussion will be chaired by Mr. Ajey Lele, Deputy Director General, Manohar Parrikar Institute for Defence Studies and Analyses. Joining us for the discussion are four distinguished panelists. Dr. Soham Das is an Assistant Professor at the Jindal School of International Affairs. Prior to this, he was a post-doctoral researcher at the University of Texas, engaging in research on peacebuilding and collaborating with DARPA on AI/ML projects.
Dr. Anulekha Nandi is a Fellow at the Centre for Security, Strategy & Technology, Observer Research Foundation. Her primary areas of research include digital innovation management and governance, focusing on AI, emerging tech, and digital infrastructures. Joining us online is Mr. James Shire. He is a Co-Director of Virtual Routes, formerly the European Cyber Conflict Research Initiative, based in London, and he is a managing editor of Binding Group. Previously, he was an Assistant Professor at the Institute of Security and Global Affairs at the University of Leiden.
Mr. Aditya Sharma is the Chief Technology Officer at Plaxonic Technologies, a global IT services, consulting, and business solutions company specializing in digital transformation and innovative solutions. The discussion will be followed by a brief Q&A session moderated by the chair.
With that may I invite Ms. Nutan Kapoor Mahawar, Acting Director General and Additional Secretary ICWA to kindly deliver her welcome remarks.
Nutan Kapoor Mahawar: Distinguished experts, members of the Diplomatic Corps, students and friends. In the movie The Matrix, Neo asks Morpheus, what is the Matrix? To which Morpheus responds, the Matrix is everywhere, a computer-generated dream world. The title of our panel discussion today, AITrix, draws its inspiration from the title of that movie, which many of you might have seen. And the word matrix itself, as you see, is derived from the Latin word for mother, womb, or simply the source.
With the introduction of generative AI like ChatGPT and other advanced AI tech, the world finds itself echoing Neo's question: what is AI? Is it just another technology, or does it have deeper, more profound implications for the man-versus-machine question, for global politics, for our lives, and for mankind? And this, in effect, is the mother problematic facing our world today.
Let me begin with a few real events and stories where we see AI being incorporated into global politics in real time. Episode one, Nepal. You have all been seeing the news recently. The Gen Z rebels, hunkered down in a library in Kathmandu with mobile phones and computers, used AI platforms such as ChatGPT, DeepSeek, and Grok to make 50 social media clips about nepo kids and corruption. In the days that followed, TikTok was used to share the content massively. These AI-powered social media platforms, with the help of advanced algorithms, amplified the videos and hashtags, which were fed to users as hyper-targeted content, making them viral and ultimately fueling the widespread protests in Nepal. The episode captures the singular role of AI in reshaping national security, governance, marginal voices, and democracies, with larger implications for global politics.
Episode two, the Ukraine war. Rhombus Power, a company that provides real-time AI-driven security analysis, predicted Russia's invasion nearly four months in advance, pinpointing the start of the war by late January. The team used AI to analyze vast online and satellite data, track missile movements and business activity, and build real-time heat maps, offering a quicker, different view than traditional foreign policy circles. Separately, Ukraine is using AI to integrate target and object recognition with satellite imagery, geolocating and analyzing open-source data, such as social media content, to identify Russian soldiers, weapons systems, units, or their movements.
Episode three. AI uses data to learn and make decisions, and that data has been produced by humans over many years. Thus, human bias and discrimination are built into AI. For example, when an ICWA scholar asked DeepSeek and ChatGPT whether Arunachal Pradesh is part of India, ChatGPT replied yes. But DeepSeek, a Chinese AI chatbot, said, "I'm not sure how to approach this type of question yet," implying censoring of information. This is an example of geopolitical bias in AI. These are a few episodes, and there are many more with every passing day, reflecting the real-time integration of AI with global politics, the theme of our panel discussion today.
The dominant discourse around AI is driven by the cold calculations of realpolitik, where AI is viewed as a tool by states to enhance their capabilities to wage war, predict war, and ensure national security. There are efforts to deploy it also for public goods, such as translation services. The dominant discourses are, however, yet to paint the true picture, revealing what the real stories and lives tell.
In the recent 80th session of the UN General Assembly, AI was a central issue of discussion. Member states voiced both optimism and concern, from calls for ethical, human-centered governance of AI and stronger safeguards for peace and security, to warnings about disinformation, repression, and widening digital divides.
Deputy Prime Minister of the UK David Lammy's remark that “AI can empower freedom, or it can entrench oppression; AI can empower truths, or it can entrench lies. AI can empower law, or it can empower crime”, or the UN Secretary General's remark that “AI is rewriting human existence in real time”, aptly sum up the sentiment of the discussion. Regarding India's approach to AI, the External Affairs Minister, Dr. S. Jaishankar, emphasized ethical, inclusive AI, respect for sovereignty, data governance, multilateralism on AI, and, most importantly, India's push for the advancement of AI for the development needs of the Global South.
The UN Security Council too held a high-level debate on AI, calling for strong global cooperation and governance, without which AI could deepen divides, destabilize societies, and reshape warfare in dangerous ways, while recognizing the benefits it offers.
Friends, in the debate on AI, we must also remind ourselves that technology is not all that modern. It is quite inbuilt into creation. In the way humans are created, it is humans who are the most efficient, most effective, most intelligent computers in all of creation.
The desktops that sit on our tables, the laptops that sit on our laps, the high-performance machines of the most powerful computer labs in the world are a mere manifestation of human computing, mental, emotional, sensory, and communication skills. And that too, a very poor imitation. It is this human intelligence that cannot be permitted to be trivialized in the debate on AI. Our desktops, the algorithms which they run, the maths of it all, are meant to assist us, not take over us: our actions, our thoughts, our words, our instincts, our interactions with each other.
In short, not meant to take over the autonomy of being human. Because if it does, then AI becomes a tool for oppression of humans by humans, of humans by machine. And we will all be reduced to machine men and machine women – to borrow Charlie Chaplin's words from the climax scene of ‘The Great Dictator’ – and humanity's doom would be certain.
So if AI is going to have an impact on global politics, and as a result our lives, surely human control over AI is essential and a topmost priority. Most certainly, guidelines are needed on where, how, and when AI can be used.
For this, governance is essential: through norm-setting, dialogue, shaping understandings, rule-making, promulgating and implementing laws as necessary, and through monitoring, so that a consciousness towards this end takes shape and spreads, encouraging good and responsible behavior by one and all. All technology, at the end of the day, has to be at the service of mankind and for the good of it.
I look forward to a thought provoking discussion and I wish the panelists all the best. Thank you.
Ajey Lele: Thank you. Good afternoon to you all. Thank you so much, ma'am, for inviting me. My apologies for being late today. There was a Prime Minister's movement, so every road coming towards ICWA was blocked. You have already said whatever has to be said, and since we are very late, I will not spend much time on my initial remarks on this subject. But one or two things come to mind. Yesterday we all heard what Donald Trump said: this is the eighth war he has stopped. He has developed a habit of taking credit for many things. A couple of months back, he took credit for increasing the price of Bitcoin; and of the two astronauts who were in space, he said that because of him they got back. So many things, Op Sindoor and all that. In spite of that, he was not able to get the Nobel.
So one thing that came to my mind was that in the year 2015, the movement towards OpenAI, one can say, started: Altman and Elon Musk came out with their own conceptualization of those ideas. Then Trump 1.0 happened. During the Biden administration, in November 2022, the AI which was, one can say, in laboratories or in the minds of people actually made an entry into the public mind, because that is when ChatGPT came into being, and possibly Trump failed to take credit for that. If he had taken credit for that, maybe he could have asked for a Nobel in Physics also.
So, well, having said that, if you really look at it, the journey of AI, in the normal context one can say, is not that old. Yet AI has been around for pretty long; a lot of historical facts identify that not only in the present century but in the past century also there was something akin to AI. It was not called artificial intelligence; people used to talk of ambient intelligence and so on and so forth. So we will not get into the definitions of it. But today, since AI is here and we are all experiencing its strength, it becomes extremely important to look at AI at different levels.
I think today's seminar is very important from that perspective, and the panelists will take a view at various levels: you have somebody from a university, somebody from a think tank, somebody from industry, and so on. So it is an interesting mix of a panel. As far as my limited understanding of AI is concerned, AI has become a very important element not only as a technology but as a tool for geopolitics. And the moment a technology becomes a tool for geopolitics, more than the strength of the technology, other interests come into play, particularly, as you rightly pointed out, in the debate and discourse now happening about the usage of AI in the military domain. At first it was thought that you would only use AI for, say, training purposes or logistical purposes, but now the strength of LLM models has become so good that people have actually started thinking of AI as a tool for war fighting in combat also. That has given rise to a parallel debate and discourse on what is known as LAWS, or lethal autonomous weapon systems. So today, for the majority of discourse at the global level, there is what happened in Paris, and what may happen in India in the coming year, one is not very sure about. But in Paris, definitely, the big powers, the US and the UK, opposed what many people proposed: that you should have a certain amount of rule-based mechanism, rule-based administration.
So the moment the debate shifted towards lethal autonomous weapon systems, people understood the strength of AI in them. Particularly, you can see the missile defense systems which have been used both in the Ukraine theater and in Gaza, where Israel has used them very effectively. These are all equipment where the human in the loop is no longer there, and that is why the entire debate is happening: whether the human should be in the loop, outside the loop, or whether you should have totally zero human control over modern-day weapon systems. And that is where the geopolitics starts coming in.
The moment geopolitics comes in, people again look at AI to give them the answers. For whatever answers they are looking for, you just go back to the data, start doing data crunching, and then you see something similar happening to what we have seen in the nuclear domain. So now people are asking whether countries like the US, China, and some states in the EU are getting into a so-called arms race associated with AI. Again, debates and discourses have started happening in disarmament circles. People are saying that this is an era where entire wars will not be fought by human beings; in the coming years, warfare will be fought by machines only, and there AI will have total dominance.
There is another view, that AI is not only about LLMs; there is something more to it, and one has to accept that reality also. But to my limited understanding, right now everybody is looking at various options for what sort of position they should take. If you see the present narratives, is there a requirement for counter-narratives also? As you rightly pointed out, you need to have a counter-narrative, which should emerge from the Global South as far as AI is concerned, because the moment you bracket AI only into the domain of the armed forces and say that this is going to be a system for warfare, the choice becomes either total controls on it or no controls at all.
So if the debate gets trapped in that, there is going to be an issue, because AI has a tremendous amount of utility for societal development. I think that is extremely important, and that is why the Global South has to come out with its own debate on these issues as a counter-narrative to the things which are happening. There are various other important issues also, but I do not want to spend much time on these, and I will just ask the first panelist, Dr. Soham Das, to give his remarks.
Soham Das: Hello, thank you so much for inviting me, and thank you for your remarks, Dr. Lele. I will begin where sir left off, which is AI's role in societal development. The way I look at AI, it is more like a consciousness; synthetic consciousness, but consciousness nevertheless. The problem begins when we try to use it, and with the way we use it, whether a commercial AI, or drones, or any technology. The way it is used sometimes leads to questions about whether it creates difficulties in the national security domain or not.
The second thing is what AI is. AI is technically information, our information, in a coded data format on the internet. And once the information is placed there, in that coded data platform, it can be used for various sorts of predictive analysis, commercial behavioral analysis, political behavior analysis, electoral behavior analysis, influence campaigns, polarization, and targeting. Every aspect can be negative. At the same time, the same AI has a huge enabling effect, because right now we have cheap LLM models, AI models, available to individuals beyond state actors and non-state actors.
So for the first time in the national security, or generic security, domain, we are facing a situation where, in my opinion, we are dealing with individuals in remote parts of any country who have access to technology, and the ways it can be used differ. This is something that we, as a society, as individuals, as scholars, as practitioners, need to look at and consider: how this will shape the future of warfare as well as governance. One way AI affects this is misinformation; misinformation and deepfakes.
So the threats these lead to go beyond the traditional security threats; they extend to non-traditional security threats, in my opinion. Traditionally, we have seen the security and military state apparatus focus more on the traditional aspects: a visible actor or a visible threat. But right now there has to be a change in the thought process about what a threat is. And this is not about making a society or a country militarized or warmongering in nature; it is just about analyzing or understanding what the threat is and how to mitigate that particular threat.
State capacity being challenged is another aspect of AI, because traditional state capacity, the way we look at security, is now going to get challenged; it is getting challenged. Take misinformation campaigns: I was reading this particular report about the Philippines election, about the way hashtags were created, APIs set up, and tweeting accounts launched right during the electoral season. And I worked with this particular group of organizers; they are part of an organization called Think5. I was working with them, and they found that these tweeting accounts, these Instagram accounts, surfaced during the election.
A similar thing happened during Bangladesh: a lot of these accounts suddenly propped up and tried to channel the narrative in a certain way. It is very much seen in the case of the Israel-Palestine conflict as well. I am not suggesting what is right or wrong; I am no one to do that. But we are just seeing that this is happening, whether narrative campaigns or drones. A payload attached to a 13-inch drone, which is one of the big ones (they started at seven), can fly in and destabilize a traditional military setup. That is warfare on a physical level.
On the level of narrative campaigning, a hashtag propping up 582 times during an election can obviously affect the way people think, work, or perceive. So these are the new situations which are coming in, and they can have a deep impact on the way the security apparatus works. I will not take more time; I will just focus on two more things. One is the North-South dimension. There is the GDPR in Europe, the General Data Protection Regulation. It is setting a global standard, as we see.
That is great. However, it also raises a question: if a global standard is set, where does that leave us in the Global South? Because data infrastructure requires a lot of investment. Maybe a country like India would be able to do it, but what about the other Global South countries? And if there is a horizontal inequality in data infrastructure, does that lead us to data colonialism in the future? Because already some facial recognition systems read European, Caucasian faces better and cannot really work with our ethnic differences. So a lot of work has to go into those aspects; a lot of technological advancement has to happen.
So this brings me to the last topic I want to touch upon very briefly, which is training. Because AI is itself a very democratic setup and reaches individual levels, training in AI has to be well entrenched. It cannot remain on the high roads of the developed cities of our country, like Delhi and Bombay. It cannot be confined to these quarters; it has to be deeply entrenched, because differential modernization leads to grievances. That is a well-established theory in the conflict studies literature: Donald Horowitz talking about it, and grievances leading to mobilization, Harfinger's ten years of work. We have seen that happen across societies and countries. We are perhaps going to face a situation where there is differential modernization in terms of AI knowledge and capability.
And if that happens, we might face a situation decades down the line where differential modernization leads to grievances and mobilization in a stage or a space which we have not seen before. So, to ensure that does not happen, training has to happen; the ethical usage of AI has to be taught, as well as learning about AI and making data models. At the same time, harnessing data in our own country and not depending on another Global North country is, I think, the most important step forward. Thank you, sir.
Ajey Lele: Thank you so much. In a very short span of time, you really highlighted one or two very important issues; the societal issues in particular are definitely going to come forward. Already there has been chatter about loss of jobs and all that, a pattern similar to the one visible when computers made inroads. But right now there is a good amount of concern among people that there could be job losses because of AI, and not only in India; it will happen globally also.
But the other side of the story is that people are now saying that not my generation, but Gen Z, has an additional responsibility. We got trained in a particular thing and earned our livelihood based on that training. But the present generation has to train and retrain itself in different skill sets, and AI has become that sort of skill set. So if you train yourself in the new skill sets, then there are definitely more opportunities instead of job losses. These types of ideas are definitely there.
Then there is the issue you raised about data; that is an extremely important issue. You all know the cliched argument that data is the new oil. Actually, data has now become exactly that sort of thing, and there are a lot of issues happening around the world as far as data is concerned. Particularly, as you mentioned, in the security domain data again becomes important. It is not the data of an individual system; it is data from all over, which you will require to train your algorithm.
Because it is very easy to say that AI will change the entire domain of security systems. But if you do not have a sufficient amount of data, if no data is shared by the security organizations with the actual makers of the algorithms, then it is going to be a challenge. So I think there are a lot of interesting things to take forward; we will deal with some of them during the question-and-answer session.
I will move to the next participant, Dr. Nandi.
Anulekha Nandi: Hello, and thank you very much, sir. Thank you to ICWA for having me here today, and thank you, sir, for setting the context, and to my co-panelist for the previous remarks. Building on that: essentially, AI is a suite of capabilities that confers certain opportunities and risks. And in terms of balancing those opportunities and risks, if we look at some of the global governance initiatives and regimes, we find that over time there has been a convergence in terms of principles but a fragmentation when it comes to practice. No one is saying that we do not need transparency, accountability, robustness, access, equitability, or AI for economic development.
Even the US AI Action Plan, which is very strong on its pro-innovation agenda, contains requirements for robustness and equitability. And as we see, over time there has been a sort of ethics boom, if we may call it that. In 2023, there was a study of almost 200 ethical guidelines and AI governance policies by academic institutions, private companies, and civil society organizations.
Between 2015 and 2020, 117 sets of principles came into being from various different organizations. And the study essentially found that the common themes within those principles were transparency, equity, accountability, autonomy, explainability, privacy, and responsibility. However, the challenge that persists is one of translating these high-level principles, of encoding them in low-level technical and organizational measures.
So basically, we know what we have to do; we are yet to figure out how to do it. How do we ensure the right balance between innovation and risks? And how do we operationalize those principles, particularly as this challenge now comes to be enmeshed within wider geopolitical realities?
And within this complex unfolding, we currently find four different regimes in terms of how countries and states are approaching the global governance of AI. On one hand, we have the United States, which has a complete, if we may say, monopoly over the whole AI stack, whether it be data, models, and most essentially compute. And the recent administration's thrust has been more on innovation and deregulation, deregulation across the board, even of environmental regulations for establishing more data centers.
However, if we look at the AI Action Plan, we find that there is a huge thrust on open-source AI, essentially a nod to the phenomenal success and penetration of China's DeepSeek, particularly as even US hyperscalers like Microsoft and AWS have included DeepSeek as part of their suite of offerings.
So on one hand we have the US's pro-innovation, deregulatory agenda, light on governance. On the other end of the spectrum, we have China; both countries released these plans within the space of four days in July 2025. China calls on the world to focus more on AI governance and to leverage multilateralism, using governance in many ways as a strategic tool and leveraging its existing relationships within the BRI and the Digital Silk Road initiatives. And this is important because the governance of AI is essentially a value-laden exercise.
It determines how the technologies are developed and the implications they will have as they come to be used downstream. And this is important because the recent Politburo study session in China on AI focused entirely on extending technological cooperation with other Global South countries, setting the ground for the diffusion of social, economic, and institutional norms, particularly as China also gains ground in some of the international standard-setting bodies. This year also saw a kind of unified voice for the Global South in many ways, if we take the BRICS declaration on the governance of AI.
There we have themes of digital sovereignty, data governance, equitable access, and using AI to accelerate development outcomes as some of the key priority areas. However, as AI resources are essentially concentrated between the two superpowers of the US and China, to what extent the Global South will be able to articulate its voice or have its say in the global AI agenda remains to be seen.
And finally, we have the EU with its strong regulations, particularly the AI Act, which is part of a suite of wider AI initiatives. And as we have seen with the GDPR, because it is one of the strongest laws in force, it by definition diffuses: businesses and companies adhere to the strongest law in force so as not to default on others. So there is a form of regulatory diffusion there.
They have come up with a template of how to do AI regulation, taking a particular risk-based approach. Many aspects of the AI Act came into force in the past two months, particularly on transparency documentation, copyright disclosures, and voluntary codes of practice, which help providers reduce some of the administrative burden. So these four approaches give an overview of how geopolitical priorities come to be enmeshed within different ways of doing AI governance.
Of course, at the UN level we have multilateral initiatives like the recently launched scientific body as well as the global dialogue on AI. However, given that AI resources, particularly computing power, and the models that we now use are so concentrated in these two technopoles of the US and China, how do we make multilateralism work in a way that answers to the wider concerns and priorities of the Global South, India included?
If we look at the G20 New Delhi Declaration of 2023, India has adhered to a pro-innovation approach while keeping trust, safety, and responsibility center stage. So how do we marry these priorities, particularly as we have to keep relying on these two superpowers, both economically and technologically? The AI Impact Summit obviously offers a platform to do that, to foreground some of those concerns and agendas. But I think the tussle between the US and China has also exposed unique opportunities for middle powers to come together.
Countries which have not expansive but niche, substantive advantages when it comes to AI can come together, pool strategic resources, and define what AI looks like from an alternative perspective. So going forward, I think the AI Impact Summit offers an important platform to foreground these ideas at one level, while a coalition of middle powers also provides opportunities to develop and formulate an alternative vision of global cooperation in this domain. Thank you.
Ajey Lele: Thank you so much. I think global governance is the key issue in the entire debate happening on AI everywhere. You rightly pointed out BRICS, and more than that, I think with the G20 India made a very significant contribution. And that is the reason, I think, there is a view that when we talk about counter-narratives, India needs to take the lead as far as the Global South is concerned, where all countries need to come together and speak with one voice. There are a lot of issues related particularly to policy formulation. Surprisingly, China is usually the odd one out, but this time China is not the odd one out.
As far as Paris was concerned, it was only the US and the UK who were against any sort of rule-based system, one can say, but not China, because China has its own concerns. China knows that the technology has not developed or matured fully. They have also faced a certain amount of backlash from the technology, for instance over the use of facial recognition techniques and all that.
If I am not wrong, in Hong Kong, when there were demonstrations, people used to move around with umbrellas; the idea was to hide their faces so that facial recognition techniques could not be used. So China is pretty cautious about having a rule-based mechanism or any sort of global governance model, but right now they are keeping a very low profile, and that is a very interesting thing to look at.
Our third speaker is joining us online. Thank you so much.
Welcome to you.
James Shire: Thank you very much. It is a pleasure to be here among these distinguished panelists. I will also aim to keep my remarks short in the interest of time, and I am sorry I cannot be there with you in person. My name is James Shire. I am Co-Director of Virtual Routes, a UK-based nonprofit organization that does research and education in security and technology, focusing on AI and cybersecurity. Our focus is primarily on the UK and Europe. As our previous panelists have said, there is a lot going on in this area, but the questions of the impact of AI on geopolitics and on societies are global in scope, and so that will be my focus.
So I want to start with a summary of what especially generative AI is. And this comes from an international relations scholar called Henry Farrell. He summarized generative AI as technologies of cultural production. And essentially, AI can produce text, it can produce videos, it can produce images. These are ways in which we can speed up and scale cultural production, whether this is documents in organizations, social media interactions online, or fake videos going all around the world. So these are different kinds of cultural production.
And when we get to this understanding, it asks: well, what kinds of cultural production are being prioritized, being emphasized? And what kinds are being sidelined or ignored? And this leads us to a key aspect of generative AI, and that is its impact on inequality. And there are theorists who argue that it will exacerbate global inequalities across all layers of the AI stack. And we can go through those very briefly, one by one.
To start with, at the compute layer, we have both the production of the chips needed for AI and the development of data centers. These are major transformations in our economies. Europe is plowing vast amounts into building data centers, so is the US, so are the Gulf states, so are many places around the world. And these are changing not just the local economies, but also the consumption of resources globally. As AI scales, so will the resources consumed by these data centers, leading to potential inequalities and, in that sense, exacerbated climate change.
We then get to data, which Dr. Soham Das already covered very well, highlighting that the training data required for AI models clearly leads to inequalities in terms of how that data is treated. I'll draw here briefly on a distinction made by Dr. Joy Buolamwini, who writes a lot about AI and facial recognition, between the risks of inclusion and the risks of exclusion. On one hand, the risk of exclusion means that certain people, certain communities, don't have their data included in the training process for AI models, and therefore their culture, their needs, their requirements are not prioritized. And you can see this in questions of how far large language models work in different natural languages.
There are lots of minority languages around the world where people can't use AI to do machine translation, whereas in English or Chinese it works very well indeed because of the amount of material out there. That's a risk of exclusion. The risk of inclusion, on the other hand, is that once minority communities are included, with their data used as training data, you can increase surveillance of those communities and enact repressive laws against them, in many states around the world. So Dr. Buolamwini highlights these twin risks from different ways of using data for AI.
Then we get to the model weights, the API layer, and the marketization of AI. The main distinction between machine learning pre-ChatGPT and post-ChatGPT is, as a panelist suggested, the fact that it is now available at the individual level: we can all just open our phones or devices and interact with AI. This presents massive economic opportunities but also major risks of inequality, which I'll conclude with.
Before I do so, I want to say a brief word on the distinction between inequality between states and inequality between people. On inequality between states, which we've already heard quite a lot about on the panel, there is the advancement of different states, maybe China and the US in superpower competition; the role of the EU as a regulatory actor that is very far behind in trying to develop its own technological capacity; and the risk of excluding many states in the Global South from these developments. That's a clear geopolitical risk. But moving beyond the state-level analysis, there is also inequality between people at the global level.
One of my favorite studies of the impact of AI focuses on the gig economy. It focuses on algorithmic platforms that connect people, whether it's housing rentals, food delivery apps, or taxi-hailing apps. The workers on these platforms face very similar concerns all around the world. Which state they are in matters less to them than the platform they are interacting with. The inequality is served to them through that platform: what kinds of opportunities they have for an alternative way of making money, and what kinds of constraints these platforms exert on them, wherever they are in the world.
And so I'll conclude now by just pointing a little further into the future. We see these technologies of cultural production massively scaling our ability to learn with AI, to make media with AI, to do pretty much everything. And there might come a point at which we have saturation: the majority of content being produced by most people around the world is produced with the assistance of AI, or purely by AI. At that point, the value shifts massively. The value is really in human content, in these human interactions, while cheaper, more accessible AI content sits at a much lower level.
And that changes the inequality question. Actually, it is no longer inequality in terms of who has access to AI, which is how we're posing the question at the moment. The inequality question becomes: if AI is kind of the second best, then who has access to these more human interactions rather than their AI replacements? And that, I think, is a shift in the inequality question that we haven't yet got to in our conversations. I'll leave it there for now. Thank you very much for inviting me. I look forward to the rest of the discussion.
Ajey Lele: Thank you so much. I think it was a very cogent argument. You started with generative AI and identified the basic challenges with chip manufacturing. We all understand the issues related to supply chains. There are issues with rare earth elements also. So a lot of things are happening around the chips required for AI. Then the issue of data: I think the point you brought forward about inclusion and exclusion is very important. There are a lot of societal issues associated with it. So there is a lot of food for thought in your presentation.
Now let me shift to the last presenter, Mr. Aditya Sharma.
Aditya Sharma: Thank you, ICWA. Thank you, my co-panelists, for giving a brief overview of how AI is impacting society and AI governance. They have covered many things. I would like to share my perspective on how AI is shaping industry and how it is helping economic development. From my personal experience, I feel that AI is a productivity boost for people like me, and for many individuals who take it in a positive way.
For me, it is a general-purpose utility like electricity or the internet. Many people can't live without AI right now. So how is it shaping various industries? When we talk about economic development, it has a broad impact across all sectors. I can go one by one, starting from our key sectors like agriculture, MSMEs, and healthcare. In healthcare, earlier we used to talk about preventive care. I feel that now, as a country and from a technological point of view, we should talk about moving from preventive to predictive.
You might have heard about many cases wherein the Apple Watch has alerted people that their ECG pattern or heart rate was not right, and they went to the hospital immediately. It has saved a lot of lives. How is that possible? Because of the data on which they have trained their models. Just last month, I was giving a demo to a client overseas. To give some background, I come from an IT services and product company: we have our in-house products, and we also work on client projects.
So while we were giving a demo of a customer support module, they were unable to figure out whether they were talking to an AI agent or a human. That's how advanced AI has become. And it's not restricted to English; it can be in any language, whether Hindi, Tamil, Telugu, or Arabic. That demo was in Arabic, and you couldn't find a flaw there. That's how advanced AI has become, and it can be applied across different industries. When we talk about how economic development can be achieved for a country like ours, I personally feel, and many people would agree, that if India has to grow, our MSMEs and small industries have to grow as well.
Take high-precision engineering, where we produce high-quality products accurate to the nanometer. There, AI has a lot of influence. For example, with the help of IoT sensors, manufacturers can track each part and reduce defects in their products, and the CAD models they design, such as die casts, can be more precise. All those things are possible with AI right now. Similarly, in agriculture, especially in North India, we have a lot of farms where we can analyze different patterns to see how crop yields can be enhanced.
Many people are talking about the use of drones in agriculture, but how are those drones being trained? That's what we need to look at. And when we talk about different models or innovations, I would say we need to think from the positive side. When we talk about AI, many people are skeptical; even I was skeptical, although we were among the first few to start on AI projects, way back in 2018, when we implemented AI in dosing plants and public utilities. But with the advent of AI, especially after the pre-ChatGPT to post-ChatGPT shift that one of the panelists just mentioned, the question arose: will there be job losses?
I was skeptical about what shape the IT industry would take in the coming years. But after working with it closely, I realized we need to look at the positive side and see the opportunities. Companies need to reinvent and re-innovate, not just invent and innovate. Our formula should be: how can industries re-innovate and re-invent? Why? Because if we think we will survive this wave with our past skills, I don't think so. People need to upgrade their skills. They need to stick to the basics.
From an IT point of view, if somebody is not keeping up their algorithms and data structures, and is simply relying on coding patterns, they won't be able to survive this wave. I'm not sure how many people know about the different LLMs available right now. There's a term called vibe coding, where people use AI for programming and composing their code. But who will judge whether the code produced by the AI is right or wrong? For that, they need deeper knowledge.
For example, you can produce a small e-commerce engine, a smaller version of something like Amazon, easily within a couple of hours. And we did it ourselves in the initial days, just to see how it could be implemented. We used Anthropic's LLM, Claude, for that. The code quality, the algorithm, the approach, the flow: everything was pixel perfect, I would say, because there are different patterns you need to recognize. It was even able to explain how its approach differed from regular programming. The same work could not have been completed easily in a month's time otherwise.
Using a couple of resources, we did it in a couple of hours, just to build a demo. So that's how transformative it is. But now we need to see it from the other angle: obviously, there are going to be job losses, as a couple of panelists have mentioned. So how can we survive this wave? Remember, as another panelist mentioned, when computers were introduced, everybody was skeptical about whether there would be job losses. The same question is being raised again by many people. I would say there will be, but at the same time there will be further opportunities: we will require people in the agriculture sector who can use AI in drone deployment, in increasing the efficiency of our farmers, in enhancing yields.
Similarly, in the manufacturing sector and MSMEs, take fleets and transportation: the government is pushing the Golden Quadrilateral and the Delhi-Mumbai corridor. There, and it is already happening in many cases, companies are optimizing their supply chains, reducing their fuel costs, and reducing emissions. All of that can be achieved. Then, when we talk about AI governance and ethics, and what action the government needs to take if we are to implement AI in India at scale, I would say there have to be the right AI policies.
Obviously, when we talk about India, I would say we are already behind the US or China, as Dr. Nandi has mentioned. They have big models. A few people got offended two years back when OpenAI founder Sam Altman was here in India as part of a discussion. He simply said that it is difficult for a country like India to develop its own LLM like OpenAI's. Although many people got offended, and many took it as a challenge, to a certain extent I would agree: training an LLM requires a lot of resources, a lot of investment, and above all access to data. At this point in time, we have neither at that scale.
And at the same time, the ecosystem has to be supportive, wherein people back entrepreneurial startups making deep-tech products, because there are a lot of opportunities. Instead of just focusing on short-term gains, where investors make a 2x or 3x return, we need to be willing to wait and to accept that there may be failures. We need to be ready for that. Once we succeed in that implementation, there will be returns in the range of 10x or 20x. But for that, the whole ecosystem has to be broad-minded. Why do I mention deep tech? Because there are a lot of possibilities.
People are not talking enough about AIOps, about how AI servers can be managed. There is an opportunity there, because we need to divide the problem into different micro-levels instead of focusing only on creating an LLM. We need to see how the infrastructure can be built here. There are companies working on building robust data centers. Many people were criticizing when an indigenous chip was introduced recently, but somewhere we need to start. We need to take it in a positive way: now we are starting.
Obviously, once we start, we will reach our destination. We need a clear aim, wherein the broader problem is divided into sub-problems: there can be areas wherein companies focus clearly on building infrastructure, and at the same time on the implementation part, which is called inferencing, how data elements can be inferred to make contextual AI. Companies like Ola started with Krutrim. That's a good call. It was indigenous and, if many people don't know, it was available in different languages. At the same time, Zoho is also coming out with its own model.
So obviously, if we are starting somewhere, we will reach our destination. And yes, when we talk about how AI can be used in defense, many countries are trying to implement systems that track records of, say, the conversations happening around border areas. If you remember when people were using generative AI for creating different images, there was a notification in the public domain wherein many agencies reminded people not to upload their private photos, because once uploaded, they remain in those databases.
So obviously, you never know how those things can be utilized. Once that data is there, many companies are building systems that track the conversations happening in and around border areas, along with news items and local news, and use those patterns for tracking drug peddlers or arms smuggling. Those things are possibilities, and companies and countries are trying to implement them.
Yes, another concern we need to focus on: companies are saying a lot of investment will be required in data centers, but at the same time, for a country like ours, we need to focus on the energy consumption of the data centers that train AI models, because they consume a lot of energy, and on how they can be managed efficiently. In India, companies are focusing on coolants to reduce the energy consumption of those AI data centers. So that's one of the areas we should focus on.
Ajey Lele: Thank you so much. I think you made very important points, and the good part of your presentation was that there was a lot of positivity about how to make further progress: particularly on precision engineering, healthcare, and agriculture. Again, in the era of climate change, crop patterns are changing, so what you say is very correct: we require additional help, and AI could be one of the assistants here. Energy issues and other issues are very important too. I'll not dwell much on it, but the catch line from his presentation, I can say, is that now he is saying roti, kapda, makaan and AI.
Now we'll open the floor for the discussion. Any questions or comments, please keep them short; we've got around 15 minutes. We'll start with the back.
Unidentified Participant: Hello, good afternoon to the distinguished panel. I am Kotwani, currently pursuing a Political Science major at Kirori Mal College, University of Delhi. First of all, I would like to extend my heartfelt gratitude to the distinguished panel for sharing such wonderful insights with us. My question is particularly for Dr. Soham sir and the entire panel. Sir, you talked about the disparity in digital infrastructure and differential modernization. With AI now becoming central to national security strategies, how can middle powers like India navigate between AI superpowers such as the United States and China? Basically, what should the focus areas for India be in the context of AI from a security perspective?
Soham Das: As a middle power, I will take a cue from where Mr. Sharma left off: we have to be positive about this. The network models that have to be trained, the students and the entire industrial framework that have to be trained, we have already started on that. And at the same time, let's be honest with ourselves: we have a huge section of the populace trained in computers, who have been doing engineering. The quality of that training can be contested, but we have that youth, an entire youth trained in computers. We have been doing that.
So if that training shifts toward AI generation and network modeling, focusing on ourselves, I do not think it is going to be that big of an issue. The question is on the aspects of GDPR-style regulation and governance, data dependency, and infrastructure, where the government has to take a proactive role in building data infrastructure hubs in the country. That requires investment and energy. If that happens, I think we will not have that big a difficulty.
Ajey Lele: Basically, as he has pointed out, infrastructure is going to be the key. We know how to make the software; the problem is with the hardware. Somebody else? Yes, please, yourself.
Nandini Khandelwal: Good evening. Thank you for the insightful discussion. My name is Nandini Khandelwal, Research Analyst at ICWA. My question is for Dr. Nandi. When you talked about global governance at the multilateral level, beyond US, EU, or Chinese strategies towards AI governance, you talked about middle powers and how they can occupy a unique position while pooling their strategic resources. Can you shed light on whether there is a development with respect to the middle powers, what those middle powers are, and whether India has any role to play in the global governance of AI? Thank you.
Anulekha Nandi: Thank you for that question. And when I say middle powers, we see countries like UAE, particularly, who have a defined AI strategy direction, even an AI minister. And these countries have sovereign wealth funds. So that provides the financial asset. Countries like South Korea have a strong R&D ecosystem. Countries like India have their DPI as well as complementary innovation. We have something from IIT Madras, which enables LLMs to run on personal computers.
Now this is incredibly important for the Global South, which does not have vast IT or telecommunications infrastructure at the last mile. If we are to look at last-mile adoption, being able to run these heavy AI models on frugal infrastructure is very critical. So that is, as I've argued elsewhere, India's short-term contribution in this domain. Similarly, you have other countries like Singapore, and countries like France with their open-source model, Mistral. So you have a bunch of countries which have certain strategic resources and assets to contribute.
And for these countries to come together to pool these resources, to define how they can use them to deliver on Global South priorities, and to act as an alternative to US and Chinese tech is useful. Apart from the US, everyone is quite hardware-poor; even China. So looking at frugal innovation like the kind India provides, at alternative R&D ecosystems, and at additional funding that can fuel further innovation is critical if we are to define an alternative vision beyond what is defined by realpolitik and the current geopolitical priorities of the two global superpowers.
Ajey Lele: Lady at the back.
Unidentified Participant: Hi, everyone. I am Nirjhar. Just to introduce our organization: we call ourselves Ties, and we would be one of the younger lots here. I have a question which comes from the genuine curiosity of the youth itself. We see in the media that even the US and other countries are developing things like Palantir, which has deep knowledge about the population and can have a say, using AI, over how to maintain it and who to keep in the country and who not to. And I wanted to hear a perspective from everyone on the panel. How far do you think India has to come along on that? Do you think it is important to develop technologies which run within the country and which can evaluate our data sets? And what role do you think the youth has to play in this?
Aditya Sharma: I would like to start with a small thing: whenever you want to achieve something big, you need to take baby steps. And for that, you need the right type of skillset. Instead of taking shortcuts, and I'm being blunt here, the youth have to inculcate the right skills, because even if you're using AI, you need the right approach and enough domain knowledge to say whether what AI generates is right or wrong. You won't be able to differentiate unless you have domain knowledge.
AI will support you, just like electricity: using it, you can work, but it can't be your only resource. It can support you as a pillar; the foundation has to be your domain knowledge. That should be the right approach.
Anulekha Nandi: So I think the youth obviously have a very important role to play, not just as people who develop this technology, but as those who use it, and for what purposes. I think that's a given; that's the future. When it comes to advanced technologies, we have seen the vulnerabilities in global supply chains that we've touched upon; as the two superpowers spar, where does that leave us? So it is incredibly important to develop national capabilities, not just in AI technologies but in our technology stack in general. That too, I feel, is important as we go forward, because economic futures are technological futures as well.
So having those capabilities secures our pathways going forward, and as Mr. Sharma also mentioned, we need to start somewhere, and it's good that we are starting now with the right thought and intent in place. With that, I think there is hope going forward, and in many ways, technology often leapfrogs. Telephones, for instance, took the US decades to reach the last mile but leapfrogged in developing countries. Technologies leapfrog when developing countries adopt them; we learn from prior experiences, and that helps us shorten our development timeframes.
So I think there is hope. And on both questions: we need indigenous technologies and technological capabilities to ensure that our economic futures are secure and that we have good bargaining power on the global stage, and the youth are essentially the engine that drives this forward. I hope that answers your question.
Ajey Lele: The key factor here is: don't take shortcuts. This is essential, because when ChatGPT came, within no time almost 40 LLM models came. In India, suddenly, if my memory serves correctly, Sarvam or some such model came; it was a flop show. Dhruv Rathee also came out with something; again it flopped. If I'm correct, one Indian model got some 20 downloads while a South Korean college model got some 2 lakh downloads. So that's the way it is happening, and you have to be careful in that context. Any other questions?
Prithviraj Rajput: Good afternoon, everyone. I am Prithviraj Singh Rajput, co-founder of Ties. We are a research, media, and technology company. I have a very interesting question. We are building an in-house project called Narrative OS. It's an AI that can shape and scale narratives, whether political, cultural, or even social, across digital infrastructures. As someone building in this space, we always come across the question of what ethical boundaries we should follow, because we have been consciously designing it. We see there are lots of opportunities to exploit, because we are dealing with narrative; we are dealing with people directly, not with their data but with their minds, since we are in the space of narrative. Should we treat narrative-shaping AI under the same ethical lens as generative AI, or should there be some different kind of framework? During the ideation stage, we discovered numerous loopholes which can be exploited, and I would really like to know what you think about this.
Ajey Lele: Soham, you can take this one. You spoke about elections and AI and all those things.
Soham Das: I'll try. I think there is more to what you just said than I can cover; making a sweeping comment would be wrong of me. But generally, I would say one thing: the youth should help expand the security doctrine, and do not take that in a negative manner. I'm just saying the security doctrine will face a lot of challenges when we see individuals dealing with AI. From deepfakes to misinformation, these are no longer only non-state-actor-based things; they are now at the individual level.
So the narrative tool you're building can actually help in preventing and assisting in those aspects. It can come in and assist the state, by getting grants and support infrastructure from the state itself. That, in my view, should be the approach, because from misinformation to disinformation to electoral campaigns to polarization, AI can be used in multiple formats, and it is being used at this moment, from building hashtags to anything else; you would know that better than me, as a technocrat.
I think this is where, as an organization or as a group of friends, you can come together to assist governance altogether, by working for it and not taking shortcuts, as was mentioned.
Aditya Sharma: Across the globe, governments are working on AI ethics and governance policy, but if you're building something like Narrative OS, I would personally say the youth have to take ownership of their actions and be accountable, because you yourself are accountable for your actions. Unless we have accountability, we won't be able to create a good product. We need to be accountable. It's as simple as that.
Ajey Lele: We'll take one last question. Okay.
Amrit Raj: Good evening, everyone. I'm Amrit Raj, from Kirori Mal College. My question is regarding the moderation of AI. Even in today's context, I can just open up my phone and spread any kind of nuisance or wrong information among the public. So how can the government, or any organization, actually moderate the use of AI? And even if there is moderation, who is going to preach to the preachers? That is my question. Thank you.
Ajey Lele: I think Mr. James, would you like to take on this question?
James Shire: Sure. Thank you very much; it's an excellent question. Now, the starting point has to be existing standards and national rules. We have questions of content moderation that have been discussed extensively regarding social media platforms, and the ability of individuals to spread misinformation more generally. And these have changed massively in the last couple of years. There was a trend towards greater content moderation, such as the Facebook or Meta Oversight Board, where they nominated people from across the globe to decide what the thresholds for that content moderation should be.
That's now fallen by the wayside. Meta has disbanded it and gone in a very different direction; the same with X under Elon Musk. So at the moment, there's a trend towards less, not more, content moderation. And that's before you put AI-generated material on top of it. Now, insofar as AI-generated material poses the same risks, whether to elections, or of hate speech or incitement to violence, the rules and standards governing it should be very similar.
The difficult point comes with attribution, liability, and blame. For that, you need a starting point where you distinguish between human-generated speech and machine-generated speech, or, if there's a combination of both, decide for what portion the human is responsible. And it's those questions of responsibility, and the technical questions of distinguishing between human and machine-generated creative speech, that are much more difficult to handle at scale on social media platforms.
There's already a lot of work regarding the detection of bots, and we can build on that work, but this question of whether a human is behind the content, and how far behind, is one that needs a lot more work. And that's policy, legal, and technological work.
Ajey Lele: Thank you so much. I think we had a very interesting and fruitful discussion, so I don't want to sum up anything here, because AI is an evolving system and an evolving technology. But one thing I can say with a fair amount of confidence: the responsibility is on the youth, and I've got no problem with that.
Unidentified Speaker: I once again take this opportunity to extend my heartfelt thank you to our chair, and esteemed panelists, for their insightful remarks. We have gained immensely from this thought-provoking discussion. A special thank you to our interactive audience. I would like to express my heartfelt gratitude to my colleagues in the council, and the administrative team. A special thank you to our acting director general, Ms. Nutan Kapoor Mahawar, and director research, Dr. Nivedita Ray. To learn more about the research work of ICWA, please visit our official website, and follow us on social media platforms. With that, I warmly invite you to join all of us for the high tea in the foyer. Thank you once again.
*****
List of Participants