Women in AI: Urvashi Aneja looks at AI’s social impact in India


To give AI-focused women academics and others their well-deserved and overdue time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who have contributed to the AI revolution. As the AI boom continues, we’ll publish several pieces throughout the year highlighting key work that often goes unrecognized. Read more profiles here.

Urvashi Aneja is the founding director of the Digital Futures Lab, an interdisciplinary research effort that examines the interaction between technology and society in the Global South. She is also an associate fellow with the Asia-Pacific program at Chatham House, an independent policy institute based in London.

Aneja’s current research focuses on the societal impact of algorithmic decision-making systems and platform governance in her home country, India. She recently authored a study on the current uses of AI in India, reviewing use cases across sectors including policing and agriculture.

Q&A

In a nutshell, how did you get started in the field of artificial intelligence? What drew you to this field?

I started my career in research and policy engagement in the humanitarian sector. For several years, I studied the use of digital technologies in protracted crises in low-resource contexts. I quickly learned that there is a fine line between innovation and experimentation, particularly when dealing with vulnerable populations. The lessons from this experience made me deeply concerned about the techno-solutionist narratives around the potential of digital technologies, particularly AI. At the same time, India had launched its Digital India mission and National Strategy for Artificial Intelligence. I was troubled by the dominant narrative that saw AI as a panacea for India’s complex socio-economic problems, and by the complete absence of critical discourse around the issue.

What work (in AI) are you most proud of?

I’m proud that we’ve been able to draw attention to the political economy of AI production, and to its broader implications for social justice, labor relations and environmental sustainability. Narratives about AI often focus on the gains of specific applications and miss the forest for the trees: a product-oriented view obscures broader structural impacts, such as AI’s contribution to epistemic injustice, the deskilling of labor and the perpetuation of unaccountable power. I’m also proud that we’ve been able to translate these concerns into concrete policy and regulation, whether that’s designing procurement guidelines for AI use in the public sector or providing evidence in legal proceedings against big tech companies in the Global South.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

By letting my work speak for itself, and by constantly asking: why?

What advice do you have for women seeking to enter the field of artificial intelligence?

Develop your knowledge and expertise. Make sure your technical understanding of the problem is sound, but don’t focus narrowly on AI alone. Instead, study broadly so that you can make connections across fields and disciplines. Not enough people understand artificial intelligence as a sociotechnical system that is a product of history and culture.

What are the most pressing issues facing artificial intelligence in its development?

I think the most pressing issue is the concentration of power within a handful of technology companies. While not new, this problem has been exacerbated by recent developments in large language models and generative AI. Many of these companies are now fanning fears around the existential risks of AI. Not only does this distract from existing harms, it also positions these companies as necessary for addressing AI-related harms. In many ways, we are losing some of the momentum of the “tech-lash” that arose following the Cambridge Analytica episode.

I am also concerned that in places like India, AI is being positioned as necessary for socio-economic development, presenting an opportunity to leapfrog persistent challenges. Not only does this exaggerate AI’s potential, it also disregards the point that it isn’t possible to leapfrog the institutional development needed to put safeguards in place. Another issue that we don’t consider seriously enough is AI’s environmental impacts; the current trajectory may be unsustainable. In the current ecosystem, those most vulnerable to the impacts of climate change are unlikely to be the beneficiaries of AI innovation.

What issues should artificial intelligence users pay attention to?

Users need to realize that AI is not magic, nor anything close to human intelligence. It is a form of computational statistics that has many beneficial uses, but it is ultimately only a probabilistic guess based on historical or previous patterns. I’m sure there are several other issues users need to be aware of, but I want to caution against attempts to shift responsibility downstream onto users. I’ve seen this recently in the discourse around the use of generative AI tools in low-resource contexts across most of the world: the focus tends to shift to how end users, such as farmers or frontline health workers, need to upskill, rather than on the need for caution around these experimental and unreliable technologies.

What is the best way to build artificial intelligence responsibly?

It has to start with assessing the need for AI. Is there a problem that AI can uniquely solve, or are other means possible? And if we are going to build AI, do we need a complex black-box model, or might a simpler logic-based model do just as well? We also need to re-center domain knowledge in the building of AI. In our obsession with big data, we have sacrificed theory: we need to build a theory of change based on domain knowledge, and that should be the basis of the models we are building, not big data alone. This is of course in addition to critical issues such as participation, inclusive teams, labor rights and more.

How can investors better promote responsible artificial intelligence?

Investors need to consider the entire life cycle of AI production, not just the outputs or outcomes of AI applications. This requires looking at a range of issues, such as whether labor is fairly valued, the environmental impacts, the company’s business model (i.e., is it built on commercial surveillance?) and the accountability measures within the company. Investors also need to ask for better and more rigorous evidence about the supposed benefits of AI.


