Women in AI: UC Berkeley’s Brandie Nonnecke says investors should adhere to responsible AI practices | TechCrunch


To give female academics and others focused on artificial intelligence some well-deserved and overdue spotlight time, TechCrunch is launching an interview series focusing on the remarkable women who are contributing to the AI revolution. As the AI craze continues, we will publish multiple articles throughout the year highlighting critical work that often goes overlooked. Read more introductions here.

Brandie Nonnecke is the founding director of the CITRIS Policy Lab at UC Berkeley, which supports interdisciplinary research addressing questions about the role of regulation in promoting innovation. Nonnecke is also co-director of the Berkeley Center for Law and Technology, where she leads projects on artificial intelligence, platforms, and society, and of the UC Berkeley Center for Artificial Intelligence Policy, which trains researchers to develop effective AI governance and policy frameworks.

In her spare time, Nonnecke hosts TecHype, a video and podcast series that examines emerging technology policy, regulations, and laws, provides insights into their benefits and risks, and identifies strategies for leveraging technology for good.

Q&A

In a nutshell, how did you get started in the field of artificial intelligence? What drew you to this field?

I have been working in the field of responsible AI governance for almost ten years. My training in technology, public policy, and their intersection with social impact drew me to this field. Artificial intelligence is already ubiquitous and has a profound impact on our lives—for better or for worse. It’s important to me to meaningfully contribute to society’s ability to make the most of this technology, rather than stand by and do nothing.

What work (in AI) are you most proud of?

I’m very proud of two things we accomplished. First, UC was the first university to establish responsible AI principles and governance structures to better ensure the responsible procurement and use of AI. As an institution committed to serving the public, we take this responsibility seriously. I had the privilege of serving as co-chair of the UC Chancellor’s Task Force on Artificial Intelligence and its subsequent Standing Committee on Artificial Intelligence. In these roles, I gained first-hand experience thinking about how best to implement our responsible AI principles to protect our faculty, staff, students, and the broader communities we serve. Second, I believe it is critical that the public understands emerging technologies and their true benefits and risks. We launched TecHype, a video and podcast series that aims to demystify emerging technologies and provide guidance for effective technology and policy interventions.

How do you deal with the challenges of the male-dominated tech industry and the male-dominated artificial intelligence industry?

Stay curious, be persistent, and don’t be intimidated by imposter syndrome. I find it critical to find mentors who support diversity and inclusion and provide the same support to others entering the field. Building inclusive communities in tech is a powerful way to share experiences, advice and encouragement.

What advice would you give to women seeking to enter the field of artificial intelligence?

My advice to women entering the field of artificial intelligence is threefold: pursue knowledge relentlessly, as artificial intelligence is a rapidly evolving field; embrace networking, as connections will open doors of opportunity and provide valuable support; and advocate for yourself and others, because your voice is critical in shaping an inclusive, equitable future for AI. Remember, your unique perspective and experience enrich the field and drive innovation.

What are the most pressing issues facing artificial intelligence in its development?

I believe one of the most pressing issues facing artificial intelligence as it develops is not getting caught up in the latest hype cycle. We are seeing this now with generative artificial intelligence. Of course, generative AI brings significant advances and will have huge impacts, both good and bad. But other forms of machine learning in use today are quietly making decisions that directly affect everyone’s ability to exercise their rights. Rather than fixating on the latest marvels of machine learning, we should focus on how and where machine learning is being applied, regardless of its technological prowess.

What issues should artificial intelligence users pay attention to?

AI users should be aware of issues related to data privacy and security, possible biases in AI decision-making, and the importance of transparency in how AI systems operate and make decisions. Understanding these issues can empower users to demand more responsible and fair AI systems.

What is the best way to build artificial intelligence responsibly?

Building artificial intelligence responsibly involves integrating ethical considerations at every stage of development and deployment. This includes diverse stakeholder engagement, transparent approaches, bias management strategies and ongoing impact assessment. Prioritizing the public interest and ensuring that the development of AI technologies is consistent with human rights, equity and inclusion at their core is fundamental.

How can investors better promote responsible artificial intelligence?

This is a very important question! For a long time, we never explicitly discussed the role of investors. I can’t overstate the impact investors have! I think the statement “regulation kills innovation” is overused and often untrue. Instead, I firmly believe that smaller companies can enjoy a late-mover advantage by learning from the large AI companies and from the responsible AI practices and guidance developed by academia, civil society, and government. Investors have the power to shape the direction of the industry by making responsible AI practices a trend. This includes supporting initiatives focused on solving social challenges through AI, promoting diversity and inclusion in the AI workforce, and advocating for strong governance and technology strategies to help ensure that AI technologies benefit society as a whole.
