Wired Exclusive | America’s top cybersecurity agency lays out a new plan for weaponized artificial intelligence


Last month, a 120-page U.S. executive order laid out the Biden administration’s plans to oversee companies developing artificial intelligence and to guide the federal government’s expanded adoption of the technology. A core focus of the document, however, is security: identifying and fixing vulnerabilities in AI products and developing defenses against potential cybersecurity attacks powered by artificial intelligence. As with any executive order, the challenge is how to translate a sprawling, abstract document into concrete action. Today, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) will announce an “Artificial Intelligence Roadmap” laying out its plan for implementing the order.

CISA, which sits within the U.S. Department of Homeland Security (DHS), has divided its plan for addressing AI in cybersecurity and critical infrastructure into five parts. Two of them focus on promoting communication, collaboration, and workforce expertise across public and private partnerships; the other three deal more specifically with implementing particular components of the executive order.

Announcing the release of the roadmap, CISA Director Jen Easterly told Wired: “Frankly, it’s great to be able to put this out and hold ourselves accountable, both for the broad range of things we need to do for our mission and for what’s in the executive order. Artificial intelligence as software will obviously have significant impacts on society, but just as it will make our lives better and easier, it could very well do the same for our adversaries large and small. So our focus is on how we can ensure the safe and secure development and implementation of these systems.”

CISA’s plan focuses on promoting the responsible use of artificial intelligence, but also on actively deploying AI in U.S. digital defense. Easterly stressed that while the agency is “focused on security over speed” in developing its own AI defense capabilities, the reality is that attackers are adopting these tools, in some cases already, so it is both necessary and urgent for the U.S. government to use them as well.

With this in mind, CISA’s approach to promoting the use of artificial intelligence in digital defense will revolve around established ideas that both the public and private sectors can borrow from traditional cybersecurity. As Easterly puts it, “AI is a form of software, and we can stop thinking of it as some exotic thing that requires new rules to be applied.” AI systems should be “secure by design,” meaning they are developed with constraints and security in mind from the start, rather than having protections bolted on retroactively. CISA also intends to promote the use of “software bills of materials” and other measures that keep AI systems open to scrutiny and supply chain audits.

“Artificial intelligence manufacturers [need to] take responsibility for the security outcomes; that’s the whole idea of shifting the burden to the companies best able to bear it,” Easterly said. “Those are the companies that build and design these technologies, and this is about the importance of embracing radical transparency: making sure we know what’s in this software so we can make sure it’s protected.”
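A software bill of materials is, in essence, a machine-readable inventory of every component that goes into a piece of software, so buyers and auditors can see exactly what they are running. As a rough illustration only (the component names and versions below are hypothetical, and neither CISA’s roadmap nor the executive order prescribes a specific format), an SBOM-style inventory for an AI system might look something like this, sketched in Python using CycloneDX-like fields:

```python
import json

# A minimal, hypothetical SBOM-style inventory for an AI application.
# Field names loosely follow the CycloneDX convention (bomFormat, specVersion,
# components); the listed packages and versions are illustrative, not real
# requirements from CISA's roadmap or the executive order.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        # Conventional software dependencies the AI product is built on.
        {"type": "library", "name": "example-inference-runtime", "version": "2.1.0"},
        {"type": "library", "name": "example-web-framework", "version": "4.8.3"},
        # Model and training-data entries, so auditors can trace where the
        # AI behavior itself comes from, not just the surrounding code.
        {"type": "machine-learning-model", "name": "example-classifier", "version": "2024-01"},
        {"type": "data", "name": "example-training-set", "version": "v3"},
    ],
}

# Emitting the inventory as JSON makes it easy to share with customers,
# regulators, or supply-chain auditors.
print(json.dumps(sbom, indent=2))
```

The point of such an inventory is the “radical transparency” Easterly describes: if a vulnerable library or a questionable training dataset turns up anywhere in the supply chain, anyone holding the bill of materials can quickly tell whether their AI system is affected.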


