When OpenAI’s ChatGPT took the world by storm last year, it surprised many power brokers in Silicon Valley and Washington, DC. The U.S. government may now get advance warning of future AI breakthroughs involving large language models, the technology behind ChatGPT.
The Biden administration is preparing to use the Defense Production Act to compel technology companies to notify the government when they use large amounts of computing power to train artificial intelligence models. The rule could take effect as soon as next week.
The new requirements will give the U.S. government access to key information about some of the most sensitive projects inside OpenAI, Google, Amazon, and other competing artificial intelligence companies. The companies must also provide information about ongoing safety tests of their new AI systems.
OpenAI has been reluctant to say how much work is being done on a successor to its current flagship product, GPT-4. The U.S. government may be the first to know when work or safety testing on GPT-5 actually begins. OpenAI did not immediately respond to a request for comment.
“We are using the Defense Production Act, a power we have because of the president, to conduct a survey that requires companies to share with us every time they train a new large language model, and to share the results with us, the safety data, so we can review it,” U.S. Commerce Secretary Gina Raimondo said Friday at an event at Stanford University’s Hoover Institution. She did not say when the requirement would take effect or what actions the government might take based on the information it receives about AI projects. More details are expected to be released next week.
The new rules are being implemented as part of a sweeping White House executive order issued last October. That executive order gave the Commerce Department a Jan. 28 deadline to develop a plan requiring companies to brief U.S. officials on details of powerful new artificial intelligence models under development. The order said those details must include the amount of computing power used, ownership of the data fed into the model, and details of safety testing.
The October order tasked the Commerce Department with determining when an AI model must be reported, setting an initial threshold of 100 septillion (10^26) floating-point operations, or flops, and a level 1,000 times lower for large language models working with DNA sequencing data. Neither OpenAI nor Google has disclosed how much computing power it used to train its most powerful model, GPT-4 and Gemini respectively, but a report on the executive order states that 10^26 flops is slightly higher than the amount of compute used to train GPT-4.
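To see roughly what the threshold means in practice, the total compute of a training run can be estimated with the common rule of thumb of about 6 flops per parameter per training token. The sketch below uses that approximation and the two thresholds described above; the rule of thumb and the example model sizes are illustrative assumptions, not figures from the order or from any company.

```python
# Rough check of whether a hypothetical training run crosses the
# executive order's reporting thresholds. Uses the common
# 6 * parameters * tokens approximation for total training flops
# (a rule of thumb, not part of the order itself).

REPORTING_THRESHOLD_FLOPS = 1e26       # general threshold (100 septillion flops)
BIO_SEQUENCE_THRESHOLD_FLOPS = 1e23    # 1,000x lower for DNA-sequence models

def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training compute as 6 * N * D."""
    return 6.0 * n_parameters * n_tokens

def must_report(total_flops: float, biological_data: bool = False) -> bool:
    """Would this run exceed the order's initial reporting threshold?"""
    threshold = (BIO_SEQUENCE_THRESHOLD_FLOPS if biological_data
                 else REPORTING_THRESHOLD_FLOPS)
    return total_flops > threshold

# Hypothetical example: a 1-trillion-parameter model on 20 trillion tokens.
flops = estimate_training_flops(1e12, 20e12)   # about 1.2e26 flops
print(f"{flops:.1e} flops, report required: {must_report(flops)}")
```

Under these illustrative numbers the run lands just above the 10^26 line, which matches the report's suggestion that the threshold sits only slightly beyond GPT-4-scale training compute.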
Raimondo also confirmed that the Commerce Department will soon implement another requirement from the October executive order: cloud computing providers such as Amazon, Microsoft, and Google must notify the government when foreign companies use their resources to train large language models. Those foreign projects must be reported when they exceed the same initial threshold of 100 septillion flops.