The time to regulate AI is now

Last month’s Senate Judiciary subcommittee hearing on AI oversight offered a glimmer of hope that policymakers are ready to tackle the regulatory challenges posed by the rapidly advancing frontier of AI capabilities.

There was a surprising degree of consensus among Democrats, Republicans, representatives of industry heavyweights (IBM) and hot new pioneers (OpenAI), and AI hype critic Gary Marcus. Microsoft and Google quickly published their own overlapping policy recommendations, and a wide range of academics, AI scientists, and tech company executives signed a statement declaring that “mitigating the risk of extinction from AI” should be a global priority.

Not that the usual theatrics were absent: senators soliciting witnesses’ opinions, veering off onto tangential topics, and occasionally citing technical details beyond their comprehension. But beyond the theater, three areas of emerging agreement were particularly noteworthy.

First, the magnitude of the AI regulatory challenge will likely require a new regulatory body. One urgent task is to clarify the precise powers this proposed regulator should hold.

OpenAI CEO Sam Altman suggested focusing on the most computationally intensive models: the largest “foundation” models that power systems like ChatGPT are trained with thousands to billions of times more computation than most other models. This approach has its merits. It draws a practical line in the sand that captures the systems with the most unpredictable and potentially transformative capabilities, while leaving out the majority of AI systems, whose uses and impacts can likely be handled within existing regulatory structures. Altman also suggested heightened scrutiny of models demonstrating capabilities in national security-relevant areas, such as aiding the discovery and manufacture of chemical and biological agents.

Defining these thresholds is difficult, but given the significant risks that would accompany the widespread proliferation of such models, regulators should err on the side of caution before models are deployed and distributed. Once someone obtains an AI system’s source files, the system can be copied and distributed like any other software, making its spread virtually impossible to contain.

Second, the time has come for policymakers to consider how current liability rules apply to potential harms from AI, and what changes are needed to address the distinct challenges posed by today’s frontier systems. These challenges include: opaque inner workings that are poorly understood even by the systems’ creators; a growing ecosystem of internet-connected models making autonomous decisions; the wide availability of state-of-the-art models; and a lack of consensus about avenues of recourse for victims of AI harms and about what constitutes reasonable care in the development and deployment of these systems.

Third, the notion of pausing the scale-up of AI systems altogether gained little traction. Even Marcus, a signatory of the Future of Life Institute’s “Pause Giant AI Experiments” open letter, acknowledged supporting its spirit more than its letter. Instead, discussion centered on quickly establishing standards, audits, and licensing for responsibly scaling up future systems. (The letter did suggest such measures, but they were overshadowed by the call for a pause.) This approach would set the “rules of the road” for building the largest and best-performing models, with early warning for when it is time to hit the brakes.

These are necessary steps, but on their own they do not guarantee that the benefits of advanced AI systems will outweigh the risks. Democratic values such as transparency, privacy, and fairness are essential ingredients of responsible AI development, yet current technical methods for verifying that systems uphold them are inadequate. Without further progress on effective technical approaches, licensing and auditing measures alone cannot ensure compliance with these principles. Policymakers, industry, and researchers should work together so that efforts to develop reliable, trustworthy AI keep pace with AI’s overall capabilities.

There are some signs that the White House is beginning to grasp the magnitude of the challenge ahead. Following a meeting between the vice president and Altman and other frontier lab CEOs, the administration announced that these labs had committed to public red-teaming of their systems, and that the National Science Foundation would allocate $140 million to establish new AI research institutes.

But such efforts need to go beyond scratching the surface of the research questions at hand. A key component should involve working with the largest and most capable systems, laying the groundwork for powerful AI that is ultimately deployed with trustworthy properties established to high confidence, in the spirit of NSF’s $20 million “Safe Learning-Enabled Systems” solicitation.

Among its many priorities, the new National AI R&D Strategic Plan encouragingly acknowledged the need for “further research […] to increase the effectiveness, reliability, security and resilience of these large-scale models,” and articulated open questions such as “what level of testing is sufficient to ensure the safety and security of non-deterministic and/or wholly inexplicable systems?” With billions of dollars flowing into the labs developing these systems, these priorities must be matched by a commensurate focus and direction across the research ecosystem.

The new consensus around the need for regulation has not been received uncritically. Senators were amused to see Silicon Valley executives pleading for more regulation, and some expressed concern about the potential for regulatory capture and stifled innovation. Some commentators have gone further, characterizing Altman’s plea as a cynical attempt to erect barriers against potential competitors. Policymakers skeptical of Altman should call his bluff by doing just what he asked: focusing the most stringent regulatory attention on the most advanced models, which for now would apply only to a handful of highly resourced labs like OpenAI.

Policymakers should also be under no illusion that a light regulatory touch would somehow prevent some degree of concentration at the AI frontier. The cost of training a state-of-the-art model, currently tens of millions of dollars in computing costs alone, is rising rapidly and steadily pricing out smaller players.

To regulate effectively, the government must engage deeply with these leading labs, building its expertise in understanding and stress-testing state-of-the-art models alongside an ecosystem of trusted third-party assessors and auditors. Likewise, Congress must sustain the level of bipartisan and expert engagement on display at last month’s hearing as it continues to address the issues raised there, in order to rapidly put in place a coherent and effective regulatory framework for the most powerful and innovative AI systems.

Caleb Withers is with the Center for a New American Security, where he focuses on AI safety and stability.
