
DOGE is using AI the wrong way

President Trump and Elon Musk's DOGE initiative is misusing artificial intelligence and falling short of its full potential.

The first round of mass layoffs of government employees led to public backlash. In response, the president said the next stage of the DOGE initiative will be more precise: "I don't want to see a big cut with so many good people being cut."

So what would a scalpel, rather than a chainsaw, look like as a better approach to using AI across the government landscape? Let's start by looking at how AI is currently being used and why it is missing the mark.

So far, Musk's approach has been to apply AI to government employees' responses to the controversial February email to federal workers, which asked for bullet points summarizing what they had been working on. An AI system, built by fine-tuning a large language model on this data, is used to determine whether an employee is needed.

In theory, it's a clever idea: train an AI model on reliable data to determine which types of jobs are redundant or suitable for automation. Based on its training data, the AI might conclude, for example, that a job that follows well-defined rules and requires little interaction or judgment can probably be performed more efficiently by an algorithm.
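To make the idea concrete, here is a minimal sketch of what such a scoring step could look like. This is not DOGE's actual system; the rubric and the `ask_llm` callable are hypothetical stand-ins for whatever model and prompt are really used.

```python
# Sketch only: score how "automatable" a role looks from an employee's
# self-reported bullet points. `ask_llm` is a hypothetical stand-in for
# whatever model endpoint is actually used.
from typing import Callable

RUBRIC = """You are rating a government job for automation potential.
Rate 1-5, where 5 means the work follows well-defined rules with little
human interaction or judgment. Reply with only the number."""

def automation_score(ask_llm: Callable[[str], str], bullets: list[str]) -> int:
    prompt = RUBRIC + "\n\nEmployee's weekly accomplishments:\n" + \
        "\n".join(f"- {b}" for b in bullets)
    reply = ask_llm(prompt).strip()
    return int(reply) if reply.isdigit() else 0  # 0 = unparseable answer
```

The critique that follows applies directly to a sketch like this: a vague but truthful answer such as "I answer people's questions" may score as highly automatable even when the employee is vital.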

However, such an approach risks serious errors because of the biases embedded in LLMs. What are these biases? They are difficult to articulate, and that is precisely the problem.

No one fully understands how LLMs work or how they arrive at their answers. A false negative from such an algorithm, one that fires a key employee, can be an unusually costly error. Most organizations have unassuming employees scattered throughout who are the go-to people for institutional knowledge and who turbocharge productivity.

How would an LLM rate an answer such as "I answer people's questions"? It's hard to know, but I wouldn't trust the machine. I would want more information from the respondent's bosses and colleagues, and then I'd realize I'm back to using humans rather than AI to determine a worker's value.

The current approach, then, has the advantage of being simple and requiring little data, but it can lead to serious errors.

Instead, I would strongly urge that Musk's new "scalpel" draw on more objective data from government agencies, so that AI first makes sense of each institution. For example, what is USAID's mission, what are its objectives, how are they measured, and how effective is the agency at achieving them? What about the Pentagon?

For example, imagine giving an LLM access to the Pentagon's history, including all past contracts and projects, the anonymized communications associated with them, and their outcomes. Data of this kind can be used to fine-tune the LLM. Now imagine priming such a fine-tuned AI with a prompt like this: "Given the Pentagon's mission and its current goals, programs and budgets, identify the areas of greatest risk, potential for failure and budget impact."
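As a rough sketch of what "fine-tuning on agency history" could mean in practice, the fragment below turns hypothetical program records into training examples and states the kind of query the article imagines. The field names and file name are assumptions; real Pentagon data would require declassification review, anonymization and strict access controls.

```python
# Sketch: prepare prompt/completion pairs from historical program records.
# All field names and the output path are hypothetical.
import json

def to_training_example(program: dict) -> dict:
    """Turn one historical program record into a prompt/completion pair."""
    prompt = (
        f"Program: {program['name']}\n"
        f"Mission area: {program['mission_area']}\n"
        f"Initial budget: ${program['initial_budget_bn']}B, "
        f"final cost: ${program['final_cost_bn']}B\n"
        f"Schedule: planned {program['planned_years']} yrs, "
        f"actual {program['actual_years']} yrs\n"
        "Assess the program's outcome and main risk drivers."
    )
    return {"prompt": prompt, "completion": program["after_action_summary"]}

def write_finetune_file(programs: list[dict], path: str = "pentagon_history.jsonl") -> None:
    with open(path, "w") as f:
        for p in programs:
            f.write(json.dumps(to_training_example(p)) + "\n")

# The kind of query the article imagines issuing against the fine-tuned model:
ANALYSIS_PROMPT = (
    "Given the Pentagon's mission and its current goals, programs and budgets, "
    "identify the areas of greatest risk, potential for failure and budget impact."
)
```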

Consider the scale: the F-35 Lightning II Joint Strike Fighter program has an estimated budget of approximately $2 trillion, and the Columbia-class submarine program has a budget approaching $500 billion. These are big-ticket items. Can AI make sense of them?

Evaluating programs within an agency is, in fact, within the scope of modern AI. Such an approach requires that fine-tuned systems be critically evaluated on carefully constructed test cases where the "truth" is known. Past programs such as the B-52 bomber, Trident submarines and Minuteman missiles, spanning defensive and offensive weapons, could serve as training cases. Those cases can be used to build models that predict which current and proposed programs, such as the F-35, are most likely to fail or run over schedule and budget. A simple back-test of this idea is sketched below.
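The sketch below illustrates the "known truth" evaluation: train a simple model on past programs whose outcomes are already known, hold each one out in turn, and check how often it flags the ones that actually overran. Every number here is a made-up placeholder, not real program data, and the three features are only examples of what might be used.

```python
# Illustrative back-test with placeholder data (not real program figures).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Features per historical program: [initial budget ($B), planned years, prime contractors]
X = np.array([
    [10.0,  8, 3],
    [45.0, 12, 5],
    [ 5.0,  6, 2],
    [80.0, 15, 6],
    [20.0, 10, 4],
    [60.0, 14, 5],
])
# Label: 1 = significant cost/schedule overrun, 0 = roughly on track
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"Held-out accuracy on known outcomes: {scores.mean():.2f}")
```

Only a system that performs well on held-out cases like these, where the outcome is already known, should be trusted to score current programs.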

The technical challenges of building such AI tools in a federal context are not trivial and involve privacy and security issues, but they are not insurmountable. The key questions are who should have the authority to fine-tune AI to answer such questions, who should have the authority to pose the queries, and what they may do with the answers.

In a company, the CEO clearly has the authority to do both and wide latitude in how to act on the answers. Indeed, a large part of a CEO's job is to ensure that the business's resources are deployed for the long-term benefit of shareholders.

Is it different for the U.S. government? Should the president be authorized to fine-tune AI on specific government data, ask such questions and act on the answers?

In my opinion, the answer to the first question is clearly yes, although oversight, likely from a national security committee, would be needed, along with a well-defined process for assembling appropriate training data for the AI and a clear way of evaluating its answers.

The second question is more complicated and depends on the specific action being contemplated. A democratically elected government could also have AI advise the committee and the president on possible actions and their consequences.

This is the moment for Musk to recalibrate and harness AI's potential in far more transformative ways. With strategic leadership and strong congressional oversight, the U.S. can use this AI moment to revolutionize and recharge the federal government.

Vasant Dhar is a professor at NYU's Stern School of Business and Center for Data Science. An artificial intelligence researcher and data scientist, he hosts the podcast "Brave New World," which explores how technology and virtualization are changing humanity.
