
Government is wildly unprepared for how AI can be abused by criminals


In the years leading up to 2020, I warned that the next national emergency, like the 2008 financial crisis, would result in billions of dollars in fraud losses. When COVID-19 hit, my warnings became reality.

Hundreds of billions of dollars have been looted from the coffers of key government programs, and rent relief, unemployment benefits, SNAP benefits, and PPP loans have become piggy banks for thousands of domestic and cross-border cybercriminals.

Then, when state labor departments noticed hundreds of billions of dollars’ worth of fraudulent unemployment claims being paid out, many turned to facial recognition systems to verify applicants’ identities.

AI Tools Used By Police ‘Don’t Understand How These Technologies Work’: Study

I said back in 2020 that AI-generated deepfakes would be used to evade these systems, and, unsurprisingly, that’s exactly what’s happening.

AI can generate synthetic identities that match legitimate Social Security beneficiary profiles, robbing eligible individuals of millions of dollars. (Kevin Dietsch/Getty Images)

Criminals are now using our faces to steal money from the government. They are filing tax returns and applying for unemployment insurance in our names, impersonating our voices, faces, and identities, and mostly going undetected.

Today I am ringing the alarm bells again. AI, especially generative AI, poses the greatest risk we have ever faced to the security of our most important government agencies and entitlement programs. Perhaps this time our leaders will listen before disaster strikes.

Sophisticated AI algorithms can perpetrate large-scale fraud across multiple domains. Trained on public or leaked datasets, they can predict the structure of sensitive information such as Social Security numbers, create synthetic identities, and generate fraudulent health insurance claims, defense contracts, tax returns, and aid applications.

The degree of accuracy can be staggering, and AI-driven automation can overwhelm detection and prevention systems with a high volume of fraudulent submissions, making the problem even worse.

Consider Social Security benefits, a lifeline for millions of Americans. AI can generate synthetic identities that match legitimate beneficiary profiles, siphoning millions of dollars away from rightful beneficiaries.

In Medicare and Medicaid, AI could forge seemingly legitimate medical claims, draining billions of dollars in funding meant to ensure that low-income families and seniors have access to critical health services.

Likewise, our defense contracts are not exempt. AI can create fake companies that submit compelling bids, diverting millions of dollars intended for national security.

Tax collection, the backbone of government funding, could also be undermined. Sophisticated AI algorithms can create complex tax returns designed to exploit loopholes to maximize fraudulent refunds.

DeepAI.org founder Kevin Barragona said artificial intelligence could be used for 'stalking purposes'

AI can be used as a tool to stalk unsuspecting victims. Generative AI can be particularly threatening because it can create new types of content, from text and code to images and videos. (Fox News)

The risks here are significant, but fortunately there are also silver linings. The same technology that empowers fraudsters can be used to protect these systems. For example, combining multi-factor authentication with “behavioral biometrics” offers a uniquely sophisticated way to combat AI fraud that traditional methods cannot match.

Beyond looking at static data like ID numbers and fingerprints, this technology analyzes the unique ways people interact with their digital devices. Factors like typing speed, mouse movement patterns, and even how you hold your phone are taken into account.

AI, no matter how advanced, is impersonal; it cannot convincingly imitate these highly personal and subtle human behaviors.

For example, consider a fraudulent tax return submitted using an AI-generated ID. A typical system might validate the return based on a static identifier such as a synthetic Social Security number. But a system with behavioral biometrics will look more closely at how the data is entered: the timing of keystrokes and the cadence of typing.


This is where AI falters when trying to emulate human behavior, raising red flags. In this way, behavioral biometrics provides an important additional layer of defense in the fight against AI-powered fraud.
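To illustrate the idea, here is a minimal sketch of how keystroke cadence might be screened. The function names, thresholds, and numbers are hypothetical and purely illustrative, not a description of any agency's production system.

```python
# Minimal sketch of keystroke-dynamics screening (hypothetical thresholds,
# not any agency's actual system).
from statistics import mean, stdev

def keystroke_features(timestamps_ms):
    """Derive simple cadence features from key-press timestamps (in ms)."""
    intervals = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return {"mean_interval": mean(intervals), "jitter": stdev(intervals)}

def looks_automated(features, profile, jitter_floor=15.0, drift_limit=0.5):
    """Flag a session whose typing is implausibly uniform (bot-like)
    or drifts far from the enrolled user's historical cadence."""
    too_uniform = features["jitter"] < jitter_floor
    drift = abs(features["mean_interval"] - profile["mean_interval"]) / profile["mean_interval"]
    return too_uniform or drift > drift_limit

# Example: a scripted submission types with machine-perfect regularity.
enrolled = {"mean_interval": 210.0}        # from the legitimate filer's history
session = keystroke_features([0, 100, 200, 300, 400, 500, 600])
print(looks_automated(session, enrolled))  # True: zero jitter, faster cadence
```

A real deployment would track many more signals (mouse paths, device handling, session history), but the principle is the same: human input is irregular in ways that scripted input tends not to be.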

Importantly, AI systems can detect patterns of fraud and anomalies in data that human reviewers tend to miss. Improbable combinations of age, income, work history, and other personal information can be flagged.

Agencies can also use AI to monitor application volumes and patterns, identifying anomalous surges that could indicate automated fraud. By acting proactively, they can mitigate damage at scale.
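To make that concrete, here is a minimal sketch of how an unsupervised anomaly detector might flag improbable applicant profiles. It uses scikit-learn's IsolationForest on made-up data; the features, numbers, and contamination setting are illustrative assumptions, not any agency's real screening model.

```python
# Toy sketch: flagging improbable applicant profiles with an unsupervised
# anomaly detector (illustrative features and data only).
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: age, reported annual income ($), years of work history
historical_apps = np.array([
    [34, 52000, 12], [45, 61000, 20], [29, 38000, 7],
    [52, 70000, 30], [41, 48000, 18], [38, 55000, 15],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(historical_apps)

# A synthetic identity often pairs values that rarely co-occur,
# e.g. a 19-year-old claiming 25 years of work history.
incoming = np.array([[19, 95000, 25], [36, 50000, 14]])
print(model.predict(incoming))  # IsolationForest returns -1 for outliers, 1 for inliers
```

In practice such models would be trained on far larger histories and combined with human review, so that flagged applications are scrutinized rather than automatically denied.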


It cannot be stressed enough: agency leaders who have not yet considered the impact of AI on their fraud detection and prevention systems are likely already victims of this type of sophisticated fraud.

The question is no longer whether these AI-powered threats will affect government agencies, but when and how severely. We must recognize the stark reality that AI fraud is not a distant threat, but a threat knocking on our door.

There is no silver bullet here. Fighting AI fraud requires a coordinated effort across government agencies, a deep commitment to continuous innovation, and a willingness to invest in advanced technologies such as behavioral biometrics.


We need to start treating anti-fraud as an important aspect of national security, not just a control function. By elevating this issue to the strategic level and fostering an open and robust dialogue about it, we can stay a step ahead of those who seek to abuse the system.

The future of our country’s health and the security of our people depends on it.
