
The $320 billion AI revolution allows for the creation of highly realistic videos, but could it also be harmful to us?

Just after the US military stated it had successfully struck Iran’s nuclear facilities without any American casualties or loss of aircraft, images began circulating that seemed to challenge those assertions.

One of the images depicted a B-2 bomber, the aircraft allegedly used in the mission, crashing into the ground, with a damaged wing and emergency personnel surrounding it. This raised doubts about the narrative the president was presenting.

Sharp-eyed observers noticed something peculiar in the images: emergency workers looked oddly blended into the background, which doesn’t typically happen in real-life scenarios.

Another image featuring Iranian soldiers near the downed B-2 showed them unrealistically oversized compared to the jet. It turned out both images were generated by AI.

Gary Rivlin, the author of “AI Valley,” commented that today’s sophisticated AI-generated images are almost indistinguishable from reality; even when viewers spot a fake 95% of the time, the remaining images still fool people.

There are concerns about this technology, especially when it comes to misinformation; Pulitzer Prize-winning experts have acknowledged that sometimes they can’t tell what’s genuine and what’s fabricated.

A recent example was a video from protests against immigration enforcement in Los Angeles. It featured a National Guard soldier named “Bob” eating a burrito while humorously commenting on its taste. The seemingly harmless clip contained subtle giveaways—“Bob” wore a mask while eating, and police in the background were mislabeled—yet it still triggered strong reactions from the Latino community.

These manipulated images and videos echo the plot of the new HBO film “Mountainhead,” in which tech innovation leads to chaos driven by misguided actions and misinformation.

Rivlin emphasizes that the implications are significant; it’s easy for an observer to mistake something fake for real, raising worries about AI’s influence over our understanding of events.

Despite these concerns, AI offers many positive applications that significantly advance various fields, from restructuring industries to automating mundane tasks. For instance, Microsoft claims that its AI can diagnose diseases with four times the accuracy of human doctors.

A recent survey revealed that 43% of respondents have used AI in their work, and many have accessed it for free—at least for now. A report indicated that out of 1.8 billion AI users, only 3% are paying customers.

The evolution of video technology has reached a point where creating completely AI-generated advertisements is now possible, making it hard to differentiate between real and fabricated footage.

Tools like DeepFaceLab can swap faces, and other AI technologies can replicate human voices or create convincing videos of individuals in scenarios that never happened.

Amid the spread of misinformation and deepfake technology, distinguishing truth from fabrication becomes increasingly complex. There are instances of AI-generated posts that disturbingly imitate public figures making ridiculous claims.

Mike Belinski, who directs a scientific charity, has noted that while there are tools for detection, the battle against misinformation is ongoing and evolving, akin to a game of whack-a-mole.

As AI continues to make strides, such as the development of chatbots that can closely mimic human interaction, the implications for social connections and friendships are profound. Mark Zuckerberg noted that AI technology might fill gaps for individuals feeling isolated.

Meta is heavily investing in AI, with reports indicating they’ve poured substantial funds into startup ventures to advance their capabilities.

Meanwhile, Sam Altman of OpenAI is partnering with notable designers, including Jony Ive, formerly of Apple, to work on a new AI device aimed at enhancing human connections.

This gadget, described as a “companion device,” ostensibly aims to assist users in navigating their relationships, though it prompts a fundamental question: Are we truly ready to substitute human interaction with technology?

Rivlin expresses a mix of excitement and concern about these advancements, particularly regarding data usage and privacy, warning that if a service is offered for free, the user may be the product being sold.

He cautions against blind trust in big tech companies and asks, “What are they really doing with all this data?”
