The latest boom in Silicon Valley is based on plagiarized efforts.

There’s been a growing frustration among lawmakers and the public regarding Silicon Valley’s “move fast and break things” approach. Yet, it seems that Big Tech hasn’t quite grasped this concern.

As businesses rush to integrate artificial intelligence, they’re neglecting fundamental corporate responsibilities. A Meta chatbot allowed children to engage in inappropriate conversations. OpenAI’s ChatGPT was implicated in a case where it aided a child in planning suicide. And studies indicate that AI chatbots harm children’s mental health.

What’s the latest episode of Silicon Valley’s oversight? A significant wave of copyright infringement.

Digital Wild West

Initially, issues of copyright infringement by AI products like ChatGPT were somewhat murky. Early iterations of large language models couldn’t generate convincing replicas of copyrighted material, and the training datasets were kept secret.

However, recent updates, particularly with OpenAI’s Sora 2, have changed this landscape dramatically. Just last week, social media platforms like X were inundated with clips of various TV shows created by Sora 2 users, including “Family Guy,” “South Park,” and “SpongeBob.” While the clips might not be perfect duplicates yet, they’re alarmingly close. It seems plausible that it won’t take long for users to craft content virtually indistinguishable from the originals.

Flipping the Script

Initially, OpenAI took an “opt-out” approach to copyright holders’ works being used in Sora 2. That inverts a fundamental principle of copyright: the burden is on others to ask permission before using someone’s work. Flipping that burden would undermine the very essence of copyright law.

In response to protests from copyright holders and the Actors Guild, OpenAI adjusted its policy, moving towards an “opt-in” model. Now, rights holders can actively choose whether their content is included in Sora 2.

Interestingly, CEO Sam Altman seemed taken aback by the criticism. He mused on how perceptions change when people actually see the results, suggesting that reactions can be quite different from expectations.

None of this justifies the reckless infringement of copyright. Altman appears to be leaving the door open to whatever users might generate with AI, perhaps an entire season of a favorite show. He knows the user demand exists, and he may be banking on that pressure to push copyright holders into agreements that don’t fully compensate them for their intellectual property.

He also mentioned that they intend to distribute some revenue to rights holders open to character generation by users. Despite his vague wording, it’s clear he’s engaging in a sort of negotiation—essentially stating that if viewers want this content, they must allow it or risk infringement without appropriate reward.

Congress Needs to Step In

And none of this addresses AI’s worrisome ability to generate lifelike images of real individuals. The Trump administration has taken some action against the worst offenders, yet creating misleading images remains alarmingly easy.

At present, most of the output is quirky, like videos of SpongeBob fleeing from the police. But what’s stopping someone from replacing SpongeBob with a real person in a compromising scenario?

Essentially, OpenAI and its peers have set up a system that burdens copyright holders with proving their rights, while also leveraging their enthusiastic user base. This is problematic and runs contrary to legal standards. Given this blatant disregard from OpenAI and similar companies, Congress should urgently consider legal clarifications to compel these firms to modify their practices.
