AI giant OpenAI has announced its latest AI system, Sora, which can generate realistic videos from text descriptions. AI-generated deepfake videos are already causing problems online, from fake pornography to realistic scam calls, so the arrival of easily created video comes with the potential for trouble.
CNBC reports that OpenAI, developer of the AI chatbot ChatGPT, announced Thursday that it is expanding into video generation with Sora. Sora allows users to enter a description of a desired scene and generates high-resolution video clips from the text prompt.
In addition to generating videos from scratch, Sora can also augment existing videos and fill in missing frames, according to OpenAI. The model can currently create videos up to one minute in length.
Sora represents OpenAI’s effort to build multimodal AI systems that can work with text, images, and now video. Brad Lightcap, OpenAI COO, said: “If you think about the way we humans process and interact with the world, we see things, we hear things, we say things. The world is much bigger than text.”
Axios reports that an OpenAI representative stressed that the company has no intention of releasing Sora to the general public at this time, because it is still working to address safety concerns, including reducing the spread of misinformation, hateful content, and biased output from the model. OpenAI has also committed to clearly labeling output as AI-generated.
Check out some examples of Sora in action below.
How it started (1 year ago) and where it is now: pic.twitter.com/vOrQd7wyBb
— Garrett Scott 🕳 (@thegarrettscott) February 15, 2024
OpenAI introduces SORA: new text2video #AI model.
Everyone is just posting amazing examples, but I’d like to talk about the results.
Thread 🔽 pic.twitter.com/wyYx0PsS2o
— Denis Rosiev ᯅ/acc (@Enuriru) February 15, 2024
2. A photo of a three-toed child and a six-legged cat as “war casualties” is currently garnering hundreds of thousands of retweets on X.
People overlook even such obvious flaws, not to mention the more sophisticated examples above.
That’s despite the fact that X has a community note… pic.twitter.com/hFKDhhaVEc

— Denis Rosiev ᯅ/acc (@Enuriru) February 15, 2024
4. Don’t think that OpenAI will be a force for good, imposing censorship and stopping the production of deepfakes and other disinformation.
Well, that may be true, but they still lack unique and irreplaceable knowledge.
Other researchers and developers working on similar solutions… pic.twitter.com/dYlBWwFj7H

— Denis Rosiev ᯅ/acc (@Enuriru) February 15, 2024
5. When it comes to living videographers, directors, and other filmmakers, the statement “AI won’t replace you. A human with AI will” is only half true.
This is a transition period and will not last very long. It’s hard to predict, but I think I’ll be in this industry for 10, 15 years at most.
a… pic.twitter.com/qtyLZqwuKN

— Denis Rosiev ᯅ/acc (@Enuriru) February 15, 2024
7. Lastly, I would like to say that only 0.01% of people have a clear understanding of these technologies, their capabilities, and their future.
It’s easy to deceive others.
Use this knowledge to your advantage whenever possible. tell me… pic.twitter.com/ejWaFmAdLE

— Denis Rosiev ᯅ/acc (@Enuriru) February 15, 2024
The launch of Sora puts OpenAI in direct competition with other big tech companies working on video AI generators, including Mark Zuckerberg’s Meta, Google, and Adobe. Meta and Google have already introduced comparable text-to-video models.
Sora is based on a diffusion architecture, like OpenAI’s image generator DALL-E, combined with the transformer technology underlying ChatGPT. According to the company, Sora serves as a foundation for models that can simulate and understand the real world.
So far, OpenAI is only offering a small preview of Sora’s capabilities on its website along with 10 sample videos. The company initially said it was restricting access to its “red team,” which tests for potential risks such as bias and the spread of misinformation.
The release of Sora has raised concerns about the potential for fake AI-generated video content known as deepfakes. The number of deepfakes online has already increased by 900% since last year. OpenAI said it is developing tools to detect videos created by Sora and plans to embed metadata to identify AI-created content.
Breitbart News recently reported that at least six prominent technology companies intend to finalize a deal on AI election interference at this week’s Munich Security Conference. The agreement comes as more than 50 countries prepare for key national elections in 2024 and the threat of AI-driven disinformation is already emerging. For example, an AI voice-clone robocaller impersonated President Joe Biden to try to block voting in the New Hampshire primary.
The companies reportedly include Adobe, Google, Meta, Microsoft, OpenAI, and TikTok, and the agreement is expected to guide joint efforts to stop the deceptive use of AI to target voters. Details remain unclear, and it is an open question whether the rest of the world should trust TikTok, which has been accused of acting as a Chinese agent against the West, to make a good-faith effort to maintain election integrity.
Elections around the world face a growing threat from deepfake media: deceptively realistic images and recordings created with generative AI models. Deepfakes can be weaponized to undermine candidates and mislead voters through propaganda. The signatory companies aim to counter these risks through a unified stance against AI disinformation campaigns.
Read the full report at CNBC here.
Lucas Nolan is a reporter for Breitbart News, covering free speech and online censorship issues.