UNITED STATES - Last month, the tech company OpenAI announced its new Sora model, promising the most realistic text-to-video AI technology of any tool or model to date.
Since the announcement this past February 15, social media users have expressed concern over the model’s realistic imagery and the potential for misuse of the technology, such as creating fake not-safe-for-work content.
UT Austin Center for Generative AI Director Alex Dimakis said OpenAI has built-in safeguards against creating those specific kinds of content.
Dimakis further explained that deceptive content is not a new or unaddressed concern in the digital age.
“People could photoshop and create fake things for a long time and the way that we are typically dealing with it is with, what we call provenance,” said Dimakis.
The Sora model’s potential has also been a major topic of discussion, with much of the conversation centered on filmmaking and whether the tool will further careers or replace them entirely.
Dimakis noted that this would not be the first major advancement in the film industry, pointing to the first inclusion of sound as well as the jump from 2D to 3D animation.
How do aspiring filmmakers feel about this?
Local filmmaker and Paperball Production Co-Director Anfernee Labus believes using the tool on its own would take the humanity out of filmmaking, although he does see the potential for a marriage of the two.
“I think it’s a very helpful tool in some cases, you know because you can use it as inspiration, you can use it as reference, but using it as an actual thing to put into your, into your movies, isn’t a good way to go about it,” said Labus.
As of now, Sora is not available to the public; however, OpenAI has given the model to a small group of “red teamers” to “assess critical areas for harm or risks.”
Access is also currently limited to a select number of artists, graphic designers and filmmakers to gather feedback and generate excitement ahead of the full release.