OpenAI's latest tool is both impressive and unsettling.
After finding success with ChatGPT, the research and development company shared a first look at Sora, a new text-to-video tool that remains in the beta phase. The GenAI model follows in the footsteps of other text-to-video engines from companies like Google and Runway; however, the quality of Sora-generated videos is unlike anything we've seen from OpenAI competitors.
According to the company, the tool uses a detailed prompt to create "complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world."
OpenAI shared several examples of Sora-generated videos, including photorealistic and animated clips spanning up to 60 seconds. Some of the most-engaged videos showed woolly mammoths "treading through a snowy meadow"; a "short fluffy monster" examining a burning candle; and an outer-space movie trailer shot on 35mm film.
Although the Sora-generated videos were quite grand, they fueled concerns about misinformation and AI's impact on the job market, specifically in the realm of content creation. Many users highlighted these concerns on X, wondering if the new tool would eliminate the need for production designers.
OpenAI has not announced an official launch date for Sora. The tool is currently available to a small group of security experts who are testing it for vulnerabilities.
"We'll be taking several important safety steps ahead of making Sora available in OpenAI's products," the company wrote. "We are working with red teamers (domain experts in areas like misinformation, hateful content, and bias) who will be adversarially testing the model…
"We'll be engaging policymakers, educators and artists around the world to understand their concerns and to identify positive use cases for this new technology," OpenAI continued. "Despite extensive research and testing, we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it. That's why we believe that learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time."