With increasing frequency, AI is pitched as the solution to a non-existent problem. For starters, last month brought a feature allowing folks to create AI-generated video, disquieting robo-voice-over and all. Now, a recent exchange on X suggests that a more immersive feature set that would allow users to generate playable game worlds using Veo 3 may not be a far-off possibility either.
In response to a post sharing a generated video of a sky-high cyberpunk city and asking "playable world models [in Google Veo 3] wen [sic]?", Google DeepMind co-founder and CEO Demis Hassabis replied, "Now wouldn't that be something…" Google ai.studio and Gemini API product lead Logan Kilpatrick then chimed in on Hassabis' post as well.
Wait a second: the aforementioned Veo 3 clip bears a striking similarity to a screenshot you can easily find on Steam. If that screenshot was used as the basis for the video in the original X post, it's not only all the more baffling, it also exposes how gen-AI is sometimes leveraged to devalue and erase the work of flesh-and-blood creatives. For one pertinent example, it turns out content creators were largely unaware that their videos could be used to train Google's AI models. Anyway, I know what I'm adding to my wishlist.
There are also plenty more reasons to be cautious about the advent of AI-generated video besides creative existentialism. A report last month demonstrated how readily Veo 3 will generate convincing clips that propagate disinformation or otherwise aim to exacerbate social and political tensions. More recently, Media Matters for America, a research nonprofit and watchdog, reported on a wave of racist AI-generated videos, seemingly made with Veo 3, spreading on TikTok. This is despite the video platform's community guidelines prohibiting hate speech and hateful behavior.
While some users may only want to use Google's Veo 3 to immerse themselves in prompt-based worlds, it's clear some have far-from-innocent ambitions for this tech. For its part, Google has introduced a visible watermark on videos generated with Veo 3. At the very least, AI-heads may well be motivated to learn a new skill as they figure out how to crop it from view. But this watermark won't appear on Veo 3 videos created in Flow by Ultra tier Google AI subscribers, greatly undermining the entire effort.
It's not like OpenAI is handling its competing text-to-video model, Sora, much better either. In addition to adding its own watermark and building tools that can detect when a video was made using Sora, the company says it is "working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model." It also says, "our text classifier will check and reject text input prompts that are in violation of our usage policies, like those that request extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others." However, given that recent research suggests safety protocols in a number of major LLMs remain relatively easy to bypass, you'll forgive me for not feeling optimistic about the future of AI-generated video.
