Over the last 24 hours, I’ve taken a deep dive into the world of AI video generation, experimenting with Google's Veo 3 to bring my novella Watersprites to visual life—one 8-second sequence at a time.
What began with curiosity quickly became something closer to directing again. And like all filmmaking, it’s part discovery, part compromise, and part pure stubborn joy.
Using text prompts, composite reference images, and a generous helping of trial-and-error, I’ve generated a series of clips exploring Freya’s transformation—from a feral woodland sprite by the High Pond to a disciplined, world-record-holding swimmer in a 50m pool. Some clips surprised me with subtlety. Others wandered into sci-fi parody. A few moments struck gold.
🎬 Lessons Learned
- Clarity matters. If you want Freya in a swim cap, you need to say “hair tucked in under swim cap.”
- Consistency is key. I now reuse visual templates to keep character design coherent.
- AI has a mind of its own. Sometimes the “wrong” result becomes the most interesting one.
- Editing saves everything. iMovie lets me stitch short sequences together, add sound design, and shape something more cinematic.
I’m not producing a final film—yet. I’m developing a visual language, a storyboarded aesthetic, and a deeper understanding of what’s possible when human creativity collaborates with machine suggestion.
Expect more clips. More experiments. And more strange magic, half from the woods, half from the code.
If you'd like to see the progress, here’s a blog post with the series of attempts, failures, and final successes > http://bit.ly/3HLmfII
More to follow! Watch this space.