AI Video

This work explores how emerging generative tools can be used within real production environments. Some pieces are experimental; others were developed for client-facing initiatives.

The focus is not on the tools themselves, but on how they change creative workflows—idea development, visual prototyping, storytelling, and production speed. These projects test where AI meaningfully expands what’s possible and where traditional craft still matters most.

“The Promise vs. The Reality of Value-Based Care”

A client-facing video developed for Hamilton Risk Group exploring the gap between the promise of value-based care and the realities of implementation. Visuals were generated with ChatGPT and Kling, animated with Kling and Veo, and paired with an ElevenLabs voiceover to create a narrative-driven explainer grounded in real healthcare challenges.

“Prompted #1”

The first in a three-part experimental montage series using images generated in ChatGPT and animated with Kling and Veo. This piece focuses on color, saturation, and visual intensity as a study in how generative imagery behaves in motion.

“Prompted #3: Terror in ’62”

A narrative trailer built from generative imagery and image-to-video animation, inspired by the novel Dogwood. Framed as a documentary preview, the piece imagines the story as a real event, featuring survivor testimony and an investigative tone.

“Evaluating Risk”

A client video developed for Hamilton Risk Group exploring how healthcare organizations assess and manage financial and operational risk. Created using generative imagery and animated with image-to-video tools, the piece translates complex risk concepts into a clear, visual narrative.

“Prompted #2”

Part two of a three-film experimental series using generative imagery and image-to-video animation. This installment explores surreal composition and black-and-white visual language, referencing European New Wave cinema and surrealist photography.

“Beyond the Edge of the Milky Way”

A proof-of-concept narrative scene built using custom visuals created in Daz Studio and animated through image-to-video workflows, with Voice.AI used to provide different voices for the performer. The approach enabled precise control of camera, lighting, and performance while exploring how emerging tools can support cinematic storytelling in early development.