“We’re not here to scare you, but to show you reality.” With these words, Michael Goi (ASC, ISC), co-chair of the American Society of Cinematographers’ AI committee, opened his highly anticipated session at Camerimage. Accompanied by artist and specialist Ellenor Argyropoulos, he tackled a question haunting film sets: is AI an existential threat or simply another tool, like the fluid head or telescopic crane once were?
For two hours, the duo demystified the “monster” by creating images and video sequences in real time, transforming the conference room into a pre-visualization suite. Here’s what to remember from this dive into the “black box.”
1. The Philosophy: Taming the “Slot Machine”
Michael Goi immediately set the framework: generative AI works like a slot machine. You pull the lever (the “prompt”), and you never know exactly what will come out. Yet cinematography is the antithesis of chance: it’s the art of specificity and intention.
The challenge isn’t letting AI create the film for you, but learning to control this chaos to serve your vision. For the ASC, AI currently finds its ideal place in pre-production: storyboards, mood boards, and visual communication with directors, a welcome replacement for the hours lost searching Google for poorly suited stock photos.
2. Still Images (Midjourney): Your Vocabulary Is Your Superpower
The demonstration began with Midjourney, the go-to tool for still-image generation. Ellenor Argyropoulos showed that the quality of the result depends directly on the technical precision of the request.
This is where the cinematographer has a decisive advantage over the screenwriter or producer. AI “understands” (or at least simulates) optical language.
An amateur will ask: “A scary room.”
A DP will ask: “High contrast, hard light, pools of darkness, 28mm lens, low angle, film noir style.”
The second result will be infinitely more usable. However, AI is literal, like a child who has read everything but never left home.
- The default-format trap: Midjourney outputs square images by default. The essential setting is the aspect ratio (the --ar parameter, also adjustable in the settings) to recover cinematic framing.
- Style control: parameters like --stylize (how much artistic license the model takes) and --chaos (how varied the proposals are) allow fine-tuning.
- Iteration: if an image pleases with its lighting but not its composition, the “Vary Region” function redraws only a specific zone (changing a sign, modifying a face) without losing the overall atmosphere.
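Putting this vocabulary together, a first prompt might look like the sketch below. The exact values are illustrative, but --ar, --stylize and --chaos are Midjourney’s standard parameters, and aspect ratios are written with whole numbers:

“High contrast, hard light, pools of darkness, 28mm lens, low angle, film noir style --ar 21:9 --stylize 250 --chaos 20”

A 21:9 ratio approximates a widescreen scope frame; raise --chaos when you want the four proposals to diverge more, lower it once you’re converging on a look.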
3. Moving Images (Runway Gen-3): From “Still” to “Rush”
Once the still image was validated, the duo switched to Runway for animation. Michael Goi insisted: it’s better to generate a perfect still first, then animate it, rather than asking the AI to generate video directly from text, where control is minimal.
The “Motion Brush” tool particularly impressed the audience. It lets you “paint” directly onto the image the zones that should move (smoke rising from a vent, falling rain, for example) while the rest stays static. Camera sliders (pan, tilt, zoom) then simulate camera movement.
The result? A 5-second clip, imperfect (with the telltale shimmer of current AI, where background faces deform), but sufficient for an animatic or a pitch deck. In five minutes, an abstract idea had become a moving shot.
4. The Uncrossable Limit: Human Intent
Despite the tools’ speed, Michael Goi reassured the audience on the question of soul.
“AI has ingested millions of images, but it hasn’t lived,” he reminded. “It’s never had a broken heart, never felt cold rain. It only knows the data of rain.”
This absence of lived experience creates the “Uncanny Valley” effect. AI predicts the next pixel, but doesn’t feel the scene. The cinematographer’s role thus shifts toward that of curator and pilot: the act of choosing among the proposed variations becomes the initial artistic act.
5. The Legal Aspect: Gray Zone
A crucial audience question concerned copyright. The current position (in the USA at least) is strict: a work 100% generated by AI cannot be copyright protected.
The ASC’s advice is therefore clear: use these tools for preparation, inspiration, and internal documents. For final theatrical release, caution is advised until the legal framework stabilizes, unless there’s substantial human retouching and compositing work on top of the AI base.
So, Board the Train or Stay on the Tracks?
“The train has left the station,” Michael Goi concluded. “We have a choice: stay on the tracks and get run over, or learn to drive the train.”
This session demonstrated that AI, far from making technical expertise obsolete, puts a new premium on it: the vaster your visual and technical culture, the better you’ll know how to “speak” to the machine and extract meaningful images.
Commands to remember for your first tests (on Midjourney; registration to explore is free):
- Prompt in the “create” field
- Set your preferences (aspect ratio, styles) to the right of this field
- Click the arrow to launch generation of 4 images
- Like an image? Click on it and explore options on the right
- Use the --no parameter followed by an element you don’t want (e.g., --no trees to avoid trees)
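Assembling the steps above, a first test might look something like this purely illustrative example:

“Golden hour exterior, long lens compression, backlit silhouette on a pier --ar 16:9 --no boats”

If the four proposals feel too similar, add --chaos; if one is almost right, click it and use Vary Region to redraw only the part that bothers you.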