The film industry is an opaque producer of quality entertainment, which I have over-consumed in the past four months. Given how software and the internet have affected everything else I touch, I was curious how the entertainment industry has changed in recent years.
To learn more, I spoke with a few friends and acquaintances on the post-production side of video to better understand how the industry is changing. My assumption was that software has changed the coordination costs around huge productions, but that most of the tech innovation has been around media distribution (i.e. streaming). From what I gleaned, there haven’t been many dramatic shifts in shooting a regular TV show or movie, as the majority of the film industry is executing on a complicated production cycle. That being said, there are a few areas that I found really interesting.
Specifically, I was surprised by how much the Unreal Engine is being used.
One major change that I heard mentioned repeatedly was the impact of improved hardware capacity today, as compared to 5 or 10 years ago. The increasing speed and capacity of new graphics cards and processors has expanded the scale of video detail which can be captured and processed. Previously, footage shot in 4K resolution needed to be downsampled for playback and editing, due to sheer limits in compute. Now footage is recorded in 6K, or even up to 9K in some cases, and cameras are capable of immediate playback, as opposed to the previous delays in which footage needed to be processed before it could be viewed.
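To get a feel for the scale involved, here is a back-of-the-envelope sketch. The numbers are my own assumptions (uncompressed 16:9 frames, 10 bits per channel, 24 fps), not figures from the conversations; real cameras record compressed raw, but the relative jump is what matters:

```python
# Rough data rates for uncompressed footage at different resolutions.
# Assumptions (mine): 16:9 aspect ratio, 3 color channels,
# 10 bits per channel, 24 frames per second.

def raw_data_rate_gbps(width_px: int, fps: int = 24,
                       bits_per_channel: int = 10, channels: int = 3) -> float:
    """Uncompressed video data rate in gigabits per second."""
    height_px = round(width_px * 9 / 16)  # assume 16:9 frames
    bits_per_frame = width_px * height_px * channels * bits_per_channel
    return bits_per_frame * fps / 1e9

for label, width in [("4K", 4096), ("6K", 6144), ("9K", 9216)]:
    print(f"{label}: ~{raw_data_rate_gbps(width):.0f} Gbit/s uncompressed")
```

Since pixel count grows with the square of the horizontal resolution, the step from 4K to roughly 9K is about a 5x jump in raw data, which is why real-time playback at these resolutions only became practical on recent hardware.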
In line with having more compute available on set, the most dramatic area where a new category of film production has emerged is the “previs” (previsualization) space. This is technically not post-production, but rather the work done before a shoot to plan a set, so that the desired scenes can be captured with visual effects rendered live. Specifically, tools such as the Unreal Engine, originally built as the graphics engine for the Unreal shoot-em-up video game, are used to generate a simulated scene with characters placed as stand-ins for actors and the camera shots planned out. By using a simulated environment, shot planning becomes more intentional, visual effects can be planned better, and the overall production set is better understood by everyone involved.
https://www.youtube.com/watch?v=gW1OTxYDvlQ
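For a concrete flavor of what placing characters and planning shots looks like, here is a minimal sketch using the Unreal Editor’s Python scripting. This is my own illustration, not something from the conversations, and it assumes the UE4-era EditorLevelLibrary API (newer engine versions move these calls into editor subsystems):

```python
# A minimal previs sketch: block out a camera and an actor's mark.
# Assumes this runs inside the Unreal Editor with the Python plugin
# enabled. Positions, labels, and the scene itself are hypothetical.
import unreal

# Spawn a cinematic camera where the real camera would sit
# (Unreal units are centimeters: 5 m back, 1.5 m up).
camera = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.CineCameraActor,
    unreal.Vector(0.0, -500.0, 150.0),
    unreal.Rotator(0.0, 0.0, 90.0),  # face toward the stand-in
)
camera.set_actor_label("PrevisCamera_ShotA")

# Spawn a simple stand-in marking where the actor will be.
stand_in = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.StaticMeshActor,
    unreal.Vector(0.0, 0.0, 0.0),
)
stand_in.set_actor_label("ActorMark_1")
```

Even a blocking pass this crude lets the team argue about framing and effects placement before anyone builds a physical set.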
Motion capture is also huge.
The same Unreal Engine is used to eliminate significant post-production visual effects work by simulating environments “on camera”, through screens placed in the background of shots and motion-capture-style tracking of the camera itself. The ability to capture final results “on camera” is a huge cost saving, as it reduces the need for editing footage after the fact. For example, when shooting a car driving down the road, rather than putting green screens in the windows and replacing their content with footage during post-production, the green screens can be swapped for large LED screens displaying the relevant visual content as defined in the Unreal Engine environment. By coordinating the screen placement, camera placement, lighting instruments, and the designated camera shot, the actual filmed shot becomes a live simulation of sorts that avoids a major post-production step.
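The core geometric trick is that the LED wall renders the virtual world from the tracked film camera’s point of view, so the parallax stays correct as the camera moves. Here is a simplified sketch of that projection in Python; it is my own illustration of the idea, not production code:

```python
# Where on an LED wall should a virtual point be drawn so that it
# lines up correctly from the tracked camera's position? We intersect
# the camera-to-point ray with the wall plane (here, z = wall_z).
import numpy as np

def project_onto_wall(camera_pos: np.ndarray,
                      virtual_point: np.ndarray,
                      wall_z: float) -> np.ndarray:
    """Return the (x, y) spot on the wall plane z = wall_z where the
    virtual point should appear, as seen from camera_pos."""
    direction = virtual_point - camera_pos
    t = (wall_z - camera_pos[2]) / direction[2]
    hit = camera_pos + t * direction
    return hit[:2]

camera = np.array([0.0, 1.5, -4.0])      # camera 4 m in front of the wall
mountain = np.array([10.0, 3.0, 200.0])  # virtual mountain far behind it
print(project_onto_wall(camera, mountain, wall_z=0.0))

# Move the camera sideways: the drawn position shifts, giving the
# parallax an in-camera background needs and a green screen can't.
camera_moved = camera + np.array([1.0, 0.0, 0.0])
print(project_onto_wall(camera_moved, mountain, wall_z=0.0))
```

Because the wall’s image is re-rendered per frame from the camera’s tracked position, the background also lights the scene naturally, which is part of why the results hold up as “final pixels”.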
One media production that showcased this on-camera environment was Disney’s “baby Yoda” hit series, The Mandalorian, in which large LED walls were used in concert with simulated worlds to capture fantasy landscapes on camera.
https://youtu.be/ysIOi_MP_cs?t=82
Huge thanks to Matt Baker, Greg Silverman, Andrew Prasse, and the others who spoke with me on this.