Published 3rd June 2020

How real-time rendering is changing the way we produce rich and engaging film content.

Hollywood is reeling from COVID-19, but there is a part of the industry, hidden away from the lights, cameras and action in team members’ respective homes, that continues to chip away at content creation notwithstanding the lockdown restrictions.

Alvaro Garcia, a VFX generalist turned filmmaker, issues a rallying cry on his YouTube channel ‘Realtime Mayhem‘. His informative and motivational videos provide inspiration as well as practical, hands-on demonstrations of how to use a game engine for quick proofs of concept, rapid iteration and even final-quality shots. His indie production ‘The Seed of Juna’, initially planned for offline rendering, now serves as a testbed for what is possible in real-time.

This is an underground movement of 3D artists and animators with a VFX or game development background, leveraging the latest advances in real-time rendering to create compelling storytelling experiences with small, distributed teams.

The time is now ripe for a different approach to content creation

A subset of traditional CGI (computer-generated imagery) and animation, ‘real-time’ has been bubbling away beneath the surface for a while. But the time is now ripe for a different approach to content creation, and the application of real-time is not limited to animated productions.

The use of real-time techniques and production processes in live-action filmmaking has recently gained public awareness thanks to the pioneering work of Jon Favreau on large-scale productions such as The Lion King and The Mandalorian. The specific technical approach that leverages real-time in live-action filmmaking is known as Virtual Production.

Matt Workman, an ex-music-video cinematographer, has been a particularly vocal proponent of Virtual Production, with the backing of real-time technology giant Epic Games and their Unreal Engine. Their LED Wall Virtual Production demo with Lux Machina during SIGGRAPH 2019 gave a glimpse of how this technology not only enables Hollywood-scale productions but also delivers cost and time advantages while maintaining a very high quality of imagery.

Origins: A Game Called Spacewar!

The idea of computing imagery in real-time probably dates back to the first videogames, such as Spacewar! That was 2D imagery of course, and videogames have come a long way since, now featuring fully 3D worlds that can host multiple users at a time, doubling up as social spaces as well as virtual playgrounds.

In the last few years, the extent and detail of these worlds have grown rapidly, offering users and players the possibility of ‘2nd lives’ beyond the screen, e-sports and events. Crossing the boundary into synthetic worlds and becoming immersed in them has become second nature not only to the archetypal ‘gamer’ but also to a much wider demographic of spectators and content creators as well as active participants.

The recent success of Travis Scott’s concert, hosted by Epic Games within the game world of their mega-hit Fortnite, is a prime example of how mature and rich game worlds have become. A future such as that depicted in Steven Spielberg’s Ready Player One is not so far removed from our imagination anymore!

Games are interactive

Games are interactive, of course, and this is where real-time rendering is key. Meaningful interaction in a synthetic world requires the user to be able to reposition the viewpoint at any time during the viewing experience; a dynamic rather than a static (or pre-calculated) perspective.

So, real-time rendering describes the ability to render CGI on the fly, based on the user’s viewpoint. A system of hardware and software able to generate moving images on screen, from any angle or perspective, at least 30 times a second can be considered a real-time rendering system. Videogames generally render their imagery to screen at 30 or 60 frames per second. VR (virtual reality) is another prime example, and in that case, for a smooth experience, the imagery must be rendered at 90 frames per second.

Technically, rendering is achieved by interpreting the description of a scene (3D geometry, materials, animations, etc.) based on the position and movement of the camera that frames the shot. Each render pass produces an image, or frame, that is instantly displayed on the viewer’s screen.
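To make that loop concrete, here is a minimal sketch in Python (purely illustrative, not any engine’s actual code) of what a real-time renderer must do: read the current camera, render the scene from that viewpoint, present the frame, and finish it all within the frame budget: roughly 33 ms at 30 fps, 16.7 ms at 60 fps, or about 11 ms for VR at 90 fps.

```python
import time

TARGET_FPS = 60                  # 30 for many games, 90 for VR
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms per frame at 60 fps

def render(scene, camera):
    # Placeholder for the real work: evaluating the scene's geometry,
    # materials and animation state from the camera's point of view.
    return ("frame", camera)

def run(scene, get_camera, present, frames=300):
    for _ in range(frames):
        start = time.perf_counter()
        camera = get_camera()          # the viewpoint may change every frame
        frame = render(scene, camera)  # re-render the world on the fly
        present(frame)                 # display the finished image immediately
        spare = FRAME_BUDGET - (time.perf_counter() - start)
        if spare > 0:
            time.sleep(spare)          # wait out the remainder of the frame budget

# Example: a fixed camera and a no-op display, just to exercise the loop.
# run(scene={}, get_camera=lambda: (0.0, 0.0, -10.0), present=lambda f: None, frames=3)
```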

A realistic render from any point of view, on-demand

As adoption grew and videogames matured as a business, investment in rendering technology, on both the hardware and the software side, grew alongside it. Initially, the technology could not offer a photorealistic finish, so videogame creators resorted to stylised representation to immerse their audiences.

But the advent of the GPU, a processor that works alongside the traditional general-purpose CPU, made manipulating CGI and processing images far more efficient. This, in turn, enabled PBR, or physically based rendering: a more accurate model of the flow of light in the real world, expressed through materials and shaders.
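As a toy illustration of the idea (my own sketch, far simpler than the shading models real engines use), physically based rendering boils down to computing a pixel’s colour from measurable quantities: the surface’s albedo, the angle between its normal and the incoming light, and the light’s intensity. The energy-conserving Lambertian diffuse term looks like this:

```python
import math

def lambert_diffuse(albedo, normal, light_dir, light_intensity):
    # Energy-conserving Lambertian diffuse: (albedo / pi) * cos(theta) * light.
    # albedo and light_intensity are RGB triples; normal and light_dir are unit vectors.
    cos_theta = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple((a / math.pi) * cos_theta * li
                 for a, li in zip(albedo, light_intensity))

# A 50% grey surface lit head-on by a light of intensity pi returns its own albedo.
print(lambert_diffuse((0.5, 0.5, 0.5), (0, 0, 1), (0, 0, 1), (math.pi,) * 3))
# -> (0.5, 0.5, 0.5)
```

Real engines layer specular reflection, shadowing and global illumination on top, but the principle is the same: describe materials and lights in physical terms and let the GPU evaluate the model millions of times per frame.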

Today, industrial, automotive and product design, as well as architectural visualisation, are benefiting from real-time rendering. The key advantage is the possibility of being able to view a photorealistic render from any point of view, on-demand.

The Future of Film Production?

The primary added value of real-time rendering, as described above, is the ability of the user to reposition their point of view at any time during the consumption of the visual experience. However, the very intention of a movie is to take the viewer on a journey curated by the Director. If the film is to remain faithful to its format, the Director must retain control of what the viewer is looking at.

‘Virtual Production’ was pioneered in television, primarily in broadcast. The possibility of live green-screen removal, insertion of CG elements and compositing enabled a richer experience for live event coverage, analysis and commentary. Then came the pioneering efforts of directors like James Cameron and Jon Favreau, building on the broadcast approach and applying the techniques to feature film production.

Traditionally, VFX-intensive films would have extensive pre-production and post-production phases in which armies of artists would first visualise the fictional world and then insert the CG elements into the shot footage. This approach generally required on-set talent to perform with imaginary characters and settings. The responsibility of ensuring that the shot footage would be compatible with the CG to be added in post-production ultimately fell to the Director. This is where the saying “we’ll fix it in post” originated.

The real-time revolution is nigh

In traditional post-production, each of the 24 frames per second that make up the final movie is a composite of multiple render passes, produced in isolation and combined using specialist compositing software. The shots are then colour graded. The time to produce each render pass has traditionally been far greater than the 33 ms available for a typical real-time frame. Thus, even if the output achievable in real-time is not (yet) comparable in terms of photorealism, it can add real value to the process: context.
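For illustration only (a simplified sketch, not any studio’s actual pipeline), combining those render passes ultimately comes down to per-pixel operations such as the standard ‘over’ operator, which layers a premultiplied-alpha CG element on top of a background plate:

```python
def over(fg_rgb, fg_alpha, bg_rgb):
    # Porter-Duff "over": out = fg + bg * (1 - fg_alpha), with fg premultiplied by its alpha.
    return tuple(f + b * (1.0 - fg_alpha) for f, b in zip(fg_rgb, bg_rgb))

# A half-transparent red CG element (premultiplied) composited over a grey plate pixel.
print(over((0.5, 0.0, 0.0), 0.5, (0.4, 0.4, 0.4)))  # -> (0.7, 0.2, 0.2)
```

The compositing step itself is quick; what traditionally blows past the real-time budget is producing each of the underlying passes, which can take minutes or hours per frame in an offline renderer.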

Context for actors and Director can now be provided in-camera and on background screens, offering reference, if not final-quality, VFX on set. It tightens the loop between hypothesis and result, accelerating learning and iteration. It shortens the gap between pre-visualisation and final shot. And it relegates less decision-making to post, as more is seen and done in-camera.

Real-time aids and optimises every stage of film production, from scouting virtual sets in VR to in-camera visualisation of creatures and contextual lighting of actors on a physical set, smoothing the path to the final render. The real-time revolution is nigh.

— Marc D’Souza | Founder & Lead Consultant