“I’ve always been very interested in science as well as fiction. I always thought that these two were linked because understanding and visualizing our world requires a powerful sense of imagination. That’s why I’m passionate about imagining the impossible, and I believe movies are the best candidates to stimulate the imagination.” – Azad Boubrit
Bogdan: How did you get started with modeling?
Azad: I've been trying to teach myself the basics of visual effects and 3D since 2009, using tutorials on the web. I started with 2D compositing of stock footage and small title designs, thanks to the tutorials of the very talented Andrew Kramer. I also tried using Blender for my VFX, which let me put 3D elements into my shots that would be impossible to capture in real life. At that stage it was just for fun: in my spare time I made a couple of videos and started writing little stories in order to apply what I had learned from the tutorials.
In 2016 I created my YouTube channel, "BEAM Studios," and decided to get more and more serious about the technical aspects of VFX and filmmaking. It was all about trying to reverse-engineer epic sequences from big movies like Marvel Studios' Doctor Strange or Star Wars. The real challenge at that point was to keep improving VFX quality (by increasing the working hours) while following the aerospace engineering program at ISAE-ENSMA, a French university.
How did the video come to be?
The graduation ceremony at my university includes a theatrical show, with short videos projected during the performance to increase the immersion. I was responsible for creating the opening, which is usually a one-minute 3D animation. Every year a team of 8 to 12 students challenges itself to improve on the previous result, using Blender's internal renderer. For the 2018 ceremony, I decided to go where nobody from my university had gone before: use a physically based renderer for an 8-minute 3D animated short film.
Where did the inspiration come from?
The first step was to find the theme of the video; in this case, the whole story was focused on mechanics and aeronautics. So there had to be aircraft and engineering elements like spare parts, assemblies, etc. The other challenge was to blend those elements with a story based on the graduating class.
The main inspiration I found on the internet was the trailer for the video game The Crew, developed by Ivory Tower and published by Ubisoft. The full-CG trailer was made by a talented French team at Unit Image. I instantly fell in love with the photorealistic look of the images and the way the animation merged perfectly with the music. As I was an intermediate Blender user, I decided to learn 3D by adapting this trailer for my project.
Other movies helped me with my animations, like Iron Man with its F-22 Raptor scene, and the dogfight scenes in Pearl Harbor. I also watched random fighter jet videos just to see how an aircraft moves. The idea was to figure out exactly how to animate my 3D models so that you can "visually feel" the aerodynamic forces and instabilities.
Did you do any testing before starting the actual modeling?
Before starting the project, I wanted to study the feasibility of making a photorealistic animation with Cycles. After two weeks of deep research on the internet, I found Andrew Price (founder of Blender Guru and Poliigon). His tutorials explained perfectly how to get a realistic look with architectural renders. The ideas I retained for this project were the following:
• Use PBR maps for 90% of the materials
• Enable filmic color management for the best dynamic range
• Pay attention to the scale of the objects relative to the scene
• Add realistic and interesting lighting
• Add random and realistic details to the ground (ground detail is a priority)
• Add geometric complexity (bevels, clean meshes, correct UVs) during modeling.
By applying these six steps and a lot of working hours, I was able to get plausible results. The next challenge was to find a way to make an animation.
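As a rough illustration of the first two points, here is a minimal sketch of a PBR material plus the Filmic view transform using Blender's Python API (assuming Blender 2.8+ defaults; the texture paths and values are placeholders, not files from the project):

```python
import bpy

# Hypothetical PBR material: image textures feeding a Principled BSDF.
mat = bpy.data.materials.new("PBR_Metal")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
principled = nodes["Principled BSDF"]

def add_tex(path, non_color=False):
    """Load an image texture node; non-color data (roughness, normals) must not be color-managed."""
    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load(bpy.path.abspath(path))
    if non_color:
        tex.image.colorspace_settings.name = "Non-Color"
    return tex

base = add_tex("//textures/metal_albedo.png")             # placeholder path
rough = add_tex("//textures/metal_roughness.png", True)   # placeholder path
links.new(base.outputs["Color"], principled.inputs["Base Color"])
links.new(rough.outputs["Color"], principled.inputs["Roughness"])

# Filmic color management for a wider usable dynamic range.
bpy.context.scene.view_settings.view_transform = 'Filmic'
```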
How about the rendering time? Could your local machine handle it?
Render time in Cycles is basically about getting rid of noise, which is generally caused by slow convergence of the light-path calculations in probabilistic global illumination. Again, Andrew Price made a very good video explaining how to render an image with fewer samples. What I retained: in Cycles, a lot of parameters need to be adjusted to optimize renders for animation, such as indirect light bounces, volume step size, multiple importance sampling, caustics, etc. But the feature that completely revolutionizes 3D animation in Blender is the denoiser, which lets you render a clean image with far fewer samples and therefore far less render time.
In my project, the real challenge was rendering the interior scenes, where the lighting comes mainly from global illumination. For these environments I needed to clamp indirect light, which keeps the lighting realistic while reducing the noise in corners and on walls.
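As a hedged sketch of these render settings in Blender's Python API (the values are illustrative rather than the project's actual settings, and some property names have moved between Blender versions):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Fewer samples plus denoising (in older versions the denoiser lives on the render/view layer).
scene.cycles.samples = 400
scene.cycles.use_denoising = True

# Limit bounces and clamp indirect light to tame interior noise,
# at the cost of some physical accuracy.
scene.cycles.max_bounces = 4
scene.cycles.sample_clamp_indirect = 3.0
scene.cycles.caustics_reflective = False
scene.cycles.caustics_refractive = False
```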
With all these techniques I could render a single image in 25 minutes on an Intel Core i7. The result was very satisfying, but even at that speed, a two-minute animation with interior shots was rendered in 52 days nonstop (roughly 3,000 frames at 25 minutes each). That render time is very uncomfortable, and the main issue is that it's impossible to work on the same machine while it renders.
What other options did you find?
Ray tracing renderers are the best tools when it comes to photorealism but require a lot of CPU power. After some research on the internet I found some solutions:
• Buy multiple CPU machines. This might be a good solution in the long run, but since I had only one project to complete, it wasn't necessary.
• Buy another graphics card. GPU rendering is a good way to drastically reduce render time, in this case from 25 minutes to 7 minutes per frame for the interior shots. I already had one Nvidia GTX 1070 Aero OC, but that card causes Cycles to crash randomly with a "Misaligned address" error. It was risky to buy another one, as the developers still don't know where the problem comes from.
• Use a free rendering service where users render each other's frames on a worldwide scale. This works for small projects and simple scenes. Unfortunately, I needed more features, like the ability to upload heavy smoke caches, download non-destructive OpenEXR images, and upload big project files with all the PBR textures packed in. The free services couldn't handle a project like this.
Did any of those actually work?
The best solution I found was to hire an online rendering service that offers all the features needed for a professional photorealistic animation workflow, including 3D visual effects. The first service offering all of that for Blender was RenderStreet. I had planned six months of rendering and production in parallel, so I didn't need my shots rendered quickly; I just wanted to render each shot when it was ready while working on the next one, and repeat the process. The monthly plan (more limited than the pay-per-hour option) suits this workflow, because I didn't need the full, more expensive CPU/GPU capacity.
This approach suits projects that are planned well before the deadline, in this case nine months for eight minutes. The rendering cost was fixed at $300, instead of more than $1,500 with the on-demand option, for 196 3D shots. It's possible to export full-HD OpenEXR image sequences and upload every kind of cache (smoke, simulations, particles). I could also benefit from the latest versions of Blender, and I had access to a mapped FTP server for uploading my Blender files through Windows Explorer. So I chose RenderStreet as the rendering solution for this ambitious project. And it worked.
Let's talk a bit about production. Did you start with a script?
A 3D animation this long needs to be planned from beginning to end. So the first step was to write a global script with the main ideas, rather than a detailed script listing every 3D shot. Each shot was written without any precise timing, as that was impossible to visualize in my head. This step is crucial in the sense that it determines what is going to happen in the video and which environments and objects will definitely be used.
What was your modeling workflow?
After I completed the detailed script, I started modeling the environments. In this video, an environment is generally a room, an exterior scene near a building, or a landscape. The idea was to create the environments I had imagined in the detailed script and model them according to the camera angles I had in mind. It's very important to know when to stop modeling the medium-level details of a room, since the camera angles aren't set yet.
For instance, I was completely sure the warehouse would be used. I modeled it entirely and started adding pipes, vents, large objects, and furniture just to establish the scale of the scene. Then I modeled and textured the objects that are the center of attention: all the mechanical parts of the F-16, highly detailed F-16 wheels, the turbofan engine, the F-16's rig, various aircraft, a robot, a car and its rig, etc.
The reason I modeled these objects and partial environments first is that I needed them to set up the animatic. Since I was working alone, I didn't want to create low-poly models and environments for the animatic and then start over with highly detailed objects.
How did you integrate the modeling with the video production?
So I made an animatic of the video: a viewport visualization of the animation, timing, and continuity of the shots. It helps verify that the shots are synchronized with the music. Above all, the animatic tells you exactly how many shots there are and how long the video is, which drastically improves the quality of the planning.
I took the objects I had modeled and placed them into my scenes. I started adding cameras at different angles, following what I had in mind from the detailed script. After setting up some keyframes, I was ready to export the shots and check the continuity.
The OpenGL renderer in Blender can export the viewport very quickly in AVI JPEG format. Every shot is then placed in video editing software like Adobe Premiere. The idea is to move back and forth between the two applications, adding corrections or improvements until the result is satisfying. That way I was confronted with issues I couldn't anticipate in my head: animation problems, the feasibility of certain shots, or the overall feel of the camera movements. The advantage of this workflow is that once the animation looks good, I only need to focus on adding details, texturing, and rendering, and I can be sure the result will work without wasting time rendering shots that don't fit. This process took four months.
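For reference, here is a minimal sketch of that viewport (OpenGL) export step via Blender's Python API; the output path is a placeholder and the exact settings are assumptions, not the project's files:

```python
import bpy

# Playblast-style export: render the viewport as an AVI JPEG animatic.
scene = bpy.context.scene
scene.render.image_settings.file_format = 'AVI_JPEG'
scene.render.filepath = "//animatic/shot_010"   # placeholder output path
bpy.ops.render.opengl(animation=True)           # viewport render of the full frame range
```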
After the animatic was complete, I went back to the beginning of the video and added details within the camera's field of view. These objects aren't interesting individually, but they help establish the scale of the scene and the photorealistic look of the image: papers, rocks, a cell phone, imperfections, and so on. You have to pay attention to the polycount as well as to memory. Smart duplication of small objects is a very good way to add detail, but it can crash the computer if it's not done properly (linked duplicates made with Alt+D conserve memory, whereas full copies made with Shift+D do not).
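A small sketch of that memory point, using Blender's Python API (the object and counts are hypothetical):

```python
import bpy

# Linked duplicates (the Alt+D behaviour): every copy shares one mesh datablock,
# so scattering many small props barely increases memory use.
src = bpy.context.active_object
for i in range(50):
    dup = src.copy()                 # new object, but dup.data still points to src.data
    # dup.data = src.data.copy()     # uncommenting this reproduces Shift+D: a full, memory-hungry copy
    dup.location.x += (i + 1) * 2.0  # spread the duplicates out for illustration
    bpy.context.collection.objects.link(dup)
```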
How did you handle the lighting in such complex environments?
I activated Filmic color management for the widest dynamic range; that way I could turn up the light energy coming from outside without blowing out the image.
Every scene needs realistic lighting, and some of the biggest challenges were inside the warehouse.
One way to light the scene was to use a sun lamp (parallel light) entering through an array of small windows. The light bounces off the ground and reaches every corner of the room. The issue with indirect lighting is that Cycles doesn't like big interior spaces with small openings for the light; it generates an incredible amount of noise.
Clamping indirect lighting (i.e., limiting the energy that comes from indirect bounces) keeps the final render clean, but it's less physically accurate. To compensate, I added area lights that approximately recreate the global illumination.
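A hedged sketch of that lighting setup in Blender's Python API (2.8+ names; the energies, sizes, and positions are invented placeholders, not the project's values):

```python
import bpy

scene = bpy.context.scene

# Key light: a sun lamp angled through the window row.
sun_data = bpy.data.lights.new("KeySun", type='SUN')
sun_data.energy = 5.0                       # illustrative value
sun = bpy.data.objects.new("KeySun", sun_data)
scene.collection.objects.link(sun)
sun.rotation_euler = (0.8, 0.0, 2.3)

# Fill: a large, weak area light standing in for the clamped indirect bounce.
fill_data = bpy.data.lights.new("GI_Fill", type='AREA')
fill_data.energy = 50.0
fill_data.size = 10.0
fill = bpy.data.objects.new("GI_Fill", fill_data)
scene.collection.objects.link(fill)
fill.location = (0.0, 0.0, 6.0)
```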
What were the next steps?
As I said, I had access to a mapped FTP drive in Windows Explorer; I just had to drag and drop my Blender files. I found this way of uploading files more stable than going through the web browser, which showed errors during uploads because my internet connection was quite poor. Even so, I was able to upload and download my 196 3D shots, not to mention all the VFX (smoke, clouds). Smoke caches can be uploaded easily as long as the relative paths are preserved between files.
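As a sketch of how one might prepare a .blend before uploading it to a farm, keeping paths relative and textures packed as described above (an assumed workflow, not necessarily the exact one used here):

```python
import bpy

# Make external paths (textures, caches) relative so they still resolve on the render farm,
# pack image textures into the .blend, and save.
bpy.ops.file.make_paths_relative()
bpy.ops.file.pack_all()
bpy.ops.wm.save_mainfile()
```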
The last step is to use compositing software (After Effects or Nuke) to add the final color corrections and bring the renders to life. This is where the multiple passes are combined (volumetrics, Z pass, smoke elements, explosions). Most of the VFX are made outside of Blender, since Blender is a 3D application; video compositing in its node editor isn't the best way to post-process layers.
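For context, here is a small sketch of enabling a couple of those passes and writing multilayer OpenEXR sequences for the compositor (the pass choices and output path are placeholders):

```python
import bpy

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Extra passes for compositing.
view_layer.use_pass_z = True        # depth (Z) pass
view_layer.use_pass_mist = True     # handy for atmospheric grading

# Write everything into one multilayer EXR sequence.
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.image_settings.color_depth = '32'
scene.render.filepath = "//renders/shot_010_"   # placeholder output path
```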
What is your take on Blender after using it for this project?
The denoiser completely revolutionizes animation in Blender, allowing artists to render noise-free frames with fewer than 1,000 samples. On top of that, thanks to RenderStreet it's possible to handle ambitious projects with a complete set of tools and features that preserve the rendering workflow. It's regularly updated to the latest version of Blender, and the support is incredible.
However, some Blender tools still need to be improved. The first is the camera rig: it would be very useful to offer a two-node camera with a point of interest out of the box, instead of constantly juggling null objects and unintuitive constraints. The second is the smoke simulator. Even though it's really good, some situations force the artist to use more complex simulation software such as FumeFX (which is very expensive since it comes with 3ds Max, but that's the price of physically based fluid dynamics algorithms). Finally, the denoiser should be slightly improved to avoid artifacts around bright, sharp areas.
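For readers unfamiliar with the workaround being described, this is roughly what the constraint-based "point of interest" camera looks like in Blender's Python API (names and positions are made up for illustration):

```python
import bpy

# Camera aimed at an empty via a Track To constraint, standing in for a two-node camera.
cam_data = bpy.data.cameras.new("ShotCam")
cam = bpy.data.objects.new("ShotCam", cam_data)
target = bpy.data.objects.new("ShotCam_Target", None)   # an empty used as the point of interest
for obj in (cam, target):
    bpy.context.collection.objects.link(obj)

cam.location = (8.0, -8.0, 4.0)
con = cam.constraints.new(type='TRACK_TO')
con.target = target
con.track_axis = 'TRACK_NEGATIVE_Z'                     # cameras look down their local -Z axis
con.up_axis = 'UP_Y'
```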
The latest versions of Blender give many people free access to CGI, and the results promise a bright future for this magical software.
I would like to thank two talented people, Thibault Allenet and Victorien Michel, who were both involved in this project as sound designers. Victorien created the sound effects and synchronized the animatic shots with the music. Thibault refined all the audio tracks to create complete sound immersion; he added his magic to make us hear what we see. This short film wouldn't have been possible without them.