Hi everyone, before I dig into today’s blog post, I wanted to invite you to tune in to tonight’s Extra Life livestream at 5:00 pm PT, where I will personally guide fans through a live demo of our studio’s upcoming open-world game — The Tomorrow Children. We’re really excited to participate in the Extra Life program and hope as many of you as possible can stop by to check out the first look at gameplay footage from The Tomorrow Children!
Go here at 5:00 pm PT tonight to watch live.
But for right now, I’m going to talk a little about the tech behind The Tomorrow Children. Hopefully you’ve all seen the trailer and are wondering how on earth we managed to get that crazy surreal look.
When the project started we had a meeting with Mark Cerny and he said “I want Q-Games to do something that’s a little… outside the box.” So I took that to heart and decided to use the PS4’s awesome compute power to drive three things: Cinematography, Lighting, and Geometry.
Cinematography
So I immediately drove the technology towards “cinematography,” which I think is a much more visually stunning style compared to CG that strives to be realistic. Realistic graphics give us directly what we can see, but cinematography also gives us what we imagine we’re seeing. So, for example, if you’re looking at a green field on a cloudy day, the raw photons hitting your eyes are actually a bit of a dull green, but the brain’s imagination (and this is different for every person) spruces up the image you’re seeing, making it more stimulating and exciting.
Pretty much every movie you have ever seen has gone through this kind of color grading process, the most recent fad of which is a process called “orange-teal” that boosts orange in the highlights and blue in the shadows plus a number of other tweaks. Recent TV series such as Breaking Bad, Utopia, and True Detective rely on this heavily to set up atmosphere and scene tone.
So I thought to myself, why not do this for games too? But not via a few simple tone-mapping parameters; let’s go the whole hog and create a professional cinematic color-grading process. And we have two major advantages over cameras and their sensors. The first is that a modern 3D pipeline generates a larger dynamic range of color, which gives us more freedom, and the second is that we have ‘Z’ information for every pixel, which lets us introduce Z as a parameter into the color-grading process. This lets us do things such as bring up black levels in the distance, or even swizzle colors around a little based on distance from the camera.
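To make that a little more concrete, here’s a tiny C++ sketch of what depth-aware grading could look like. It’s purely illustrative; the simple Reinhard tone map, the split-toning weights, and the linear depth-based black lift are my own made-up choices, not our actual grading pipeline, which is far more involved.

```cpp
// color_grade_sketch.cpp -- an illustrative sketch of depth-aware grading;
// all constants and names here are invented for the example.
#include <algorithm>
#include <cstdio>

struct Color { float r, g, b; };

// Simple Reinhard tone map, just to bring HDR values into [0,1].
static float tonemap(float c) { return c / (1.0f + c); }

// Hypothetical grade: orange-teal split toning plus a depth-based lift of the
// black level, so distant shadows read as hazy rather than pure black.
Color gradePixel(Color hdr, float depth01 /* 0 = near, 1 = far plane */)
{
    Color c = { tonemap(hdr.r), tonemap(hdr.g), tonemap(hdr.b) };

    // Luminance decides whether a pixel counts as highlight or shadow.
    float luma = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;

    // Push highlights towards orange, shadows towards teal (illustrative weights).
    float hi = luma, lo = 1.0f - luma;
    c.r += 0.05f * hi - 0.02f * lo;
    c.g += 0.02f * hi + 0.01f * lo;
    c.b -= 0.03f * hi - 0.05f * lo;

    // Depth-based black lift: the farther away, the higher the minimum level.
    float lift = 0.08f * depth01;
    c.r = lift + (1.0f - lift) * c.r;
    c.g = lift + (1.0f - lift) * c.g;
    c.b = lift + (1.0f - lift) * c.b;

    c.r = std::clamp(c.r, 0.0f, 1.0f);
    c.g = std::clamp(c.g, 0.0f, 1.0f);
    c.b = std::clamp(c.b, 0.0f, 1.0f);
    return c;
}

int main()
{
    Color nearPixel = gradePixel({2.0f, 1.5f, 0.4f}, 0.05f);
    Color farPixel  = gradePixel({0.1f, 0.1f, 0.1f}, 0.95f);
    std::printf("near: %.2f %.2f %.2f\n", nearPixel.r, nearPixel.g, nearPixel.b);
    std::printf("far : %.2f %.2f %.2f\n", farPixel.r, farPixel.g, farPixel.b);
}
```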
Lighting
Okay, with that cinematic feel to everything, we started investigating how to get a “pre-rendered” look to our 3D in realtime, and we decided we were going to have to go all out and do something that no one else is doing. So we researched and invented something called “cascaded voxel cone-ray tracing.” The concept is a little complex, but it involves calculating and storing light and its direction in ever-increasing cascades of data around the player as she moves.
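If you’re wondering what I mean by “cascades of data,” here’s a rough C++ sketch of the general idea: nested voxel volumes that each cover twice the extent of the previous one at the same resolution, re-centred on the player as she moves. The sizes and resolution below are made-up numbers for illustration, not our actual settings.

```cpp
// cascades_sketch.cpp -- illustrative layout for cascaded voxel data.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct VoxelCascade {
    static const int kRes = 32;       // voxels per axis (illustrative)
    float worldSize;                  // edge length covered, in metres
    Vec3  centre;                     // kept centred on the player
    std::vector<float> radiance;      // kRes^3 cells of stored bounced light

    explicit VoxelCascade(float size)
        : worldSize(size), centre{0.0f, 0.0f, 0.0f},
          radiance(kRes * kRes * kRes, 0.0f) {}

    float voxelSize() const { return worldSize / kRes; }
};

int main()
{
    // Fine detail close to the player, coarser and coarser farther out.
    std::vector<VoxelCascade> cascades;
    for (float size = 8.0f; size <= 64.0f; size *= 2.0f)
        cascades.emplace_back(size);

    Vec3 player = {10.0f, 0.0f, 3.0f};
    for (VoxelCascade& c : cascades) {
        c.centre = player;            // re-centre each cascade as she moves
        std::printf("cascade: %5.1fm across, %.2fm voxels\n",
                    c.worldSize, c.voxelSize());
    }
}
```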
Because we have volumetric data (the voxel part), we can bounce light around fairly cheaply, and any one pixel on the screen has up to three bounces of light hitting it, from all directions too. Now, by the third bounce the light is quite diminished in power, but it makes that subtle difference that tricks the eye into thinking it is looking at something with true presence.
Interestingly, most Pixar-style CG movies you have seen use only one bounce, and although they beat us in detail because they can spend 30 minutes or more rendering one frame, our lighting is a lot more subtle and effective. For example, we can move a big red object around in real time and watch the sunlight reflecting off of it bounce onto the surrounding objects, and then watch that light bounce again onto other nearby objects that wouldn’t normally be lit (indirect lighting).
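Here’s a very simplified, CPU-side sketch of the cone-marching part of the idea. The real thing runs on the GPU against our cascades and is considerably more involved; the sample function and constants below are just stand-ins, but they show the core trick: march along a cone, let the sampling footprint widen with distance, and accumulate radiance until the cone is blocked.

```cpp
// cone_trace_sketch.cpp -- a toy illustration of gathering indirect light by
// marching a cone through pre-voxelised radiance; details are assumptions.
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Stand-in for sampling the voxel cascades: returns stored radiance and
// opacity for a position at a given sampling radius (larger = coarser data).
struct VoxelSample { Vec3 radiance; float opacity; };
VoxelSample sampleCascades(Vec3 /*pos*/, float radius)
{
    // Purely illustrative: pretend coarser samples are simply dimmer.
    float fade = 1.0f / (1.0f + radius);
    return { {0.8f * fade, 0.6f * fade, 0.5f * fade}, 0.1f };
}

// March one cone from a surface point: the sampling radius grows with
// distance, which is what lets a handful of cones approximate a hemisphere.
Vec3 traceCone(Vec3 origin, Vec3 dir, float apertureTan, float maxDist)
{
    Vec3  light = {0.0f, 0.0f, 0.0f};
    float transmittance = 1.0f;          // how much light can still get through
    float t = 0.25f;                     // start a little off the surface
    while (t < maxDist && transmittance > 0.01f) {
        float radius = apertureTan * t;  // cone footprint widens as we go
        VoxelSample s = sampleCascades(add(origin, scale(dir, t)), radius);
        light = add(light, scale(s.radiance, transmittance * s.opacity));
        transmittance *= (1.0f - s.opacity);
        t += radius;                     // take larger steps as the cone widens
    }
    return light;
}

int main()
{
    Vec3 indirect = traceCone({0, 0, 0}, {0, 1, 0}, 0.577f, 32.0f);
    std::printf("indirect light: %.3f %.3f %.3f\n",
                indirect.x, indirect.y, indirect.z);
}
```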
Geometry
Finally, we needed some other new tech to build the game around, so we started looking into techniques for deformable landscapes. We wanted something you could dig and mine, or create shapes in, but we didn’t want anything too strongly grid-based; we wanted it to feel more real. So we went with something called “layered depth cubes,” which is a way to represent the world without using polygons. Instead, it’s represented as volumes, which are then converted to polygons as needed (for example, if the player goes near them and they need to be actually drawn on screen).
The benefit of not using polygons is that the data is far easier to manipulate: we can do boolean operations (addition/subtraction) on it to cut out holes or add details, and the whole structure remains solid and intact.
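To give a rough feel for those boolean edits, here’s a toy C++ example that “digs” a sphere out of a small density volume. It isn’t the actual layered depth cubes format (the names and layout below are purely illustrative), but the principle of editing volumes rather than polygons is the same.

```cpp
// volume_csg_sketch.cpp -- illustrative boolean edit on a volumetric terrain.
#include <cstdio>
#include <vector>

struct Volume {
    int res;
    std::vector<float> density;   // > 0 means solid, <= 0 means empty

    explicit Volume(int r) : res(r), density(r * r * r, 1.0f) {} // start solid
    float& at(int x, int y, int z) { return density[(z * res + y) * res + x]; }
};

// Subtract a sphere from the volume (e.g. the player digging a hole).
void subtractSphere(Volume& v, float cx, float cy, float cz, float radius)
{
    for (int z = 0; z < v.res; ++z)
        for (int y = 0; y < v.res; ++y)
            for (int x = 0; x < v.res; ++x) {
                float dx = x - cx, dy = y - cy, dz = z - cz;
                if (dx * dx + dy * dy + dz * dz < radius * radius)
                    v.at(x, y, z) = 0.0f;     // carve out the material
            }
}

int main()
{
    Volume terrain(16);
    subtractSphere(terrain, 8.0f, 8.0f, 8.0f, 4.0f);

    int solid = 0;
    for (float d : terrain.density) solid += (d > 0.0f);
    std::printf("solid voxels after digging: %d of %zu\n",
                solid, terrain.density.size());
    // A meshing pass (e.g. marching cubes style) would convert the edited
    // volume back to polygons only where it actually needs to be drawn.
}
```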
The added advantage, of course, is that this all ties in with the lighting above, giving it quick, easy-to-access data structures to bounce its light through!
These are just three areas of technology we’ve created for The Tomorrow Children, and without the amazing power under the hood of the PS4, none of them would be possible. The future of 3D graphics is here and it is beautiful!