How Do Games Render Their Scenes? Techniques Used In Game Rendering
Games can already look incredibly realistic, but how, over the years, have graphics and games gotten better and better?
Technologies Used For Game Rendering
We've gone from simple wireframe shapes to 8-bit 2D graphics to detailed 2D graphics to today's incredibly realistic 3D graphics. Imagine telling someone back in 1998 just how much further we would get.
Just look at the incredibly realistic graphics our machines can render nowadays. It's truly baffling how far we've come. Let's take a look back at how we got to where we are today.
Bits And Bytes
In the beginning there were just bits and bytes. There still are, but back then the number of bits and bytes available was very limited compared to what we have today. This meant that developers couldn't draw many colors on the screen at once without using up all the available resources. Without going too much into detail, the earliest graphics could only show black and white pixels, because each pixel used a single bit: on or off. Then, as computers got more advanced, games could start using 8-bit graphics, meaning they could use 8 bits to describe each color. Let's take a look at how these 8 bits are used to define colors.
How Did Developers Use 8 Bits To Define Colors?
Early graphics generally used 3 bits for red, 3 bits for green and only 2 bits for blue. This is because our eyes are less sensitive to blue light, which is why most systems and applications gave blue the fewest bits. With this layout we can define eight shades of red, eight shades of green and four shades of blue, and all the combinations between them give developers a total of 256 different colors to work with.
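The 3-3-2 layout described above can be sketched with a little bit-packing code. This is a minimal illustration, not the exact scheme of any particular system; real palettes varied between machines.

```python
# Pack an (r, g, b) color into a single 8-bit value using the
# 3-3-2 layout described above: 3 bits red, 3 bits green, 2 bits blue.

def pack_rgb332(r, g, b):
    """r and g range over 0-7 (3 bits each); b over 0-3 (2 bits)."""
    return (r << 5) | (g << 2) | b

def unpack_rgb332(byte):
    """Recover the separate channel values from one packed byte."""
    return (byte >> 5) & 0b111, (byte >> 2) & 0b111, byte & 0b11

color = pack_rgb332(7, 0, 3)    # brightest red plus brightest blue
print(color)                    # 227
print(unpack_rgb332(color))     # (7, 0, 3)
```

Counting every combination, 8 × 8 × 4 gives exactly the 256 colors mentioned above.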
This is the nostalgic 8-bit art style you know from early Nintendo games. Sometimes, for games with a lot of blue shades, developers chose to use more bits for the blue channel and fewer for green or red. Let's look more closely at how exactly old graphics worked and what clever tricks developers used to work around their limitations.
Skip forward a few years and we've gone from 8-bit to 16-bit games. Computers could hold more data, which let graphics improve. And as computers became more powerful, 3D games became more and more common.
How Did The Developers Of Wolfenstein 3D Create A 3D Environment?
When we talk about early 3D games, one example immediately comes to mind: Wolfenstein 3D, released in May 1992. At the time, video cards didn't yet have support for rendering 3D graphics, so how did the developers of Wolfenstein 3D create a 3D environment?
To create the 3D effect it actually used a form of ray tracing, a technique we now associate with the newest video cards. Was Wolfenstein that far ahead of its time? Did its software possess the power to time travel?
The answer is no: the ray tracing used in Wolfenstein was much simpler, because it was done in 2D. The way it worked was kind of like a top-down shooter. It used a 2D map for the environment, and for every frame it traced where the walls were with rays shooting out from the player's position. The length of each ray determined how far away a wall was, and thus how tall that wall should be drawn on screen.
It was actually a pretty common technique in game engines at the time for creating 3D environments, commonly referred to as ray casting. This made for some pretty convincing 3D effects on hardware that didn't even support 3D rendering.
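The ray-casting idea can be sketched in a few lines: march a ray across a tile map until it hits a wall, then draw the wall taller the closer it is. The map layout, step size and screen height below are illustrative assumptions, not values from any real engine.

```python
import math

# Tile map: 1 = wall, 0 = empty. The map is enclosed by walls,
# so a ray always terminates.
WORLD = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def cast_ray(px, py, angle, step=0.01, max_dist=20.0):
    """March a ray from (px, py) in small steps until it enters
    a wall tile; return the distance travelled."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if WORLD[int(y)][int(x)] == 1:
            return dist
        dist += step
    return max_dist

def wall_height(distance, screen_height=200):
    """Closer walls are drawn taller: height is inversely
    proportional to the ray's length."""
    return min(screen_height, int(screen_height / max(distance, 1e-6)))

# A ray cast straight to the right from inside the room hits the
# east wall roughly 1.5 units away.
d = cast_ray(2.5, 1.5, 0.0)
print(round(d, 2))
print(wall_height(d))
```

A real engine casts one such ray per screen column and draws a vertical slice of wall for each, which is why the whole scene can be rendered from a flat 2D map.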
Game developers kept getting cleverer, and the next iteration of 3D rendering was Doom. Doom allowed for much more complex rendering than Wolfenstein: you could see stairs, elevated rooms, and enemies above and below you, all of which was impossible in Wolfenstein. So what did Doom do to allow these more complex environments?
Which Technique Did Doom Use To Create And Render Its 3D World?
Well, it used a technique called Binary Space Partitioning (BSP). This technique starts by slicing the world into manageable pieces: we keep slicing the remaining space in half until we have slices small enough for the computer to handle. All of this information is stored in what we call a BSP tree. Once this preparation is done, we can render the world. To do this we check which node the player is in and then render the nodes around them. This is where the BSP tree comes in. Because the tree goes from larger to smaller spaces, we can quickly find which part of the world the player is in by repeatedly asking: which of these nodes contains the player? We start at the top; if we find the player is in the right node, we go one level deeper and ask the same question, perhaps finding the player in the left node this time. We continue until we reach the very bottom of the BSP tree, at which point we know exactly where the player is in the world. With this information we can start rendering the room. The way Doom renders its walls is different from the ray-casting technique.
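The descent through the tree can be sketched as follows. This is a toy BSP with axis-aligned splits and named leaves, purely to illustrate the repeated "which side is the player on?" question; Doom's actual tree stores splitting lines and wall segments.

```python
# Toy BSP tree: each internal node splits space with an
# axis-aligned line; leaves are plain labels for the final "rooms".

class Node:
    def __init__(self, axis, value, front, back):
        self.axis, self.value = axis, value   # axis: 0 = x, 1 = y
        self.front, self.back = front, back

def find_leaf(node, player):
    """Walk from the root, asking at each node which side of the
    split the player is on, until we reach a leaf."""
    while isinstance(node, Node):
        on_front_side = player[node.axis] >= node.value
        node = node.front if on_front_side else node.back
    return node

# Split the world at x = 5, then split the right half at y = 5.
tree = Node(0, 5.0,
            front=Node(1, 5.0, front="room NE", back="room SE"),
            back="room W")

print(find_leaf(tree, (2.0, 3.0)))   # room W
print(find_leaf(tree, (8.0, 7.0)))   # room NE
print(find_leaf(tree, (8.0, 1.0)))   # room SE
```

Because each question halves the remaining space, locating the player takes only as many steps as the tree is deep, no matter how large the level is.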
As we've just learnt, this time the world is divided into many small segments. When rendering, the engine determines the perspective of each wall in a segment by measuring the wall's distance from the player. With this, each wall in the segment knows at what angle it needs to be drawn to appear in the correct perspective. We draw all the walls of the closest segment first, then move to the next closest segment, and so on, again and again, until the screen is completely filled. After the walls are drawn, the floors and ceilings follow, and finally the game sprites. Once there are no pixels left to fill, we can stop: the render is complete.
Note how we draw the rooms from near to far: this ensures that we never draw any obscured areas, saving a lot of computation power.
By rendering the world this way, we are no longer bound to the flat 2D environment we saw in Wolfenstein, because we don't really care what our nodes look like. We can have things like stairs, platforms and elevated rooms, all without using any 3D rendering hardware.
3D Graphics Cards
Finally, something was invented that changed the gaming world forever, something that made the impossible possible, that could make all your virtual dreams come true, something that could render triangles, lots and lots of triangles. We are talking, of course, about the first 3D graphics cards. For the first time it was possible to display true 3D graphics. Surprisingly, the way 3D rendering was done back then is very similar to what we are used to today.
How Do Developers Create 3D Models For Games With A 3D Graphics Card?
As we mentioned before, 3D graphics work with triangles. Everything you see in 3D graphics is secretly just a bunch of triangles cobbled together to form complex 3D models. But how do these triangles turn into a realistic image?
Shading And Flat Shading
To make triangles produce realistic graphics, the basic principle is that for each individual triangle you can calculate how bright it should be based on its angle towards a light source. The more directly it faces the light, the brighter it will be. Putting all of this together and having each triangle change its brightness based on its angle towards the light gives 3D models their sense of depth; this is called shading. In its simplest form, you change the brightness of the entire surface of the triangle based on its angle towards the light. This is called flat shading and can be seen in very early 3D games.
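The "angle towards the light" rule above is Lambert's cosine law: brightness is the dot product of the triangle's surface normal and the direction to the light. A minimal sketch, using plain tuples instead of a real engine's vector types:

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def flat_brightness(normal, light_dir):
    """Flat shading: one brightness value for the whole triangle,
    the dot product of the unit normal and the unit direction to
    the light, clamped to 0 for surfaces facing away."""
    n, l = normalize(normal), normalize(light_dir)
    dot = sum(a * b for a, b in zip(n, l))
    return max(0.0, dot)

# A triangle facing the light head-on is fully lit...
print(flat_brightness((0, 0, 1), (0, 0, 1)))   # 1.0
# ...one at 90 degrees, or facing away, gets no direct light.
print(flat_brightness((1, 0, 0), (0, 0, 1)))   # 0.0
```

Applying this single value to the whole triangle gives the faceted look of early 3D games; smooth shading variants instead interpolate across the triangle's pixels.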
We can also perform more complex calculations on our triangles, where all the pixels inside a triangle are interpolated in such a way that the surface looks smooth and much more realistic. This algorithm is called Phong shading, and it is an example of one of the earliest ways we could render realistic-looking 3D models.
Still, we can do more, because we don't have to stick to realism. Developers can customize their shaders to create all sorts of crazy effects. One well-known example, which achieves a much more stylized look, is cel shading. In cel shading you start with regular smooth shading, such as Phong shading. Then, instead of using a smooth gradient containing all the levels of brightness, we chop the brightness levels up into a few bands, or cels. Only when the brightness of a pixel falls into another cel does the color of that pixel change, and this way we get a nice cartoony effect.
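The band-chopping step is just quantization of the smooth brightness value. A minimal sketch, where the choice of three bands is arbitrary:

```python
def cel_shade(brightness, bands=3):
    """Snap a smooth brightness in [0, 1] to one of `bands`
    discrete levels, producing flat cartoon-style lighting."""
    band = min(int(brightness * bands), bands - 1)
    return band / (bands - 1)

# Smooth values collapse onto just a few flat levels.
for b in (0.1, 0.4, 0.9):
    print(cel_shade(b))   # 0.0, then 0.5, then 1.0
```

In a real shader this function would run per pixel on the output of the smooth shading pass, so the gradients across a surface collapse into a few sharp-edged regions of constant color.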
So far we've only focused on the surfaces of objects. We are still missing a very important part that gives a scene more depth.
Shadows And Rasterization
In rasterization, shadows are generally done using shadow mapping. The way this works is that the scene is essentially rendered from the perspective of the light. This gives us a map of which surfaces are seen by the light and which aren't; this map is called a shadow map. To get shadows into the scene, we then look points up in the shadow map we just created. For any point, we can look up its coordinates in the shadow map and ask: is there a point closer to the light at those same coordinates? If yes, then something is blocking the light, which tells us that the point must be in shadow.
Of course this doesn't cover all the techniques involved in rendering shadows. For instance, there are ways to make shadows softer, to take multiple lights into account, or to account for ambient light bouncing off the environment. There is still much left to discover, but this is how the basics of rendering shadows work in traditional rasterization.
Now that we have a lighting setup, things are already looking pretty good, but we still only have flat-colored, textureless surfaces, so next we have to apply a texture to our objects. We start with an image we want to use as our texture. Then we need to determine which triangles in our object will use what part of that texture. For this we generate a UV mapping: you can imagine this as essentially wrapping the texture around the object and remembering where the triangles overlap the image.
How To Create A UV Mapping?
There are multiple ways to create this UV map, but the principle is the same: we want to know which triangles should use what part of our texture. Once we have created the UV map, we can simply use it to read the colors a triangle should have on our 3D model. So now, instead of showing a solid color, our triangles use the UV map to know what part of the texture they should display. In rasterization these UV maps can also be used to fake reflections. In principle this works by taking a 360-degree picture of the environment and then simply baking it into the texture itself. This gives us a very fast and efficient way to show reflections.
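The lookup step, turning a (u, v) coordinate into a texture color, can be sketched as follows. The tiny 2×2 "texture" of color names and the nearest-neighbour sampling are illustrative assumptions; real textures are image bitmaps and GPUs usually interpolate between texels.

```python
# A toy 2x2 texture: each entry stands in for a pixel color.
TEXTURE = [
    ["red",   "blue"],
    ["green", "white"],
]

def sample(u, v):
    """Nearest-neighbour texture lookup for u, v in [0, 1]:
    scale the UV coordinate to pixel indices and read the texel."""
    h, w = len(TEXTURE), len(TEXTURE[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return TEXTURE[y][x]

# Each triangle vertex carries its own (u, v); pixels inside the
# triangle interpolate between the vertices' UVs and then sample.
print(sample(0.1, 0.1))   # red
print(sample(0.9, 0.9))   # white
```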
Finally, we need to display the triangles we created on our screen. Our computer looks at the triangles in the scene and converts them into pixels. However, our screen only has so many pixels to work with. This means that straight lines and edges can become blocky and pixelated. To combat this, graphics cards use a technique called anti-aliasing, which adds some extra faded pixels around straight lines and edges to make them appear smoother.
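One simple way to get those faded in-between pixels is supersampling: render at a higher resolution, then average blocks of samples down to one screen pixel. A minimal sketch on a grid of brightness values (the example edge pattern is made up):

```python
def downsample(samples, factor=2):
    """Average factor x factor blocks of a high-resolution grid of
    brightness values into one lower-resolution pixel each."""
    h, w = len(samples) // factor, len(samples[0]) // factor
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            block = [samples[y * factor + dy][x * factor + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A hard diagonal edge rendered at 2x resolution...
hi_res = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
]
# ...averages down so edge pixels become intermediate grays.
print(downsample(hi_res))
```

Supersampling is the brute-force approach; the faster methods used in practice (such as multisampling or post-process filters) aim for the same softened-edge result with far less work.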
Once again, there are many different ways of doing anti-aliasing, which vary in quality and performance, but they all have the same goal: make those jagged edges appear smooth. And that covers the basics of rasterization.
These are mostly just simple examples to explain the fundamental principles. In reality there are many more calculations and shading techniques going on to make objects appear even more realistic, but the basic principle remains the same: map all the triangles of the 3D models to pixels on the screen and then assign them colors using different shading techniques. This is the basic process used in pretty much all games.
Today it's fast, it looks good, and it has proven itself to be a reliable way to render 3D graphics. Over the years, developers have greatly improved and expanded upon these fundamental techniques to achieve the level of realism we see in games today. Yet there is still much more to explore in the world of 3D graphics: rasterization is reaching its limits, and game developers are looking more and more towards another technique that can make their games look even more realistic.