I'm new to both gamedev and Blender, and there's something I can't shake:

In Blender, a single render (even with the more advanced Cycles renderer) can take up to 45 seconds on my machine. But games can obviously have amazing graphics, which means rendering must be happening continuously, many times per second, in real time.

So I'm wondering what the disconnect is between how "slow" Blender's renders seem to be and how game engines achieve real-time (or near-real-time) rendering.

Real-time rendering is a huge topic in itself; there are a lot of books written about it (including "Real-Time Rendering"). And renderers like Cycles work completely differently from the 3D renderers in game engines - you can't really compare them – UnholySheep 22 hours ago
@UnholySheep Of course you can compare them. How else would anyone explain the difference, to answer the question? – user985366 12 hours ago
Blender has to do everything; a video game's engine only has to do a limited set of things that were selected because they could be made sufficiently performant. If some rendering task is hard, Blender tries to optimize it as best it can while a video game engine simply omits it. – Chemical Engineer 11 hours ago
Is it possible that Blender does not use the GPU in your system? Besides Blender not making many approximations, the difference between CPU and GPU performance can be devastating for certain tasks. – Martin Ueding 22 mins ago

Real-time rendering, even modern real-time rendering, is a grab-bag of tricks, shortcuts, hacks and approximations.

Take shadows for example.

We still don't have a completely accurate and robust mechanism for rendering real-time shadows from an arbitrary number of lights and arbitrarily complex objects. We do have multiple variants on shadow mapping techniques, but they all suffer from the well-known problems with shadow maps, and even the "fixes" for these are really just a collection of work-arounds and trade-offs (as a rule of thumb, if you see the terms "depth bias" or "polygon offset" in anything, it's not a robust technique).
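To make the "depth bias" remark concrete, here is a minimal sketch (in Python, with illustrative names and a made-up bias value, not code from any particular engine) of the classic shadow-map comparison those work-arounds revolve around:

    # Shadow-map depth test. DEPTH_BIAS is the fudge factor mentioned
    # above: too small and surfaces shadow themselves ("shadow acne"),
    # too large and shadows detach from their casters ("peter-panning").
    # No single value fixes both, which is why the technique isn't robust.
    DEPTH_BIAS = 0.005  # hypothetical value; tuned per scene in practice

    def is_lit(fragment_depth_from_light, shadow_map_depth):
        """True if the fragment is lit, False if it is in shadow."""
        return fragment_depth_from_light - DEPTH_BIAS <= shadow_map_depth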

Another example of a technique used by real-time renderers is precalculation. If something (e.g. lighting) is too slow to calculate in real-time (and this can depend on the lighting system you use), we can pre-calculate it and store the result, then use the pre-calculated data in real-time for a performance boost, which often comes at the expense of dynamic effects. This is a straight-up memory vs compute tradeoff: memory is often cheap and plentiful, compute is often not, so we burn the extra memory in exchange for a saving on compute.
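Here is a toy Python sketch of that tradeoff (the "expensive" function is a deliberate stand-in, not real lighting code):

    import math

    def expensive_lighting(x):
        # Stand-in for a calculation too slow to run per frame,
        # e.g. a global-illumination solve.
        return sum(math.sin(x * k) / k for k in range(1, 1000))

    # Offline: pay the cost once and store the results (the memory side).
    RESOLUTION = 4096
    baked = [expensive_lighting(i / RESOLUTION) for i in range(RESOLUTION)]

    def lit_value(x):
        # At runtime: a cheap table lookup (the compute saving). The price
        # is dynamism: if the lights move, the baked table is stale.
        return baked[min(int(x * RESOLUTION), RESOLUTION - 1)]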

Offline renderers and modelling tools, on the other hand, tend to focus more on correctness and quality. Also, because they're working with dynamically changing geometry (such as a model as you're building it), they must often recalculate things, whereas a real-time renderer works with a final version that does not have this requirement.

Another point to mention is that the amount of computation used to generate all the data a game will need to render views of an area quickly may be orders of magnitude greater than the amount of computation that would be required to render one view. If rendering views of an area would take one second without any precalculation, but some precalculated data could cut that to 1/100 second, spending 20 minutes on the precalculations could be useful if views will be needed in a real-time game, but if one just wants a ten-second 24fps movie it would have been much faster to spend four minutes... – supercat 18 hours ago
...generating the 240 required views at a rate of one per second. – supercat 18 hours ago

The current answer has done a very good job of explaining the general issues involved, but I feel it misses an important technical detail: Blender's "Cycles" render engine is a different type of engine from the one most games use.

Typically games are rendered by iterating through all the polygons in a scene and drawing them individually. This is done by 'projecting' the polygon coordinates through a virtual camera in order to produce a flat image. The reason this technique is used for games is that modern hardware is designed around it, and it can be done in real time to relatively high levels of detail. Out of interest, this is also the technique employed by Blender's previous render engine, before the Blender Foundation dropped the old engine in favour of Cycles.

[Image: polygon rendering]
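As a rough sketch of that projection step (real engines use 4x4 matrices and homogeneous coordinates; this Python snippet keeps only the essential perspective divide):

    def project(point, focal_length=1.0):
        """Project a camera-space (x, y, z) point onto a 2D image plane."""
        x, y, z = point
        # Perspective divide: points farther from the camera (larger z)
        # land nearer the image centre, so distant objects shrink.
        return (focal_length * x / z, focal_length * y / z)

    # Each triangle's vertices are projected this way; the rasterizer
    # then fills in the pixels between them.
    triangle = [(0.0, 1.0, 5.0), (-1.0, -1.0, 5.0), (1.0, -1.0, 4.0)]
    print([project(v) for v in triangle])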

Cycles, on the other hand, is what is known as a raytracing engine. Instead of looking at the polygons and rendering them individually, it casts virtual rays of light out into the scene (one for every pixel in the final image), bounces each ray off several surfaces, and then uses that data to decide what colour the pixel should be. Raytracing is a very computationally expensive technique, which makes it impractical for real-time rendering, but it is used for rendering images and videos because it provides extra levels of detail and realism.

[Image: raytraced rendering]
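To show the shape of the ray-per-pixel idea, here is a deliberately stripped-down Python sketch (one sphere, no lighting or bounces; a real path tracer like Cycles also bounces each ray around the scene, which is where the cost explodes):

    def hit_sphere(origin, direction, center, radius):
        # Standard quadratic ray-sphere intersection test.
        oc = [o - c for o, c in zip(origin, center)]
        a = sum(d * d for d in direction)
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        return b * b - 4.0 * a * c >= 0.0

    WIDTH, HEIGHT = 32, 12
    for j in range(HEIGHT):
        row = ""
        for i in range(WIDTH):
            # One ray per pixel, shot through an image plane at z = 1.
            u = (i + 0.5) / WIDTH * 2.0 - 1.0
            v = 1.0 - (j + 0.5) / HEIGHT * 2.0
            row += "#" if hit_sphere((0, 0, 0), (u, v, 1.0), (0, 0, 3), 0.8) else "."
        print(row)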


Please note that my brief descriptions of raytracing and polygon rendering are highly stripped down for the sake of brevity. If you wish to know more about the techniques I recommend that you seek out an in-depth tutorial or book as I suspect there are a great many people who have written better explanations than I could muster.

Also note that there are a variety of techniques involved in 3D rendering and some games do actually use variations of raytracing for certain purposes.

+1 for a very good point; I deliberately didn't go down the rabbit hole of raytracing vs rasterization, so it's great to have this as a supplemental. – Le Comte du Merde-fou 15 hours ago
This answer gets more to the heart of the difference. Game engines perform rasterization (forward or deferred) while offline renderers (like Blender, Renderman, etc.) perform ray-tracing. Two completely different approaches to drawing an image. – ssell 15 hours ago
@LeComteduMerde-fou As gamedev is aimed at game developers I felt a supplemental technical explanation would be of benefit to the more technically inclined reader. – Pharap 15 hours ago

Blender also has a game engine built in. Comparing the heavy-duty Cycles (raytracing) rendering to game rendering is like comparing apples to oranges.

Welcome to GameDev.SE! While we respect your opinion, we do not like opinion-based answers. Perhaps you intended to post this as a comment? If so, please wait until you have the minimum required reputation. We do not like commentary as answers, either. – Gnemlock 11 hours ago
You have enough rep to post comments... – Dmitry Kudriavtsev 2 hours ago
