Multipass Rendering in Leadwerks 5 beta


Josh


Current-generation graphics hardware supports at most a 32-bit floating-point depth buffer, and that isn't adequate for large-scale rendering: there isn't enough precision to make objects appear in the correct order and prevent z-fighting.

[Image: z-fighting artifacts]

After trying out a few different approaches, I found that the best way to support large-scale rendering is to allow the user to create several cameras. The first camera might have a range of 0.1-1000 meters; the second would start where the first one left off, with a depth range of 1000-10,000 meters. Because it is the ratio of the near to far distances that determines precision, not the absolute distances, the numbers can get very big very fast. A third camera could be added with a range out to 100,000 kilometers!
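The claim that the near-to-far ratio, not absolute distance, governs precision can be checked with a small standalone sketch of the standard non-linear depth mapping (this is generic projection math, not engine code):

```cpp
// Normalized depth for a standard perspective projection:
// 0 at the near plane, 1 at the far plane, strongly non-linear in between.
float DepthValue(float z, float nearplane, float farplane)
{
    return (1.0f / z - 1.0f / nearplane) / (1.0f / farplane - 1.0f / nearplane);
}
```

For two surfaces one meter apart at 5 km, a single 0.1-10,000 m range collapses both to the same 32-bit float (z-fighting), while a 1,000-10,000 m far pass, a mere 1:10 ratio, keeps them distinct.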

The trick is to set the new Camera::SetClearMode() command to make it so only the furthest-range camera clears the color buffer. Additional cameras clear the depth buffer and then render on top of the previous draw. You can use the new Camera::SetOrder() command to ensure that they are drawn in the order you want.

auto camera1 = CreateCamera(world);
camera1->SetRange(0.1,1000);
camera1->SetClearMode(CLEAR_DEPTH);
camera1->SetOrder(3);//nearest range, drawn last, on top

auto camera2 = CreateCamera(world);
camera2->SetRange(1000,10000);
camera2->SetClearMode(CLEAR_DEPTH);
camera2->SetOrder(2);

auto camera3 = CreateCamera(world);
camera3->SetRange(10000,100000000);
camera3->SetClearMode(CLEAR_COLOR | CLEAR_DEPTH);
camera3->SetOrder(1);//furthest range, drawn first, clears the color buffer

Using this technique I was able to render the Earth, sun, and moon to-scale. The three objects are actually sized correctly, at the correct distance. You can see that from Earth orbit the sun and moon appear roughly the same size. The sun is much bigger, but also much further away, so this is exactly what we would expect.

[Image: the Earth, sun, and moon rendered to scale]

You can also use these features to render several cameras in one pass to show different views. For example, we can create a rear-view mirror easily with a second camera:

auto mirrorcam = CreateCamera(world);
mirrorcam->SetParent(maincamera);
mirrorcam->SetRotation(0,180,0);
mirrorcam->SetClearMode(CLEAR_COLOR | CLEAR_DEPTH);

//Set the camera viewport to only render to a small rectangle at the top of the screen:
mirrorcam->SetViewport(framebuffer->GetSize().x/2-200,10,400,50);

This creates a "picture-in-picture" effect like what is shown in the image below:

[Image: rear-view mirror rendered picture-in-picture]

Want to render some 3D HUD elements on top of your scene? This can be done with an orthographic camera:

auto uicam = CreateCamera(world);
uicam->SetClearMode(CLEAR_DEPTH);
uicam->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);

This will make 3D elements appear on top of your scene without clearing the previous render result. You would probably want to move the UI camera far away from the scene so only your HUD elements appear in the last pass.


25 Comments


Recommended Comments

Quote

You would probably want to move the UI camera far away from the scene so only your HUD elements appear in the last pass.

Would it be possible to use two different worlds for this? One "real" world and one for your HUD and then first rendering only the first world and after that rendering the HUD-world.

4 minutes ago, Ma-Shell said:

Would it be possible to use two different worlds for this? One "real" world and one for your HUD and then first rendering only the first world and after that rendering the HUD-world.

This would probably not be practical, because the rendering thread is so disconnected from the game logic thread.


You mean because, unlike the previous version, the user does not directly call Render() anymore, since it runs on a different thread and thus the user cannot correctly orchestrate the rendering? This should not be a problem, since in CreateCamera you can specify the world.

auto realCamera = CreateCamera(realWorld);
auto HUDCamera = CreateCamera(HUDWorld);
realCamera->SetOrder(1);
HUDCamera->SetOrder(2);
realCamera->SetClearMode(CLEAR_DEPTH | CLEAR_COLOR);
HUDCamera->SetClearMode(CLEAR_DEPTH);

3 minutes ago, Ma-Shell said:

You mean because, unlike the previous version, the user does not directly call Render() anymore, since it runs on a different thread and thus the user cannot correctly orchestrate the rendering? This should not be a problem, since in CreateCamera you can specify the world.


auto realCamera = CreateCamera(realWorld);
auto HUDCamera = CreateCamera(HUDWorld);
realCamera->SetOrder(1);
HUDCamera->SetOrder(2);
realCamera->SetClearMode(CLEAR_DEPTH | CLEAR_COLOR);
HUDCamera->SetClearMode(CLEAR_DEPTH);


There is a World::Render() command which basically says "tell the rendering thread to start using this world for rendering". So rendering two different worlds in one pass would be sort of difficult to manage.


Ah, I see... So the rendering always uses the last world that called World::Render(), and uses all cameras from that world in the specified order?

Would it be possible to implement the same ordering as with cameras for worlds then? Like the following

realWorld->SetOrder(1);
HUDWorld->SetOrder(2);

where it would first render all cameras from realWorld and after that all cameras from HUDWorld.

This would probably mean, it is more intuitive to have the render-call on the context instead of the world, since all worlds would be rendered.


By the way, is there any way to disable certain cameras, so they get skipped? Like setting a negative order or something like this. What happens if you set two cameras to the same order?


What might work better is to have a layer / tag system that lets different cameras have a filter to render certain types of objects.

To skip a camera, you can either hide it or set the projection mode to zero.
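A layer/tag filter like this is commonly implemented as a bitmask test; here is a hypothetical standalone sketch (none of these names are actual Leadwerks API):

```cpp
#include <cstdint>

// Hypothetical entity carrying a layer bitmask.
struct FilterEntity
{
    uint32_t layers = 1u;
};

// A camera draws an entity only when the camera's filter mask and the
// entity's layer mask share at least one bit.
inline bool CameraSees(uint32_t cameramask, const FilterEntity& e)
{
    return (cameramask & e.layers) != 0u;
}
```

Using a bitmask rather than a single layer id lets one entity remain visible to several cameras at once while being excluded from others.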


1 hour ago, Josh said:

layer / tag system that lets different cameras have a filter to render certain types of objects

Isn't that basically what a world is? Each camera renders only the objects that are in its world, so the world is basically a filter for the camera.

1 hour ago, Ma-Shell said:


Isn't that basically what a world is? Each camera renders only the objects that are in its world, so the world is basically a filter for the camera

In an abstract sense, yes, but there’s a thousand more details than that. The strictness of both Vulkan and the multithreaded architecture mean I can’t design things that are “fast and loose” anymore.

11 hours ago, Ma-Shell said:

Isn't that basically what a world is? Each camera renders only the objects that are in its world, so the world is basically a filter for the camera

I'm going to try adding a command like this:

void World::AddRenderPass(shared_ptr<World> world, const int order)

The order value can be -1 (background), 1 (foreground), or 0 (mix with current world). No guarantee yet but I will try and see how well it works.


Okay, it looks like that will probably not work. The data management is just too complicated. I think a filter value will probably work best, because that can be handled easily in the culling routine.


I see two possible issues with filters:
1. I understand a filter as having some sort of "layer-id" on the objects, with a camera only rendering objects that have a given layer-id. Suppose you have two objects, A and B, and you want to first render both A and B, and then render only B from a different camera (e.g. a vampire in front of a mirror: you would want the main view to include the vampire, but when rendering the mirror you would want to render the scene from that perspective without the vampire). This would not be easily possible with a layer-id.

2. Performance: you have to run through all objects and check whether they match the filter in order to decide whether to cull them or not. If you instead just walk the world's list of entities, this does not happen.


Why not use something like this:

camWorldA->SetClearMode(ColorBit | DepthBufferBit);
camWorldB->SetClearMode(DepthBufferBit);
camWorldC->SetClearMode(DepthBufferBit);

context->ClearWorlds(); // nothing is being rendered
context->AddWorld(worldA); // All cameras of world A are rendered
context->AddWorld(worldB); // First all cameras of world A are rendered, then all of world B
context->AddWorld(worldC); // First all cameras of world A are rendered, then all of world B, then all of world C

This would give the user maximum flexibility and only require the context to hold a list of worlds, which are rendered one after another instead of having a single active world.

For compatibility and convenience, you could additionally define

void World::Render(Context* ctx)
{
	ctx->ClearWorlds();
	ctx->AddWorld(this);
}

This way, you could still use the system as before without ever knowing of being able to render multiple worlds.


EDIT: I see, my vampire wouldn't work with this system either (unless you allow a mesh to be in multiple worlds), but I still think this is quite a flexible and easy-to-use system without much overhead.


I will have to experiment some more with this. Interestingly, this overlaps with some problems with 2D drawing. In reality there is no such thing as 2D graphics on your computer; it is always 3D graphics. I think my approach here will be to stop trying to hide the fact that 2D is really 3D even if it does not fit our conceptions of how it should be. Stay tuned...

2 hours ago, Josh said:

I think my approach here will be to stop trying to hide the fact that 2D is really 3D even if it does not fit our conceptions of how it should be

I'm curious what this would allow the end-user to do in exchange for making a straightforward system more complex.  I know in the past people wanted to play movies on textures and map cameras to textures.  And a lot of games do 3D-ish UI (font and images on textures).  I wonder if this system would let us do them with reasonable ease.

3 hours ago, gamecreator said:

I'm curious what this would allow the end-user to do in exchange for making a straightforward system more complex.  I know in the past people wanted to play movies on textures and map cameras to textures.  And a lot of games do 3D-ish UI (font and images on textures).  I wonder if this system would let us do them with reasonable ease.

I started digging into 2D drawing recently because @Lethal Raptor Games was asking about it in the private forum. I found that our model rendering system works well for 2D sprites as well, but in Vulkan you have no guarantee what order objects will be drawn in. I realized how stupid it is to do back-to-front rendering of 2D objects and that we should just use the depth buffer to handle this. I mean, we don't render 3D objects in order by distance, so why are 2D objects any different?
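The point about letting the depth buffer resolve 2D ordering can be illustrated with a toy standalone sketch (illustrative only, not engine code): with a depth test, the submission order of sprites stops mattering.

```cpp
#include <cstdint>
#include <limits>

// Toy one-pixel framebuffer with a depth test.
struct DepthPixel
{
    uint32_t color = 0;
    float depth = std::numeric_limits<float>::max();

    // A fragment only lands if it is nearer than what is already stored,
    // so back-to-front submission of 2D elements becomes unnecessary.
    void Draw(uint32_t c, float z)
    {
        if (z < depth)
        {
            depth = z;
            color = c;
        }
    }
};
```

Drawing a far sprite then a near one, or the reverse, produces the same final pixel: the nearest fragment wins either way.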

I think the 2D rendering will take place with a separate framebuffer, and then the results will be drawn on top of the 3D view. I think DOOM 2016 did this, for the same reasons. See the section here on "User Interface": http://www.adriancourreges.com/blog/2016/09/09/doom-2016-graphics-study/

Naturally this means that 3D-in-2D elements are very simple to add, but it also means you only have a Z-position for ordering. The rendering speed of this will be unbelievably fast. 100,000 unique sprites each with a different image would be absolutely no problem.


I thought perhaps 2D rendering would require an orthographic camera to be created and rendered on top of the 3D camera, but that would invalidate the depth buffer contents, and we want to hang onto that for post-processing effects. Unless we insert a post-processing step in between camera passes like this:

  1. Render perspective camera.
  2. Draw post-processing effects.
  3. Clear depth buffer and render 2D elements.

Ok, it's just a Vulkan limitation that makes things more complex.  So, to draw the layers of a clock in the proper order, for example, we'd have to assign proper Z coordinates to the back, the hour hand, minute hand and second hand?  Is that how the engine would know what goes on top of what?
[Image: clock with layered back, hour hand, minute hand, and second hand]


You would probably have three sprites with a material with alpha masking enabled, and position them along the Z axis the way you would want them to appear. Imagine if you were doing it in 3D. Which you are.

Rotation of sprites is absolutely no problem with this approach, along with many other options.

You can also use polygon meshes very easily for 2D rendering. For example, the clock hands could be a model, perhaps loaded from an SVG file.

  • Like 1

Okay, I have rendering with multiple worlds working now. The command is like this:

void World::Combine(shared_ptr<World> world)

All the cameras in the added world will be used in rendering, using the contents of the world they belong to. There is no ordering of the worlds, instead the cameras within the world are drawn with the same rules as a single world with multiple cameras:

  • By default, cameras are rendered in the order they are created.
  • A camera order setting can be used to override this and become the primary sorting method. (If two cameras have the same order value, then the creation order is used to sort them.)
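The two rules above amount to a stable sort on the order value, with creation sequence as the tiebreaker. A standalone sketch (the struct is hypothetical, standing in for a camera's sort keys, not the engine's internals):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical record standing in for a camera's sort keys.
struct CameraOrder
{
    int order = 0; // value from SetOrder()
    int id = 0;    // creation sequence
};

// Stable sort on the order value: cameras sharing an order value keep
// their creation sequence, matching the rules described above.
void SortCameras(std::vector<CameraOrder>& cams)
{
    std::stable_sort(cams.begin(), cams.end(),
        [](const CameraOrder& a, const CameraOrder& b)
        {
            return a.order < b.order;
        });
}
```

This assumes the list starts out in creation order, which std::stable_sort then preserves among equal order values.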

So you can do something like this:

//Create main world
auto mainworld = CreateWorld();
auto maincamera = CreateCamera(mainworld);

//Create world for HUD rendering
auto foreground = CreateWorld();
auto fgcam = CreateCamera(foreground);
fgcam->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);
fgcam->SetClearMode(CLEAR_DEPTH);

auto healthbar = CreateSprite(foreground);

mainworld->Combine(foreground);

//Draw both worlds. Ortho HUD camera will be drawn on top since it was created last.
mainworld->Render(framebuffer);

That means that drawing 2D graphics on top of 3D graphics requires a world and camera to be created for this purpose. There are no "2D" commands, really; there is just orthographic camera projection. This is also really flexible, and the same fast rendering path the normal 3D graphics use will make 2D graphics ridiculously fast.

Leadwerks 4 used a lot of render-to-texture and caching to make the GUI fast, but that will be totally unnecessary here, I think.


Yes, you would just call Combine again. It just adds the indicated world to a list. It’s one-way, so “Combine” might not be the best nomenclature.
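Going by the description above, that it "just adds the indicated world to a list," the bookkeeping might look something like this sketch; the struct and member names are invented, not the engine's actual fields:

```cpp
#include <memory>
#include <vector>

// Sketch of one-way world combination as described above.
struct CombinedWorld
{
    // Hypothetical list of additional worlds to draw cameras from.
    std::vector<std::shared_ptr<CombinedWorld>> renderworlds;

    void Combine(std::shared_ptr<CombinedWorld> world)
    {
        renderworlds.push_back(world); // one-way: the added world is unaffected
    }
};
```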


Curious: is there a reason this isn't all done under the hood to keep the commands as simple as they are now?  In other words, getting back to my previous question, what extra functionality does this offer the end-user?  What practical things will you be able to do with images and text that you can't now?

17 minutes ago, gamecreator said:

Curious: is there a reason this isn't all done under the hood to keep the commands as simple as they are now?  In other words, getting back to my previous question, what extra functionality does this offer the end-user?  What practical things will you be able to do with images and text that you can't now?

I think it mostly boils down to making 3D models in the UI possible. I could see some other things working nicely like particles. So if we need to account for this, let’s just cut the foreplay and have one method that handles everything.

There's also a fundamental shift in my approach to design. I'm not trying to make the easiest to use game engine anymore, because the easiest game to develop is a pre-built asset flip.

I am not interested in making Timmy’s first game maker so if that means Turbo Engine trades ease of use in the simple case for more speed and functionality I am okay with that. I almost feel like people don’t respect an API that isn’t difficult to use.

We'll see; there is still some time before this is finalized.


Appreciate it.  And I think you know I'm not trying to bust your balls.  I just feel like if there's sacrifices to be made, it should be for a good cause, which it sounds like it will be.  And I also appreciate that building a good foundation now, even if the rewards aren't immediately seen, is still a good thing.  Was mostly just curious about what Turbo might be shaping into.

16 hours ago, gamecreator said:

Appreciate it.  And I think you know I'm not trying to bust your balls.  I just feel like if there's sacrifices to be made, it should be for a good cause, which it sounds like it will be.  And I also appreciate that building a good foundation now, even if the rewards aren't immediately seen, is still a good thing.  Was mostly just curious about what Turbo might be shaping into.

Definitely, this is the best time to discuss everything.

I think this is the right way to go, even though it makes simple drawing harder. But which is harder, learning one system that does everything you want, or learning two systems that have some overlapping capabilities but work differently?

I am working on text now. It works by creating a textured model for the text you want to draw. For an FPS counter, for example, I recommend creating models for each number you display, and storing them in a C++ map or Lua table, like this:

void UpdateFPSDisplay(int fps)
{
	if (currentFPSDisplayModel != nullptr) currentFPSDisplayModel->Hide();
	if (textcache[fps] == nullptr) textcache[fps] = CreateText(world, font, String(fps), 12);
	currentFPSDisplayModel = textcache[fps];
	currentFPSDisplayModel->Show();
}

This is more complicated than just calling DrawText(fps) but it is far more powerful. Text rendering with this system will be instantaneous, whereas it can be quite slow in Leadwerks 4, which performs one draw call for each character.
