
[Advice Request] Cubemap capture and custom render pass shader groups



Hello to all the inhabitants of the forum!

I'm experimenting with a third-person camera view, focusing on usability in tight scenes with varying height levels, and trying to achieve a very specific per-pixel opacity mask filtering. I built a prototype in Ultra Engine (the other one) and managed to reach the basic level of what is needed, but many workarounds were required along the way, and I can see that it just won't work well without editing the engine's source code due to some higher-level limitations.

  1. The main pillar to achieve the intended visuals is a real-time cubemap depth capture of the environment around the player, similar to what a point light would have. I was not able to find a ready-to-use entity for capturing cubemaps in Ultra Engine. From what I see in the documentation, it will be necessary to create 6 cameras rendering to unique texture buffers before creating the main camera, to make a pre-pass cubemap capture of the scene.
    [screenshot]

    Is it right that only a depth test will be performed for a TextureBuffer created with CreateTextureBuffer(..., colorattachments = 0, depthattachment = true, ...)?
    Is there a way to simplify the cubemap capture setup by having a shared texture buffer?
    Is there a standardized way to capture cubemaps that I just didn't manage to find?
     
  2. The other requirement is to have another pre-pass with the same view frustum as the main camera, but with a different shader applied to all the entities of the scene.
    I've managed to find this article (Shader Families from 24 July 2019) with some information on ShaderFamily which is not available in the documentation (oh, how good it would have been to have it in the docs). From what I see, it is possible to have a different shader for each rendering scenario and render pass - great, the shadow pass switch that I will need later can be crossed off the list!
    [screenshot]

    I assume that a regular camera created with the CreateCamera method uses the "base" render pass shader group, and to have a different fragment shader applied one might set a different material (shader family) on duplicates of all the required entities, with a different bit passed to Entity::SetRenderLayers.
    Is there a way to create a custom render pass shader group and assign it to a camera instead of the "base" one, so that it can be used together with an Entity::SetRenderLayers bitmask without the need to duplicate entities?
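For reference on the six-camera idea, here is a minimal sketch (plain C++, no engine calls) of the orientations such a pre-pass rig would need, one camera per cubemap face, each rendered with a 90° FOV. The rotation convention assumed here (default forward +Z, positive pitch looks down, positive yaw turns toward +X) matches what the engine appears to use, but treat it as an assumption to verify:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Forward vector after applying pitch (around X) then yaw (around Y) to the
// default forward direction +Z. Assumption: positive pitch looks down and
// positive yaw turns toward +X (left-handed, Y-up) - verify against the engine.
Vec3 ForwardFromEuler(float pitchDeg, float yawDeg)
{
    const float d2r = 3.14159265358979f / 180.0f;
    const float p = pitchDeg * d2r, y = yawDeg * d2r;
    const Vec3 pitched { 0.0f, -std::sin(p), std::cos(p) };
    return Vec3 { std::sin(y) * pitched.z, pitched.y, std::cos(y) * pitched.z };
}

// Approximate comparison helper for the unit-length forward vectors.
bool Approx(const Vec3& v, float x, float y, float z)
{
    const float e = 1e-5f;
    return std::fabs(v.x - x) < e && std::fabs(v.y - y) < e && std::fabs(v.z - z) < e;
}

// One entry per cubemap face: the Euler rotation to give each capture camera.
struct FaceRotation { float pitch, yaw; };
const FaceRotation kFaces[6] = {
    {   0.0f,  90.0f },  // +X
    {   0.0f, -90.0f },  // -X
    { -90.0f,   0.0f },  // +Y (up)
    {  90.0f,   0.0f },  // -Y (down)
    {   0.0f,   0.0f },  // +Z
    {   0.0f, 180.0f },  // -Z
};
```

Each capture camera would then get its SetRotation(pitch, yaw, 0), SetFov(90), and its own render target (or face index, should the engine expose one).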

Both the point lights and the GI probe renderer in the editor render to all sides of a cubemap.

What do you plan to use this functionality for?

My job is to make tools you love, with the features you want, and performance you can't live without.


It is probably possible to set up six cameras that each draw to a different texture buffer. The texture buffer class can accept an index when a color texture is applied to it, so each texture buffer could render to a different cubemap face:
https://www.ultraengine.com/learn/TextureBuffer_SetColorAttachment?lang=cpp

This functionality is not in the SetDepthTexture command, but I could probably add it without much trouble.



4 hours ago, Vladimir Sabantsev said:

The other requirement is to have another pre-pass with the same view frustum as the main camera, but with a different shader applied to all the entities of the scene.

What does your different shader output? Sometimes it is easier just to use a post-processing effect or turn the distance fog up all the way. This is how I do the orange outlines in the editor.



1 hour ago, Josh said:

It is probably possible to set up six cameras that each draw to a different texture buffer. The texture buffer class can accept an index when a color texture is applied to it, so each texture buffer could render to a different cubemap face:
https://www.ultraengine.com/learn/TextureBuffer_SetColorAttachment?lang=cpp

This functionality is not in the SetDepthTexture command, but I could probably add it without much trouble.

Did you mean to write "It is probably possible to set up six cameras that each draw to the same texture buffer."?
But how would a camera know which of the attachments to render to? Is there an interface to pass the attachment index? I can see that Camera::SetRenderTarget receives only the TextureBuffer itself and nothing else.

 

1 hour ago, Josh said:

What does your different shader output? Sometimes it is easier just to use a post-processing effect or turn the distance fog up all the way. This is how I do the orange outlines in the editor.

The minimal requirement for this shader is to output a mask covering all the pixels that are not visible from the player's character position, filtering them out with the help of the cubemap depth buffer captured at that position. You can think of it as a mask for all the pixels that receive a shadow from a point light with a huge radius placed inside the player's character.
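To make the intent concrete, here is the core comparison in plain C++ (the shader would do the same per fragment). The names and the bias value are illustrative, not engine API:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Euclidean distance between two points (e.g. capture point and fragment position).
float Distance(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// A fragment belongs to the mask when it lies farther from the capture point
// than the distance stored in the cubemap for its direction, i.e. something
// occludes it - exactly the point-light shadow test. The bias guards against
// self-masking acne; its value here is a placeholder to tune.
bool HiddenFromPlayer(float fragmentDistance, float storedCubemapDistance, float bias = 0.05f)
{
    return fragmentDistance > storedCubemapDistance + bias;
}
```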

6 hours ago, Vladimir Sabantsev said:


[screenshot]

Additional output which might be helpful is a layer populated with entity IDs for pixels in the masked-out areas, to prevent some odd-looking cases of self-masking on the final image (you can notice that a huge character on the street has an unfortunate accident on the upper part of the head).
[screenshot]

[screenshot]

1 hour ago, Josh said:

Both the points lights and the GI probe renderer in the editor render to all sides of a cubemap.

I had an idea to use something like this; it could cover the first stage. But is there a way to grab their cubemaps in C++ nowadays? I saw an answer from a long time ago saying it's available as the first texture of the entity's material, but now I see that Entity::GetMaterial is commented out and consequently unavailable for the Light and PointLight classes.


1 hour ago, Vladimir Sabantsev said:

Did you mean to write "It is probably possible to set up six cameras that each draw to the same texture buffer."?
But how would a camera know which of the attachments to render to? Is there an interface to pass the attachment index? I can see that Camera::SetRenderTarget receives only the TextureBuffer itself and nothing else.

Yes, the color attachment function has an optional index parameter. The depth attachment function currently does not have this parameter.

I will look into adding the cubemap face index for the depth attachment method.




  • 2 weeks later...

Yes. I am going to edit that structure and document the members to make things easier.

I added a "New Shader Family" option, under the Project tab on the right-side panel.

I also found that the depth component face value is already implemented internally in the renderer (which makes sense because I use this for shadow map rendering), so now I am just exposing that functionality in the command...



21 hours ago, Josh said:

The extra parameter is available now in the beta branch.

I don't have access to a beta branch from the client; only 0.9.3, 0.9.7, stable (0.9.8), and dev are available.
Got the dev branch and created a new C++ project; here is the list of small flaws I encountered:

  • Maps/start.ultra is absent from the newly created project
  • UltraEngine::LoadPlugin is used inside auto-generated main.cpp
  • "using namespace UltraEngine;" in several engine interface header files

Hi, 

1. The blank project does not include a start map.
2. Why is this a problem?
3. I don't attempt to eliminate the automatic use of the UltraEngine namespace because typing UltraEngine:: in front of every type declared in the headers would be very tedious, and it's generally safe to assume the user wants to use it anyways.



5 hours ago, Vladimir Sabantsev said:

I don't have access to a beta branch from the client; only 0.9.3, 0.9.7, stable (0.9.8), and dev are available.

"dev" is the beta branch. Most people are using Steam, so I am just in the habit of calling it "beta".



12 hours ago, Josh said:

3. I don't attempt to eliminate the automatic use of the UltraEngine namespace because typing UltraEngine:: in front of every type declared in the headers would be very tedious, and it's generally safe to assume the user wants to use it anyways.

I'm just too used to using at least namespace acronyms to avoid name collisions and to visually separate different libraries.

An inconsistency found in the documentation:
Both TextureBuffer::SetDepthAttachment and TextureBuffer::SetColorAttachment state that the texture should be created with the TEXTURE_BUFFER flag, but in Include\Enums.h TextureFlags::TEXTURE_BUFFER is commented out. At the same time, I can see that the following works well:

std::shared_ptr<ule::Texture> const texture_color = ule::CreateTexture(
    ule::TextureType::TEXTURE_2D, 512, 512,
    ule::TextureFormat::TEXTURE_RGBA, {}, 1,
    ule::TextureFlags::TEXTURE_DEFAULT, // | (ule::TextureFlags)1,
    ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
texture_buffer->SetColorAttachment(texture_color, 0);


Tried to test the new SetDepthAttachment face argument, and I had some questions along the way:

  1. It looks like switching from TextureFormat::TEXTURE_DEPTH to TextureFormat::TEXTURE_R32F breaks the render target.
    What texture formats are supported for the depth attachment?
  2. Assigning the depth texture to a material doesn't show anything, even with a 1000x scaled sky sphere placed for contrast.
    What is the easiest way to make a debug output of the depth attachment?
  3. Either I don't understand something or the interface is not ready yet.
    I don't see any way to render to a texture buffer component with an index other than 0.
  4. Is TextureType::TEXTURE_CUBE supported as a texture buffer attachment?

Example:

#include "UltraEngine.h"
#include "ComponentSystem.h"
#include "Encryption.h"

namespace ule = UltraEngine;
namespace tpp = tableplusplus;

int main(int argc, const char* argv[])
{
  tpp::table                        cl          = ule::ParseCommandLine(argc, argv);
  std::vector<std::shared_ptr<ule::Display>> displays = ule::GetDisplays();
  std::shared_ptr<ule::Window>      window      = ule::CreateWindow("Ultra Engine", 0, 0,
                                                    int(720 * displays[0]->scale),
                                                    int(720 * displays[0]->scale),
                                                    displays[0], WINDOW_CENTER | WINDOW_TITLEBAR);
  std::shared_ptr<ule::Framebuffer> framebuffer = ule::CreateFramebuffer(window);
  std::shared_ptr<ule::World>       world       = ule::CreateWorld();

  std::shared_ptr<ule::Camera> camera = ule::CreateCamera(world, ule::PROJECTION_PERSPECTIVE);
  camera->SetClearColor(0.25f, 0.25f, 0.25f);
  camera->SetRotation(+90.f, 0.f, 0.f);
  camera->SetFov(90);
  camera->SetOrder(1);

  std::shared_ptr<ule::BoxLight> light = ule::CreateBoxLight(world);
  light->SetPosition(0.f, -2.f, 0.f);
  light->SetRange(0, 10);
  light->SetRotation(90, 0, 0);
  light->SetArea(6, 6);
  light->SetColor(2);

  std::shared_ptr<ule::Model> sky_sphere = ule::CreateSphere(world);
  sky_sphere->SetScale(-1000.f, -1000.f, -1000.f);
  sky_sphere->SetColor(0.3f, 0.4f, 0.9f);

  std::shared_ptr<ule::Model> plane = ule::CreatePlane(world);
  plane->SetPosition(+0.f, -5.f, +0.f);
  plane->SetScale(6.f, 6.f, 6.f);

  std::shared_ptr<ule::Model> box_x_pos = ule::CreateBox(world);
  std::shared_ptr<ule::Model> box_x_neg = ule::CreateBox(world);
  std::shared_ptr<ule::Model> box_y_pos = ule::CreateBox(world);
//std::shared_ptr<ule::Model> box_y_neg = ule::CreateBox(world);
  std::shared_ptr<ule::Model> box_z_pos = ule::CreateBox(world);
  std::shared_ptr<ule::Model> box_z_neg = ule::CreateBox(world);

  box_x_pos->SetPosition(+5.f, +0.f, +0.f);
  box_x_neg->SetPosition(-5.f, +0.f, +0.f);
  box_y_pos->SetPosition(+0.f, +5.f, +0.f);
//box_y_neg->SetPosition(+0.f, -2.f, +0.f);
  box_z_pos->SetPosition(+0.f, +0.f, +5.f);
  box_z_neg->SetPosition(+0.f, +0.f, -5.f);

#if 1 // Here is what is currently possible
  std::shared_ptr<ule::Camera> const camera_custom_rt_0 = ule::CreateCamera(world, ule::PROJECTION_PERSPECTIVE);
  camera_custom_rt_0->SetFov(90.f);
  camera_custom_rt_0->SetPosition(ule::Vec3{ 0.f, 0.f, 0.f });
  camera_custom_rt_0->SetClearMode(ule::ClearMode::CLEAR_DEPTH |
                                   ule::ClearMode::CLEAR_COLOR);

  std::shared_ptr<ule::TextureBuffer> const texture_buffer = ule::CreateTextureBuffer(512, 512, 2, true);

  std::shared_ptr<ule::Texture> const texture_color_0 = ule::CreateTexture(
      ule::TextureType::TEXTURE_2D, 512, 512,
      ule::TextureFormat::TEXTURE_RGBA, {}, 1,
      ule::TextureFlags::TEXTURE_DEFAULT,
      ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);

  std::shared_ptr<ule::Texture> const texture_color_1 = ule::CreateTexture(
      ule::TextureType::TEXTURE_2D, 512, 512,
      ule::TextureFormat::TEXTURE_RGBA, {}, 1,
      ule::TextureFlags::TEXTURE_DEFAULT,
      ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);

  std::shared_ptr<ule::Texture> const texture_depth_0 = ule::CreateTexture(
      ule::TextureType::TEXTURE_2D, 512, 512,
      ule::TextureFormat::TEXTURE_DEPTH, {}, 1,
      ule::TextureFlags::TEXTURE_DEFAULT,
      ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);

  std::shared_ptr<ule::Texture> const texture_depth_1 = ule::CreateTexture(
      ule::TextureType::TEXTURE_2D, 512, 512,
      ule::TextureFormat::TEXTURE_DEPTH, {}, 1,
      ule::TextureFlags::TEXTURE_DEFAULT,
      ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);

  texture_buffer->SetColorAttachment(texture_color_0, 0);
  texture_buffer->SetColorAttachment(texture_color_1, 1);

  texture_buffer->SetDepthAttachment(texture_depth_0, 0);
  texture_buffer->SetDepthAttachment(texture_depth_1, 1);

  std::shared_ptr<ule::Material> camera_custom_mat = ule::CreateMaterial();

  camera_custom_rt_0->SetRenderTarget(texture_buffer);
  camera_custom_mat->SetTexture(texture_color_0);

  // How to assign camera to a different render target component with
  // index other than 0 so that the result is stored in texture_color_1?
  // camera_custom_rt_0->SetRenderTarget(texture_buffer, 1);
  // camera_custom_mat->SetTexture(texture_color_1);

  // What is the easiest way to make a debug output of depth attachment?
  // camera_custom_mat->SetTexture(texture_depth_0);

  plane->SetMaterial(camera_custom_mat);
#else // Here is what I was expecting
  std::shared_ptr<ule::Texture> const texture_color_cube = ule::CreateTexture(
      ule::TextureType::TEXTURE_CUBE, 512, 512,
      ule::TextureFormat::TEXTURE_RGBA, {}, 6,
      ule::TextureFlags::TEXTURE_DEFAULT,
      ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);

  std::shared_ptr<ule::Texture> const texture_depth_cube = ule::CreateTexture(
      ule::TextureType::TEXTURE_CUBE, 512, 512,
      ule::TextureFormat::TEXTURE_R32F, {}, 6,
      ule::TextureFlags::TEXTURE_DEFAULT,
      ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);

  //                                            texture, buffer_comp_idx, texture_layer_idx
  texture_buffer->SetColorAttachment(texture_color_cube,               0,                 0);
  texture_buffer->SetColorAttachment(texture_color_cube,               1,                 1);
  texture_buffer->SetColorAttachment(texture_color_cube,               2,                 2);
  texture_buffer->SetColorAttachment(texture_color_cube,               3,                 3);
  texture_buffer->SetColorAttachment(texture_color_cube,               4,                 4);
  texture_buffer->SetColorAttachment(texture_color_cube,               5,                 5);

  //                                            texture, buffer_comp_idx, texture_layer_idx
  texture_buffer->SetDepthAttachment(texture_depth_cube,               0,                 0);
  texture_buffer->SetDepthAttachment(texture_depth_cube,               1,                 1);
  texture_buffer->SetDepthAttachment(texture_depth_cube,               2,                 2);
  texture_buffer->SetDepthAttachment(texture_depth_cube,               3,                 3);
  texture_buffer->SetDepthAttachment(texture_depth_cube,               4,                 4);
  texture_buffer->SetDepthAttachment(texture_depth_cube,               5,                 5);

  //                                          buffer, buffer_comp_idx
  camera_custom_rt_0->SetRenderTarget(texture_buffer,               0);
  camera_custom_rt_1->SetRenderTarget(texture_buffer,               1);
  camera_custom_rt_2->SetRenderTarget(texture_buffer,               2);
  camera_custom_rt_3->SetRenderTarget(texture_buffer,               3);
  camera_custom_rt_4->SetRenderTarget(texture_buffer,               4);
  camera_custom_rt_5->SetRenderTarget(texture_buffer,               5);
#endif

  while (!window->Closed() &&
         !window->KeyDown(KEY_ESCAPE))
  {
    camera_custom_rt_0->Turn(0, 1, 0);

    world->Update();
    world->Render(framebuffer);
  }

  return 0;
}
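As an aside on the face indices used in the expected-API branch above: the conventional cubemap layer order in Vulkan/OpenGL is +X, -X, +Y, -Y, +Z, -Z. Whether Ultra Engine's face parameter follows the same order is an assumption worth verifying, but under that convention the face a direction falls into is determined by its dominant axis:

```cpp
#include <cassert>
#include <cmath>

// Cubemap face index for a direction vector, using the Vulkan/OpenGL layer
// order: 0:+X 1:-X 2:+Y 3:-Y 4:+Z 5:-Z. Whether the engine's face parameter
// uses the same order is an assumption to verify.
int CubemapFaceFromDirection(float x, float y, float z)
{
    const float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x >= 0.0f ? 0 : 1;
    if (ay >= az)             return y >= 0.0f ? 2 : 3;
    return z >= 0.0f ? 4 : 5;
}
```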

1 hour ago, Dreikblack said:

Maybe this example of SetDepthAttachment use will help you a bit

Thanks, the following snippet is enough to make a debug output of the depth buffer as a mask:

  std::shared_ptr<ule::Model> fog_sphere = ule::CreateSphere(world);
  fog_sphere->SetScale(-100.f, -100.f, -100.f);
  fog_sphere->SetColor(1.f, 1.f, 1.f);
  fog_sphere->SetRenderLayers(2);

  std::shared_ptr<ule::TextureBuffer> const camera_depth_to_fog_tb = ule::CreateTextureBuffer(512, 512, 1, false);
  camera_depth_to_fog_tb->SetDepthAttachment(texture_depth_0);

  std::shared_ptr<ule::Camera> const camera_depth_to_fog = ule::CreateCamera(world);
  camera_depth_to_fog->SetRenderTarget(camera_depth_to_fog_tb);
  camera_depth_to_fog->SetRenderLayers(2);
  camera_depth_to_fog->SetClearMode(ule::CLEAR_COLOR);
  camera_depth_to_fog->SetClearColor(0.0f, 0.0f, 0.0f, 1.f);

  camera_custom_mat->SetTexture(camera_depth_to_fog_tb->GetColorAttachment(0));
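If the raw depth attachment ever needs to be inspected directly instead of through the fog trick, the non-linear values it holds can be mapped back to eye-space distance. This sketch assumes a conventional [0,1] depth range without reversed-Z; if the engine writes reversed depth, the mapping would differ:

```cpp
#include <cassert>
#include <cmath>

// Map a non-linear [0,1] depth-buffer value back to eye-space distance for a
// standard (non-reversed) perspective projection. Assumption: conventional
// depth; with reversed-Z the formula flips.
float LinearizeDepth(float d, float zNear, float zFar)
{
    return (zNear * zFar) / (zFar - d * (zFar - zNear));
}
```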

 


On 1/30/2025 at 11:24 PM, Josh said:

Yes. I am going to edit that structure and document the members to make things easier.

Extra note on shaders:
I found out that there is no way to pass a custom uniform variable. Render::RenderMaterial and Render::RenderShader are fully public, but there is no way to get the rendermaterial field from Material (without inheriting it and doing a dirty reinterpret_cast, which doesn't look like intended usage).


2 hours ago, Dreikblack said:

atm it's possible to do only for post effect shaders.

For outline shader it would be: camera->SetUniform(0, "Thickness",  2)

Thanks, I didn't notice the SetUniform methods inside the Camera class.
I can also see now that there is no way to obtain a "RenderTexture" instance. Lots of small inconveniences to solve before the release...

I will follow the patch notes and jump back to my experiments after some of the show-stoppers are resolved. Really excited to build a smooth pipeline with no crutches :)


Thanks, I appreciate your interest in the low-level aspects of the engine that I don't normally get to explain to people, and I will do my best to support what you are trying to do. :)

The renderer dynamically chooses shaders from a "shader family", based on the current setting. Perhaps there should be per-material uniforms that can be set?



On 2/2/2025 at 6:38 AM, Josh said:

The renderer dynamically chooses shaders from a "shader family", based on the current setting. Perhaps there should be per-material uniforms that can be set?

If uniforms are set per-material, then all the shaders in the family are expected to have the same uniform variables, which is not true in the general case (I guess?), so I'm not sure if this is the best way to do it.

After this question I wonder a bit about how Camera::SetUniform works right now, because:
1. It obviously sets the variables for the currently used post effects, but it's not clear whether the variables passed will still be applied after you call Camera::ClearPostEffects and Camera::AddPostEffect. The same goes for materials with per-instance uniforms - what happens to the previously set variables after you call Material::SetShaderFamily with a completely different set of uniform variables?
2. Post effects are a sequence of shaders, and in Camera::SetUniform you pass an index and a name. I suppose the index is the index of a shader in the post-processing sequence and the name is the uniform variable name inside it, but it's hard to tell from the interface alone. If so, the user is required to keep track of the post-processing sequence while dealing with Camera::SetUniform, which can become so complex in some scenarios that completely re-initializing the camera and its post effects would seem preferable.

The question seems easy, but it's really not :)
My guess is that it would be more straightforward to have access to some kind of Shader instance with Shader::SetUniform methods from both ShaderFamily and PostEffect, rather than setting values through Material or Camera. You load the ShaderFamily/PostEffect, find the right Shader instances, and manipulate the variables as long as needed, repeat.
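A minimal sketch of what that could look like (all names here are hypothetical, nothing is Ultra Engine API): uniforms stored on the shader object itself survive any reshuffling of the post-effect chain or shader family above them, which sidesteps both questions raised above.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical Shader interface as proposed above: each shader owns its
// uniform values by name, so callers never need an index into a post-effect
// sequence, and swapping chains cannot silently orphan previously set values.
class Shader
{
public:
    void SetUniform(const std::string& name, float value)
    {
        uniforms[name] = value;
    }

    // Returns false when the uniform was never set, instead of inventing a value.
    bool GetUniform(const std::string& name, float& out) const
    {
        auto it = uniforms.find(name);
        if (it == uniforms.end()) return false;
        out = it->second;
        return true;
    }

private:
    std::map<std::string, float> uniforms;
};
```

Usage would then mirror the outline example from earlier in the thread: find the outline shader in the loaded PostEffect and call shader->SetUniform("Thickness", 2.0f) on it directly.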

