Vladimir Sabantsev

  1. @Josh Here is an example of using multiple color attachments in a texture buffer. It almost works out of the box, which is super great! Please check the notes at the very end.

You can create a new shader family inside the editor: Project > + > New Shader Family.

The shader layout is declared inside Shaders/Base/Fragment.glsl:

```glsl
layout(location = 0) out vec4 outColor[8];
```

By default, the PBR shader outputs additional information to the texture buffer color attachments with indices in [1, 7], depending on the RenderFlags value (see outColor[attachmentindex] in Shaders/PBR/Fragment.glsl).

In our custom user-hook shader we can output something of our own choosing, e.g. the surface normal obtained from the PBR shader:

```glsl
...
void UserHook(inout Surface surface, in Material material)
{
    outColor[1] = vec4(surface.normal.xyz, 1.0);
    ...
```

C++ code:

```cpp
#include "Leadwerks.h"
using namespace Leadwerks;

int main(int argc, const char* argv[])
{
    auto displays = GetDisplays();
    auto window = CreateWindow("Ultra Engine", 0, 0, 1280, 720, displays[0], WINDOW_CENTER | WINDOW_TITLEBAR);
    auto world = CreateWorld();
    auto framebuffer = CreateFramebuffer(window);

    auto cam1 = CreateCamera(world);
    cam1->SetClearColor(0.125);
    cam1->SetPosition(0, 0, -3);
    cam1->SetFov(70);

    //Create scenery
    auto light = CreateBoxLight(world);
    light->SetRange(-10, 10);
    light->SetRotation(15, 15, 0);
    light->SetColor(2);

    auto box1 = CreateBox(world);
    auto box2 = CreateBox(world);
    auto cone = CreateCone(world);
    auto sphr = CreateSphere(world);

    box1->SetColor(1, 1, 1);
    box2->SetColor(1, 1, 1);
    cone->SetColor(0, 0, 1);
    sphr->SetColor(1, 0, 0);

    box1->SetPosition( 0.00, +0.50, 0.00);
    box2->SetPosition( 0.00, -0.50, 0.00);
    cone->SetPosition(+1.25,  0.00, 0.00);
    sphr->SetPosition(-1.25,  0.00, 0.00);

    //Create render target with a texture buffer which has 2 color attachment textures
    auto texbuffer = CreateTextureBuffer(256, 256, 2, true, 0);
    auto texbase = CreateTexture(TextureType::TEXTURE_2D, 256, 256);
    auto texnorm = CreateTexture(TextureType::TEXTURE_2D, 256, 256);
    texbuffer->SetColorAttachment(texbase, 0);
    texbuffer->SetColorAttachment(texnorm, 1);

    //Create camera with the render target attached
    auto cam2 = CreateCamera(world);
    cam2->SetClearColor(1, 1, 1);
    cam2->SetRenderTarget(texbuffer);

    //Configure render layers:
    //1. cone and sphere render only to cam2
    //2. boxes render only to cam1
    cam1->SetRenderLayers(0b01);
    box1->SetRenderLayers(0b01);
    box2->SetRenderLayers(0b01);
    cam2->SetRenderLayers(0b10);
    cone->SetRenderLayers(0b10);
    sphr->SetRenderLayers(0b10);

    //Create render target material with the custom shader family which has 2 color outputs
    auto shfcust = LoadShaderFamily("Shaders/Example-TextureBuffer-SetColorAttachment.fam");
    auto mtlcust = CreateMaterial();
    mtlcust->SetShaderFamily(shfcust);

    //Create debug output material for the fragment base color
    auto mtlbase = CreateMaterial();
    mtlbase->SetTexture(texbase);

    //Create debug output material for the fragment normal
    auto mtlnorm = CreateMaterial();
    mtlnorm->SetTexture(texnorm);

    //Apply the custom material to cone and sphere
    cone->SetMaterial(mtlcust);
    sphr->SetMaterial(mtlcust);

    //Apply the debug output materials to the boxes
    box1->SetMaterial(mtlbase);
    box2->SetMaterial(mtlnorm);

    //Main loop
    while (window->Closed() == false and window->KeyDown(KEY_ESCAPE) == false)
    {
        //Orbit the texture buffer camera
        cam2->SetPosition(0, 0, 0);
        cam2->Turn(0, 1, 0);
        cam2->Move(0, 0, -3);

        world->Update();
        world->Render(framebuffer);
    }
    return 0;
}
```

Result:

NOTES:
- I had to add #include "../Base/Fragment.glsl" inside the custom Fragment.glsl to get access to outColor.
- I had to add an include guard inside Base/Fragment.glsl to avoid duplicate declarations with PBR/Fragment.glsl (see the sketch right after these notes).
- The example didn't work with texbuffer->GetColorAttachment(1) without explicitly calling texbuffer->SetColorAttachment(texnorm, 1). May be a bug, or some misunderstanding on my part.
- My initial thought was to add "layout(location = 1) out vec4 outCustom;", but that didn't work. Is it because of the overlap with the default "layout(location = 0) out vec4 outColor[8];"?
- I added an #ifndef USER_HOOK section to avoid overwriting outColor[attachmentindex] in PBR/Fragment.glsl, but I'm not sure it's actually needed.
- I notice some kind of stall on application close.
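Since GLSL shares the C preprocessor, the guard mentioned in the notes above can look like the following minimal sketch; the macro name is my own choice, not an engine convention:

```glsl
// Shaders/Base/Fragment.glsl, guarded so that being included from both
// PBR/Fragment.glsl and a custom Fragment.glsl does not redeclare outColor.
#ifndef BASE_FRAGMENT_GLSL
#define BASE_FRAGMENT_GLSL

layout(location = 0) out vec4 outColor[8];

#endif
```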
  2. I started by looking into the new shader families and inspecting the shader output layout. The default shader layout is defined in Shaders/Base/Fragment.glsl:

```glsl
layout(location = 0) out vec4 outColor[8];
```

Is it right that the values written to outColor[attachmentindex] in Shaders/PBR/Fragment.glsl can be accessed if you attach a texture array to color attachment 0 and define the appropriate RenderFlags uniform? That seems great: fewer collision opportunities with custom shader layouts!

The example code from Camera::SetRenderTarget produces an artifact: although the only moving object is cam2, it looks as if all 3 objects in the scene are being rotated and casting shadows onto the red ball. UPD: OK, it's because the cone and sphere have the same material as the white box; not a bug.
  3. Here are some useful educational videos on the topic of framebuffers for those who, like me, may understand the main concept and use cases but lack knowledge of what is going on under the hood. All three videos describe the use of framebuffers with OpenGL:

https://www.youtube.com/watch?v=m0RsLImjtgM
https://www.youtube.com/watch?v=HpUW7Z2Y42g

These definitely solved some mysteries in my mind regarding the topic. I didn't know that a shader can have multiple outputs and that an FBO can be used to store them efficiently. Not knowing that, plus the fact that the number of color attachments in Ultra is limited to 6, made me think it was somehow connected with the number of cubemap texture faces, which sat in the back of my head and kept introducing confusion. An example utilizing several color attachments can prevent this confusion; I'll see how the shader families were changed and try to make a sample with multiple color attachments.

https://www.youtube.com/watch?v=lW_iqrtJORc

This one also resolved some issues in my perception of using a cubemap texture as an attachment: there is simply no way to use one camera and one texture buffer with cubemaps, due to how the Camera::Render call works, and that's fine. The only way is to have 1 cubemap texture, 6 cameras, and 6 texture buffers (once the SetColorAttachment method gets an additional 'face' argument similar to SetDepthAttachment; see the sketch below). I'll also try to make an example for it (only for the depth attachment for now) and share it here.
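To make the 6-camera idea concrete, here is a minimal C++ sketch of what that setup could look like, assuming SetColorAttachment gains the same 'face' argument that SetDepthAttachment already has; the face orientations are also my assumption:

```cpp
// Hypothetical: one cubemap texture shared by six texture buffers and cameras.
auto cubemap = CreateTexture(TextureType::TEXTURE_CUBE, 512, 512);

// Assumed pitch/yaw per cube face: +X, -X, +Y, -Y, +Z, -Z.
const float angles[6][2] = { {0, 90}, {0, -90}, {-90, 0}, {90, 0}, {0, 0}, {0, 180} };

for (int face = 0; face < 6; ++face)
{
    auto texbuffer = CreateTextureBuffer(512, 512, 1, true);
    texbuffer->SetColorAttachment(cubemap, 0, face); // 'face' argument does not exist yet
    auto cam = CreateCamera(world);
    cam->SetFov(90);
    cam->SetRotation(angles[face][0], angles[face][1], 0);
    cam->SetRenderTarget(texbuffer);
}
```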
  4. If uniforms are set per material, then all the shaders in the family are expected to have the same uniform variables, which is not true in the general case (I guess?), so I'm not sure this is the best way to do it. After this question I also wonder how Camera::SetUniform works right now, because:

1. It obviously sets the variables for the currently used post effects, but it's not clear whether the variables you pass will still be applied after you call Camera::ClearPostEffects or Camera::AddPostEffect. The same goes for materials with per-instance uniforms: what happens to previously set variables after you call Material::SetShaderFamily with a completely different set of uniform variables?

2. Post effects are a sequence of shaders, and Camera::SetUniform takes an index and a name. I suppose the index is the index of a shader in the post-processing sequence and the name is the uniform variable name inside it, but it's hard to tell from the interface alone. If the index really is the index of a shader, then the user has to keep track of the post-processing sequence while dealing with Camera::SetUniform, which can get so complex in some scenarios that completely re-initializing the camera and post effects starts to look preferable.

The question seems easy, but it really isn't. My guess is that it would be more straightforward to have access to some kind of Shader instance with Shader::SetUniform methods from both ShaderFamily and PostEffect, rather than setting uniforms through Material or Camera: you load the ShaderFamily/PostEffect, find the right Shader instances, and manipulate the variables as long as needed; see the sketch below.
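A minimal sketch of the kind of interface I mean; every call here (ShaderFamily::GetShader, Shader::SetUniform, the pass name and variant index) is hypothetical, not existing Ultra Engine API:

```cpp
// Hypothetical interface sketch, not real engine API.
auto family = LoadShaderFamily("Shaders/Custom.fam");

// Look up one concrete shader inside the family and set its uniform directly,
// so the value lives with the shader rather than with a material or camera.
auto shader = family->GetShader("base", 0); // render pass name + variant index (assumed)
shader->SetUniform("fogColor", Vec4(0.5f, 0.6f, 0.7f, 1.0f));
```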
  5. Thanks, I didn't notice the SetUniform methods inside the Camera class. I can also see now that there is no way to obtain a "RenderTexture" instance. Lots of small inconveniences to solve before the release... I will follow the patch notes and jump back to my experiments after some of the show-stoppers are resolved. Really excited to build a smooth pipeline with no workarounds.
  6. Extra note on shaders: I found out that there is no way to pass a custom uniform variable. Render::RenderMaterial and Render::RenderShader are fully public, but there is no way to get the rendermaterial field from a Material (without inheriting from it and doing a dirty reinterpret_cast, which doesn't look like intended usage).
  7. Thanks, the following snippet is enough to make a debug output of the depth buffer as a mask:

```cpp
// Inverted white sphere on render layer 2 serves as the mask background.
std::shared_ptr<ule::Model> fog_sphere = ule::CreateSphere(world);
fog_sphere->SetScale(-100.f, -100.f, -100.f);
fog_sphere->SetColor(1.f, 1.f, 1.f);
fog_sphere->SetRenderLayers(2);

// Texture buffer that reuses the previously captured depth texture.
std::shared_ptr<ule::TextureBuffer> const camera_depth_to_fog_tb = ule::CreateTextureBuffer(512, 512, 1, false);
camera_depth_to_fog_tb->SetDepthAttachment(texture_depth_0);

std::shared_ptr<ule::Camera> const camera_depth_to_fog = ule::CreateCamera(world);
camera_depth_to_fog->SetRenderTarget(camera_depth_to_fog_tb);
camera_depth_to_fog->SetRenderLayers(2);
camera_depth_to_fog->SetClearMode(ule::CLEAR_COLOR);
camera_depth_to_fog->SetClearColor(0.0f, 0.0f, 0.0f, 1.f);

camera_custom_mat->SetTexture(camera_depth_to_fog_tb->GetColorAttachment(0));
```
  8. I'm just too used to using at least namespace acronyms to avoid name clashes and to visually separate different libs.

An inconsistency found in the documentation: both TextureBuffer::SetDepthAttachment and TextureBuffer::SetColorAttachment state that the texture should be created with the TEXTURE_BUFFER flag, but in Include\Enums.h TextureFlags::TEXTURE_BUFFER is commented out. At the same time, I can see that the following works well:

```cpp
std::shared_ptr<ule::Texture> const texture_color = ule::CreateTexture(
    ule::TextureType::TEXTURE_2D, 512, 512,
    ule::TextureFormat::TEXTURE_RGBA, {}, 1,
    ule::TextureFlags::TEXTURE_DEFAULT, // | (ule::TextureFlags)1,
    ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
texture_buffer->SetColorAttachment(texture_color, 0);
```

I tried to test the new SetDepthAttachment face argument and had some questions along the way:
- Switching from TextureFormat::TEXTURE_DEPTH to TextureFormat::TEXTURE_R32F seems to break the render target. What texture formats are supported for the depth attachment?
- Assigning the depth texture to a material doesn't show anything, even with a 1000x-scaled sky sphere placed for contrast. What is the easiest way to make a debug output of the depth attachment?
- Either I don't understand something or the interface is not ready yet: I don't see any way to render to a texture buffer component with an index other than 0.
- Is TextureType::TEXTURE_CUBE supported as a texture buffer attachment?

Example:

```cpp
#include "UltraEngine.h"
#include "ComponentSystem.h"
#include "Encryption.h"

namespace ule = UltraEngine;
namespace tpp = tableplusplus;

int main(int argc, const char* argv[])
{
    tpp::table cl = ule::ParseCommandLine(argc, argv);

    std::vector<std::shared_ptr<ule::Display>> displays = ule::GetDisplays();
    std::shared_ptr<ule::Window> window = ule::CreateWindow("Ultra Engine", 0, 0,
        int(720 * displays[0]->scale), int(720 * displays[0]->scale),
        displays[0], WINDOW_CENTER | WINDOW_TITLEBAR);
    std::shared_ptr<ule::Framebuffer> framebuffer = ule::CreateFramebuffer(window);
    std::shared_ptr<ule::World> world = ule::CreateWorld();

    std::shared_ptr<ule::Camera> camera = ule::CreateCamera(world, ule::PROJECTION_PERSPECTIVE);
    camera->SetClearColor(0.25f, 0.25f, 0.25f);
    camera->SetRotation(+90.f, 0.f, 0.f);
    camera->SetFov(90);
    camera->SetOrder(1);

    std::shared_ptr<ule::Light> light = ule::CreateBoxLight(world);
    light->SetPosition(0.f, -2.f, 0.f);
    light->SetRange(0, 10);
    light->SetRotation(90, 0, 0);
    light->SetArea(6, 6);
    light->SetColor(2);

    std::shared_ptr<ule::Model> sky_sphere = ule::CreateSphere(world);
    sky_sphere->SetScale(-1000.f, -1000.f, -1000.f);
    sky_sphere->SetColor(0.3f, 0.4f, 0.9f);

    std::shared_ptr<ule::Model> plane = ule::CreatePlane(world);
    plane->SetPosition(+0.f, -5.f, +0.f);
    plane->SetScale(6.f, 6.f, 6.f);

    std::shared_ptr<ule::Model> box_x_pos = ule::CreateBox(world);
    std::shared_ptr<ule::Model> box_x_neg = ule::CreateBox(world);
    std::shared_ptr<ule::Model> box_y_pos = ule::CreateBox(world);
    //std::shared_ptr<ule::Model> box_y_neg = ule::CreateBox(world);
    std::shared_ptr<ule::Model> box_z_pos = ule::CreateBox(world);
    std::shared_ptr<ule::Model> box_z_neg = ule::CreateBox(world);

    box_x_pos->SetPosition(+5.f, +0.f, +0.f);
    box_x_neg->SetPosition(-5.f, +0.f, +0.f);
    box_y_pos->SetPosition(+0.f, +5.f, +0.f);
    //box_y_neg->SetPosition(+0.f, -2.f, +0.f);
    box_z_pos->SetPosition(+0.f, +0.f, +5.f);
    box_z_neg->SetPosition(+0.f, +0.f, -5.f);

#if 1 // Here is what is currently possible
    std::shared_ptr<ule::Camera> const camera_custom_rt_0 = ule::CreateCamera(world, ule::PROJECTION_PERSPECTIVE);
    camera_custom_rt_0->SetFov(90.f);
    camera_custom_rt_0->SetPosition(ule::Vec3{ 0.f, 0.f, 0.f });
    camera_custom_rt_0->SetClearMode(ule::ClearMode::CLEAR_DEPTH | ule::ClearMode::CLEAR_COLOR);

    std::shared_ptr<ule::TextureBuffer> const texture_buffer = ule::CreateTextureBuffer(512, 512, 2, true);

    std::shared_ptr<ule::Texture> const texture_color_0 = ule::CreateTexture(
        ule::TextureType::TEXTURE_2D, 512, 512, ule::TextureFormat::TEXTURE_RGBA, {}, 1,
        ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
    std::shared_ptr<ule::Texture> const texture_color_1 = ule::CreateTexture(
        ule::TextureType::TEXTURE_2D, 512, 512, ule::TextureFormat::TEXTURE_RGBA, {}, 1,
        ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
    std::shared_ptr<ule::Texture> const texture_depth_0 = ule::CreateTexture(
        ule::TextureType::TEXTURE_2D, 512, 512, ule::TextureFormat::TEXTURE_DEPTH, {}, 1,
        ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
    std::shared_ptr<ule::Texture> const texture_depth_1 = ule::CreateTexture(
        ule::TextureType::TEXTURE_2D, 512, 512, ule::TextureFormat::TEXTURE_DEPTH, {}, 1,
        ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);

    texture_buffer->SetColorAttachment(texture_color_0, 0);
    texture_buffer->SetColorAttachment(texture_color_1, 1);
    texture_buffer->SetDepthAttachment(texture_depth_0, 0);
    texture_buffer->SetDepthAttachment(texture_depth_1, 1);

    std::shared_ptr<ule::Material> camera_custom_mat = ule::CreateMaterial();
    camera_custom_rt_0->SetRenderTarget(texture_buffer);
    camera_custom_mat->SetTexture(texture_color_0);

    // How to assign the camera to a render target component with an
    // index other than 0, so that the result is stored in texture_color_1?
    // camera_custom_rt_0->SetRenderTarget(texture_buffer, 1);
    // camera_custom_mat->SetTexture(texture_color_1);

    // What is the easiest way to make a debug output of the depth attachment?
    // camera_custom_mat->SetTexture(texture_depth_0);

    plane->SetMaterial(camera_custom_mat);
#else // Here is something I was expecting
    std::shared_ptr<ule::Texture> const texture_color_cube = ule::CreateTexture(
        ule::TextureType::TEXTURE_CUBE, 512, 512, ule::TextureFormat::TEXTURE_RGBA, {}, 6,
        ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
    std::shared_ptr<ule::Texture> const texture_depth_cube = ule::CreateTexture(
        ule::TextureType::TEXTURE_CUBE, 512, 512, ule::TextureFormat::TEXTURE_R32F, {}, 6,
        ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);

    // texture, buffer_comp_idx, texture_layer_idx
    texture_buffer->SetColorAttachment(texture_color_cube, 0, 0);
    texture_buffer->SetColorAttachment(texture_color_cube, 1, 1);
    texture_buffer->SetColorAttachment(texture_color_cube, 2, 2);
    texture_buffer->SetColorAttachment(texture_color_cube, 3, 3);
    texture_buffer->SetColorAttachment(texture_color_cube, 4, 4);
    texture_buffer->SetColorAttachment(texture_color_cube, 5, 5);

    // texture, buffer_comp_idx, texture_layer_idx
    texture_buffer->SetDepthAttachment(texture_depth_cube, 0, 0);
    texture_buffer->SetDepthAttachment(texture_depth_cube, 1, 1);
    texture_buffer->SetDepthAttachment(texture_depth_cube, 2, 2);
    texture_buffer->SetDepthAttachment(texture_depth_cube, 3, 3);
    texture_buffer->SetDepthAttachment(texture_depth_cube, 4, 4);
    texture_buffer->SetDepthAttachment(texture_depth_cube, 5, 5);

    // buffer, buffer_comp_idx (cameras 1..5 would be created like camera_custom_rt_0)
    camera_custom_rt_0->SetRenderTarget(texture_buffer, 0);
    camera_custom_rt_1->SetRenderTarget(texture_buffer, 1);
    camera_custom_rt_2->SetRenderTarget(texture_buffer, 2);
    camera_custom_rt_3->SetRenderTarget(texture_buffer, 3);
    camera_custom_rt_4->SetRenderTarget(texture_buffer, 4);
    camera_custom_rt_5->SetRenderTarget(texture_buffer, 5);
#endif

    while (!window->Closed() && !window->KeyDown(KEY_ESCAPE))
    {
        camera_custom_rt_0->Turn(0, 1, 0);
        world->Update();
        world->Render(framebuffer);
    }
    return 0;
}
```
  9. I don't have access to the beta branch from the client, only 0.9.3, 0.9.7, stable (0.9.8), and dev. I got the dev branch, created a new C++ project, and here is the list of small flaws I encountered:
- Maps/start.ultra is absent from the newly created project
- UltraEngine::LoadPlugin is used inside the auto-generated main.cpp
- "using namespace UltraEngine;" appears in several engine interface header files
  10. Did you mean to write "It is probably possible to set up six cameras that each draw to the same texture buffer."? But how would a camera know which of the attachments to render to? Is there an interface to pass the attachment index? I can see that Camera::SetRenderTarget receives only the TextureBuffer itself and nothing else.

The minimal requirement for this shader is to output a mask covering all the pixels that are not visible from the player character's position, by filtering pixels against a cubemap depth buffer captured at the player character's position; see the sketch below. You can think of it as a mask of all the pixels that would receive a shadow from a point light with a huge radius placed inside the player character. An additional output that might be helpful is a layer populated with entity IDs for pixels in the masked-out areas, to prevent some weird-looking cases of self-masking in the final image (you can notice that a huge character on the street has an unfortunate accident on the upper part of the head).

I had an idea to use something like this; it could cover the first stage. But is there a way to grab point light cubemaps in C++ nowadays? I saw an answer from way back saying it's available as the first texture of the entity's material, but now I see that Entity::GetMaterial is commented out and consequently unavailable for the Light and PointLight classes.
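For concreteness, the per-pixel test I have in mind would look roughly like the following fragment shader sketch; all names here (depthCube, playerPos, worldPos) are my own assumptions, not engine shader library code:

```glsl
// Sketch: mask pixels that are not visible from the player's position.
uniform samplerCube depthCube; // linear distance from the player, captured in a pre-pass
uniform vec3 playerPos;

float VisibilityMask(vec3 worldPos)
{
    vec3 dir = worldPos - playerPos;
    float pixelDist = length(dir);
    float occluderDist = texture(depthCube, normalize(dir)).r;
    // 1.0 where something sits between the player and this pixel, 0.0 otherwise.
    return pixelDist > occluderDist + 0.01 ? 1.0 : 0.0;
}
```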
  11. Hello to all the inhabitants of the forum! I'm experimenting with a third-person camera view, in terms of usability for tight scenes with varying height levels, and trying to achieve a very specific per-pixel opacity mask filtering. I had a prototype made in Ultra Engine (the other one) and managed to obtain the basic level of what is needed, but lots of workarounds were required along the way, and I can see that it just won't work well without editing the engine's source code, due to some higher-level limitations.

The main pillar of the intended visuals is a real-time cubemap depth capture of the environment around the player, similar to what a point light would have. I was not able to find a ready-to-use entity for capturing cubemaps in Ultra Engine. From what I see in the documentation, it will be necessary to create 6 cameras rendering to unique texture buffers before creating the main camera, to make a pre-pass cubemap capture of the scene (see the sketch after this post). Is it right that only a depth test will be performed for a TextureBuffer created with CreateTextureBuffer(..., colorattachments = 0, depthattachment = true, ...)? Is there a way to simplify the cubemap capture setup by having a shared texture buffer? Is there a standardized way to capture cubemaps that I just didn't manage to find?

The other requirement is another pre-pass with the same view frustum as the main camera, but with a different shader applied to all the entities of the scene. I've managed to find this article (Shader Families, from 24 July 2019) with some information on ShaderFamily that is not available in the documentation (oh, how good it would have been to have it in the docs). From what I see, it is possible to have a different shader per rendering scenario and rendering pass. Great, the shadow-pass switch that I will need later is crossed off the list! I assume that a regular camera created with the CreateCamera method uses the "base" render pass shader group, and that to apply a different fragment shader one would set a different material (shader family) on duplicates of all the required entities, with a different bit passed to Entity::SetRenderLayers. Is there a way to create a custom render pass shader group and assign it to a camera instead of the "base" one, to use together with an Entity::SetRenderLayers bitmask, without the need to duplicate entities?
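A minimal sketch of the 6-camera pre-pass described above, using only calls that already appear in this thread; the face rotations and the player_position variable are my assumptions:

```cpp
// Six depth-only captures around the player, one per cube face direction.
std::shared_ptr<ule::TextureBuffer> face_buffers[6];
std::shared_ptr<ule::Camera> face_cameras[6];

// Assumed pitch/yaw per face: +X, -X, +Y, -Y, +Z, -Z.
const float face_angles[6][2] = { {0, 90}, {0, -90}, {-90, 0}, {90, 0}, {0, 0}, {0, 180} };

for (int i = 0; i < 6; ++i)
{
    // colorattachments = 0, depthattachment = true: depth-only pre-pass.
    face_buffers[i] = ule::CreateTextureBuffer(512, 512, 0, true);
    face_cameras[i] = ule::CreateCamera(world);
    face_cameras[i]->SetFov(90);
    face_cameras[i]->SetPosition(player_position); // assumed player position variable
    face_cameras[i]->SetRotation(face_angles[i][0], face_angles[i][1], 0);
    face_cameras[i]->SetRenderTarget(face_buffers[i]);
}
```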