
Vladimir Sabantsev


  1. If uniforms are set per-material, then all the shaders in the family are expected to have the same uniform variables, which is not true in the general case (I guess?), so I'm not sure this is the best way to do it. This also makes me wonder how Camera::SetUniform works right now, because:
     1. It obviously sets the variables for the currently used post effects, but it's not clear whether the passed values still apply after you call Camera::ClearPostEffects or Camera::AddPostEffect. The same goes for materials with per-instance uniforms: what happens to the previously set variables after you call Material::SetShaderFamily with a completely different set of uniform variables?
     2. Post effects are a sequence of shaders, and Camera::SetUniform takes an index and a name. I suppose the index is the index of a shader in the post-processing sequence and the name is the uniform variable name inside it, but it's hard to tell from the interface alone. If that index really is the shader index, then the user has to keep track of the post-processing sequence while dealing with Camera::SetUniform, which can get so complex in some scenarios that completely re-initializing the camera and its post effects looks like the preferable way out.
     The question seems easy, but it really isn't. My guess is that it would be more straightforward to have access to some kind of Shader instance with Shader::SetUniform methods, reachable from both ShaderFamily and PostEffect, rather than setting uniforms through Material or Camera: you load the ShaderFamily/PostEffect, find the right Shader instances, and manipulate the variables for as long as needed (see the sketch below).
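     To make the proposal concrete, here is a minimal sketch of the interface I mean. Every call in it is hypothetical: Camera::GetPostEffect, PostEffect::GetShader, ShaderFamily::GetShader and Shader::SetUniform do not exist in the current API. The point is only that uniforms would live with the Shader instance and survive resequencing of post effects:

         // Hypothetical: fetch a shader inside a loaded post effect and set the
         // uniform on it directly, instead of addressing it by sequence index.
         auto effect = camera->GetPostEffect(0);        // hypothetical accessor
         auto shader = effect->GetShader("fragment");   // hypothetical lookup
         shader->SetUniform("fog_density", 0.35f);      // stays with the shader

         // Hypothetical: the same idea for a shader family used by a material.
         auto family = material->GetShaderFamily();     // hypothetical accessor
         auto base_shader = family->GetShader("base");  // hypothetical pass name
         base_shader->SetUniform("mask_strength", 1.0f);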
  2. Thanks, I didn't notice the SetUniform methods inside the Camera class. I can also see now that there is no way to obtain a "RenderTexture" instance. Lots of small inconveniences to solve before the release... I will follow the patch notes and jump back to my experiments once some of the show-stoppers are resolved. Really excited to build a smooth pipeline with no crutches.
  3. Extra note on shaders: I found out that there is no way to pass a custom uniform variable. Render::RenderMaterial and Render::RenderShader are fully public, but there is no way to get the rendermaterial field out of a Material (without inheriting it and doing a dirty reinterpret_cast, which doesn't look like intended usage; see the sketch below).
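     For completeness, a sketch of the dirty workaround mentioned above. It assumes rendermaterial is a protected member of Material of type std::shared_ptr<RenderMaterial> (an assumption about engine internals), relies on the derived class adding no data members, and is undefined behavior by the letter of the standard, so debug use only:

         // Hypothetical: expose a non-public field of Material by casting to a
         // layout-identical derived class. MaterialSpy must add no data members
         // and no virtual functions of its own for this to have a chance of working.
         class MaterialSpy : public ule::Material
         {
         public:
             std::shared_ptr<ule::RenderMaterial> GetRenderMaterial() const
             {
                 return rendermaterial; // assumed protected field name
             }
         };

         auto spy = reinterpret_cast<MaterialSpy*>(material.get()); // UB, debug only
         auto rm = spy->GetRenderMaterial();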
  4. Thanks, the following snippet is enough to make a debug output of the depth buffer as a mask (texture_depth_0 and camera_custom_mat are from the example in the post below):

         std::shared_ptr<ule::Model> fog_sphere = ule::CreateSphere(world);
         fog_sphere->SetScale(-100.f, -100.f, -100.f);
         fog_sphere->SetColor(1.f, 1.f, 1.f);
         fog_sphere->SetRenderLayers(2);

         std::shared_ptr<ule::TextureBuffer> const camera_depth_to_fog_tb =
             ule::CreateTextureBuffer(512, 512, 1, false);
         camera_depth_to_fog_tb->SetDepthAttachment(texture_depth_0);

         std::shared_ptr<ule::Camera> const camera_depth_to_fog = ule::CreateCamera(world);
         camera_depth_to_fog->SetRenderTarget(camera_depth_to_fog_tb);
         camera_depth_to_fog->SetRenderLayers(2);
         camera_depth_to_fog->SetClearMode(ule::CLEAR_COLOR);
         camera_depth_to_fog->SetClearColor(0.0f, 0.0f, 0.0f, 1.f);

         camera_custom_mat->SetTexture(camera_depth_to_fog_tb->GetColorAttachment(0));
  5. I'm just too used to using at least namespace acronyms to avoid name collisions and to visually separate different libs.
     Inconsistency found in the documentation: both TextureBuffer::SetDepthAttachment and TextureBuffer::SetColorAttachment state that the texture should be created with the TEXTURE_BUFFER flag, but in Include\Enums.h TextureFlags::TEXTURE_BUFFER is commented out. At the same time I can see that the following works well:

         std::shared_ptr<ule::Texture> const texture_color = ule::CreateTexture(
             ule::TextureType::TEXTURE_2D, 512, 512,
             ule::TextureFormat::TEXTURE_RGBA, {}, 1,
             ule::TextureFlags::TEXTURE_DEFAULT, // | (ule::TextureFlags)1,
             ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
         texture_buffer->SetColorAttachment(texture_color, 0);

     I tried to test the new SetDepthAttachment face argument and had some questions along the way:
     • Switching from TextureFormat::TEXTURE_DEPTH to TextureFormat::TEXTURE_R32F seems to break the render target. What texture formats are supported for the depth attachment?
     • Assigning the depth texture to a material doesn't show anything, even with a 1000x-scaled sky sphere placed for contrast. What is the easiest way to make a debug output of the depth attachment? Either I don't understand something or the interface is not ready yet.
     • I don't see any way to render to a texture buffer component with an index other than 0.
     • Is TextureType::TEXTURE_CUBE supported as a texture buffer attachment?
     Example:

         #include "UltraEngine.h"
         #include "ComponentSystem.h"
         #include "Encryption.h"

         namespace ule = UltraEngine;
         namespace tpp = tableplusplus;

         int main(int argc, const char* argv[])
         {
             tpp::table cl = ule::ParseCommandLine(argc, argv);

             std::vector<std::shared_ptr<ule::Display>> displays = ule::GetDisplays();
             std::shared_ptr<ule::Window> window = ule::CreateWindow("Ultra Engine", 0, 0,
                 int(720 * displays[0]->scale), int(720 * displays[0]->scale),
                 displays[0], WINDOW_CENTER | WINDOW_TITLEBAR);
             std::shared_ptr<ule::Framebuffer> framebuffer = ule::CreateFramebuffer(window);
             std::shared_ptr<ule::World> world = ule::CreateWorld();

             std::shared_ptr<ule::Camera> camera = ule::CreateCamera(world, ule::PROJECTION_PERSPECTIVE);
             camera->SetClearColor(0.25f, 0.25f, 0.25f);
             camera->SetRotation(+90.f, 0.f, 0.f);
             camera->SetFov(90);
             camera->SetOrder(1);

             std::shared_ptr<ule::Light> light = ule::CreateBoxLight(world);
             light->SetPosition(0.f, -2.f, 0.f);
             light->SetRange(0, 10);
             light->SetRotation(90, 0, 0);
             light->SetArea(6, 6);
             light->SetColor(2);

             std::shared_ptr<ule::Model> sky_sphere = ule::CreateSphere(world);
             sky_sphere->SetScale(-1000.f, -1000.f, -1000.f);
             sky_sphere->SetColor(0.3f, 0.4f, 0.9f);

             std::shared_ptr<ule::Model> plane = ule::CreatePlane(world);
             plane->SetPosition(+0.f, -5.f, +0.f);
             plane->SetScale(6.f, 6.f, 6.f);

             std::shared_ptr<ule::Model> box_x_pos = ule::CreateBox(world);
             std::shared_ptr<ule::Model> box_x_neg = ule::CreateBox(world);
             std::shared_ptr<ule::Model> box_y_pos = ule::CreateBox(world);
             //std::shared_ptr<ule::Model> box_y_neg = ule::CreateBox(world);
             std::shared_ptr<ule::Model> box_z_pos = ule::CreateBox(world);
             std::shared_ptr<ule::Model> box_z_neg = ule::CreateBox(world);
             box_x_pos->SetPosition(+5.f, +0.f, +0.f);
             box_x_neg->SetPosition(-5.f, +0.f, +0.f);
             box_y_pos->SetPosition(+0.f, +5.f, +0.f);
             //box_y_neg->SetPosition(+0.f, -2.f, +0.f);
             box_z_pos->SetPosition(+0.f, +0.f, +5.f);
             box_z_neg->SetPosition(+0.f, +0.f, -5.f);

         #if 1 // Here is what is currently possible
             std::shared_ptr<ule::Camera> const camera_custom_rt_0 = ule::CreateCamera(world, ule::PROJECTION_PERSPECTIVE);
             camera_custom_rt_0->SetFov(90.f);
             camera_custom_rt_0->SetPosition(ule::Vec3{ 0.f, 0.f, 0.f });
             camera_custom_rt_0->SetClearMode(ule::ClearMode::CLEAR_DEPTH | ule::ClearMode::CLEAR_COLOR);

             std::shared_ptr<ule::TextureBuffer> const texture_buffer = ule::CreateTextureBuffer(512, 512, 2, true);
             std::shared_ptr<ule::Texture> const texture_color_0 = ule::CreateTexture(
                 ule::TextureType::TEXTURE_2D, 512, 512, ule::TextureFormat::TEXTURE_RGBA, {}, 1,
                 ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
             std::shared_ptr<ule::Texture> const texture_color_1 = ule::CreateTexture(
                 ule::TextureType::TEXTURE_2D, 512, 512, ule::TextureFormat::TEXTURE_RGBA, {}, 1,
                 ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
             std::shared_ptr<ule::Texture> const texture_depth_0 = ule::CreateTexture(
                 ule::TextureType::TEXTURE_2D, 512, 512, ule::TextureFormat::TEXTURE_DEPTH, {}, 1,
                 ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
             std::shared_ptr<ule::Texture> const texture_depth_1 = ule::CreateTexture(
                 ule::TextureType::TEXTURE_2D, 512, 512, ule::TextureFormat::TEXTURE_DEPTH, {}, 1,
                 ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);

             texture_buffer->SetColorAttachment(texture_color_0, 0);
             texture_buffer->SetColorAttachment(texture_color_1, 1);
             texture_buffer->SetDepthAttachment(texture_depth_0, 0);
             texture_buffer->SetDepthAttachment(texture_depth_1, 1);

             std::shared_ptr<ule::Material> camera_custom_mat = ule::CreateMaterial();
             camera_custom_rt_0->SetRenderTarget(texture_buffer);
             camera_custom_mat->SetTexture(texture_color_0);

             // How to assign the camera to a different render target component with an
             // index other than 0 so that the result is stored in texture_color_1?
             // camera_custom_rt_0->SetRenderTarget(texture_buffer, 1);
             // camera_custom_mat->SetTexture(texture_color_1);

             // What is the easiest way to make a debug output of the depth attachment?
             // camera_custom_mat->SetTexture(texture_depth_0);

             plane->SetMaterial(camera_custom_mat);
         #else // Here is something I was expecting
             std::shared_ptr<ule::Texture> const texture_color_cube = ule::CreateTexture(
                 ule::TextureType::TEXTURE_CUBE, 512, 512, ule::TextureFormat::TEXTURE_RGBA, {}, 6,
                 ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);
             std::shared_ptr<ule::Texture> const texture_depth_cube = ule::CreateTexture(
                 ule::TextureType::TEXTURE_CUBE, 512, 512, ule::TextureFormat::TEXTURE_R32F, {}, 6,
                 ule::TextureFlags::TEXTURE_DEFAULT, ule::TextureFilter::TEXTUREFILTER_LINEAR, 0);

             // texture, buffer_comp_idx, texture_layer_idx
             texture_buffer->SetColorAttachment(texture_color_cube, 0, 0);
             texture_buffer->SetColorAttachment(texture_color_cube, 1, 1);
             texture_buffer->SetColorAttachment(texture_color_cube, 2, 2);
             texture_buffer->SetColorAttachment(texture_color_cube, 3, 3);
             texture_buffer->SetColorAttachment(texture_color_cube, 4, 4);
             texture_buffer->SetColorAttachment(texture_color_cube, 5, 5);

             // texture, buffer_comp_idx, texture_layer_idx
             texture_buffer->SetDepthAttachment(texture_depth_cube, 0, 0);
             texture_buffer->SetDepthAttachment(texture_depth_cube, 1, 1);
             texture_buffer->SetDepthAttachment(texture_depth_cube, 2, 2);
             texture_buffer->SetDepthAttachment(texture_depth_cube, 3, 3);
             texture_buffer->SetDepthAttachment(texture_depth_cube, 4, 4);
             texture_buffer->SetDepthAttachment(texture_depth_cube, 5, 5);

             // buffer, buffer_comp_idx (camera_custom_rt_1..5 would be created
             // the same way as camera_custom_rt_0 above)
             camera_custom_rt_0->SetRenderTarget(texture_buffer, 0);
             camera_custom_rt_1->SetRenderTarget(texture_buffer, 1);
             camera_custom_rt_2->SetRenderTarget(texture_buffer, 2);
             camera_custom_rt_3->SetRenderTarget(texture_buffer, 3);
             camera_custom_rt_4->SetRenderTarget(texture_buffer, 4);
             camera_custom_rt_5->SetRenderTarget(texture_buffer, 5);
         #endif

             while (!window->Closed() && !window->KeyDown(KEY_ESCAPE))
             {
                 camera_custom_rt_0->Turn(0, 1, 0);
                 world->Update();
                 world->Render(framebuffer);
             }
             return 0;
         }
  6. I don't have access to the beta branch from the client, only 0.9.3, 0.9.7, stable (0.9.8), and dev. I got the dev branch, created a new C++ project, and here is the list of small flaws I encountered (the last one is illustrated below):
     • Maps/start.ultra is absent from the newly created project
     • UltraEngine::LoadPlugin is used inside the auto-generated main.cpp
     • "using namespace UltraEngine;" appears in several engine interface header files
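     To illustrate why the last item is a problem: a using-directive inside an engine header leaks every UltraEngine name into any translation unit that includes it. A minimal sketch of the kind of collision this causes (the user-side Vec3 here is made up for illustration; UltraEngine itself does define a Vec3):

         #include "UltraEngine.h" // pulls in "using namespace UltraEngine;"

         // Hypothetical user-side math type living in the global namespace.
         struct Vec3 { float x, y, z; };

         int main()
         {
             Vec3 v{}; // error: reference to 'Vec3' is ambiguous
             return 0;
         }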
  7. Did you mean to write "It is probably possible to set up six cameras that each draw to the same texture buffer."? But how would a camera know which of the attachments to render to? Is there an interface to pass the attachment index? I can see that Camera::SetRenderTarget receives only the TextureBuffer itself and nothing else.
     The minimal requirement for this shader is to output a mask covering all the pixels that are not visible from the player character's position, by filtering pixels against the cubemap depth buffer captured at that position. You can think of it as a mask of all the pixels that would receive a shadow from a point light with a huge radius placed inside the player character. An additional output that might be helpful is a layer populated with entity IDs for pixels in the masked-out areas, to prevent some weird-looking cases of self-masking on the final image (you can notice that a huge character on the street has an unfortunate accident on the upper part of the head). A sketch of the per-pixel test is given below.
     I had an idea to use something like this; it can cover the first stage. But is there a way to grab their cubemaps in C++ nowadays? I saw an old answer saying that it's available as the first texture of the entity material, but now I see that Entity::GetMaterial is commented out and is consequently unavailable for the Light and PointLight classes.
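     To pin down the math, here is a CPU-side reference of the test the fragment shader would perform. Everything in it is scaffolding, not engine API: SampleCubeDepth stands in for a cubemap depth fetch, and the stored depth is assumed to be linear distance from the capture point (hardware depth would have to be linearized first):

         #include <cmath>

         struct Vec3f { float x, y, z; };

         static float Length(Vec3f v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

         // Stub for sampling the depth cubemap captured at the player position in
         // the direction 'dir'; returns the distance to the nearest occluder. In
         // the real shader this would be a cubemap texture fetch.
         static float SampleCubeDepth(Vec3f dir)
         {
             (void)dir;
             return 1.0e30f; // stub: pretend nothing occludes anything
         }

         // Returns 1.0 where the pixel should be masked out (not visible from the
         // player), 0.0 where it stays. 'world_pos' is the pixel's world position
         // reconstructed from the main camera's depth buffer; 'player_pos' is the
         // cubemap capture point; 'bias' fights self-occlusion acne.
         float OcclusionMask(Vec3f world_pos, Vec3f player_pos, float bias = 0.05f)
         {
             Vec3f to_pixel{ world_pos.x - player_pos.x,
                             world_pos.y - player_pos.y,
                             world_pos.z - player_pos.z };
             float pixel_dist = Length(to_pixel);
             float occluder_dist = SampleCubeDepth(to_pixel);
             // Something sits between the player and this pixel: mask it out.
             return (pixel_dist > occluder_dist + bias) ? 1.0f : 0.0f;
         }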
  8. Hello to all the inhabitants of the forum! I'm experimenting with the third-person camera view in terms of usability for tight scenes with varying height levels, and I'm trying to achieve a very specific per-pixel opacity mask filtering. I had a prototype made in Ultra Engine (the other one) and managed to obtain the basic level of what is needed, but lots of workarounds were required along the road, and I can see that it just won't work well without editing the source code of the engine, due to some higher-level limitations.
     The main pillar of the intended visuals is a real-time cubemap depth capture of the environment around the player, similar to what a point light would have. I was not able to find a ready-to-use entity to capture cubemaps in Ultra Engine. From what I see in the documentation, it will be necessary to create 6 cameras rendering to unique texture buffers before creating the main camera, to make a pre-pass cubemap capture of the scene (see the sketch below). Is it right that only a depth test will be performed for a TextureBuffer created with CreateTextureBuffer(..., colorattachments = 0, depthattachment = true, ...)? Is there a way to simplify the cubemap capture setup by having a shared texture buffer? Is there a standardized way to capture cubemaps that I just didn't manage to find?
     The other requirement is another pre-pass with the same view frustum as the main camera, but with a different shader applied to all the entities of the scene. I've managed to find this article (Shader Families from 24 July 2019) with some information on ShaderFamily that is not available in the documentation (oh, how good it would have been to have it in the docs). From what I see, it is possible to have a different shader per rendering scenario and render pass. Great, the shadow-pass switch that I will need later is crossed off the list! I assume that a regular camera created with the CreateCamera method uses the "base" render pass shader group, and that to get a different fragment shader applied one might set a different material (shader family) on duplicates of all the required entities, with a different bit passed to Entity::SetRenderLayers. Is there a way to create a custom render pass shader group and assign it to a camera instead of the "base" one, used together with the Entity::SetRenderLayers bitmask, without the need to duplicate entities?
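     A minimal sketch of the six-camera capture described above, using only calls that appear elsewhere in this thread. The face rotations assume Y-up with yaw about Y and pitch about X, so they may need adjusting to the engine's conventions; player_position is a hypothetical capture point, and whether colorattachments = 0 plus depthattachment = true really gives a depth-only pass is exactly the open question:

         // One depth-only buffer and one camera per cube face, all placed at the
         // player position; assumed face order +X, -X, +Y, -Y, +Z, -Z.
         const ule::Vec3 player_position{ 0.f, 0.f, 0.f }; // hypothetical capture point
         const ule::Vec3 face_rotations[6] = {
             {   0.f,  90.f, 0.f }, // +X
             {   0.f, -90.f, 0.f }, // -X
             { -90.f,   0.f, 0.f }, // +Y
             { +90.f,   0.f, 0.f }, // -Y
             {   0.f,   0.f, 0.f }, // +Z
             {   0.f, 180.f, 0.f }  // -Z
         };

         std::shared_ptr<ule::Camera> face_cameras[6];
         std::shared_ptr<ule::TextureBuffer> face_buffers[6];

         for (int face = 0; face < 6; ++face)
         {
             // colorattachments = 0, depthattachment = true: intended as depth-only.
             face_buffers[face] = ule::CreateTextureBuffer(512, 512, 0, true);

             face_cameras[face] = ule::CreateCamera(world, ule::PROJECTION_PERSPECTIVE);
             face_cameras[face]->SetFov(90.f); // 90 degrees covers exactly one cube face
             face_cameras[face]->SetPosition(player_position);
             face_cameras[face]->SetRotation(face_rotations[face].x,
                                             face_rotations[face].y,
                                             face_rotations[face].z);
             face_cameras[face]->SetRenderTarget(face_buffers[face]);
         }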