Everything posted by klepto2
-
The Leadwerks 2 editor was designed to run scripts directly. While this came in handy for things like add-ons, it was also dangerous: scripts with errors could easily break the editor. In LE3 and later this was left out, and in Ultra Engine I would assume it will be possible again (not with Lua, but with direct plugins). Even though it was cut, there are still some ways to have interactive scripts. While normal Lua scripts are not executed, those for post-effects are. Unfortunately the name of an entity is not accessible in the editor, at least if I remember correctly. But in theory you could access all items in the loaded map and generate whatever you want. Of course you need to remove the post-effect later for the release.
-
Yeah, it all works as expected, but somehow the brightness was lower in the previous builds. I had set the light color to vec3(2,2,2) for the same result that now needs vec3(0.5,0.5,0.5). Anyway: probes are working awesome:
-
Small bug or change report: the directional light is now much brighter than in the previous builds (by a factor of 10 or so). Terrain is not affected by light in any way.
-
Nevermind, got it:

```cpp
viewInfo.subresourceRange.baseMipLevel = mipLevel;
viewInfo.subresourceRange.levelCount = VK_REMAINING_MIP_LEVELS;
viewInfo.subresourceRange.baseArrayLayer = 0;
viewInfo.subresourceRange.layerCount = texture->CountFaces();
```
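For completeness, the surrounding call this plugs into would look roughly like the sketch below; device, image and format are assumed to come from the engine's VkTexture data, this is not the engine's actual code:

```cpp
VkImageViewCreateInfo viewInfo = {};
viewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.image = image;                       // VkImage of the cubemap (assumed)
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_CUBE;  // cube view so imageCube bindings work
viewInfo.format = format;                     // e.g. VK_FORMAT_R32G32B32A32_SFLOAT (assumed)
viewInfo.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
viewInfo.subresourceRange.baseMipLevel = mipLevel;
viewInfo.subresourceRange.levelCount = VK_REMAINING_MIP_LEVELS;
viewInfo.subresourceRange.baseArrayLayer = 0;
viewInfo.subresourceRange.layerCount = texture->CountFaces();

VkImageView mipView = VK_NULL_HANDLE;
vkCreateImageView(device, &viewInfo, nullptr, &mipView); // device: the engine's VkDevice (assumed)
```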
-
Maybe a stupid question, but could you share the code you use for the mipmap image view? I can't get it to work correctly.
-
What would be nice would be if you could add the imageInfo to the VkTexture; everything needed for the view creation should be included there.
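Just to illustrate the idea; the member layout here is made up and not the engine's actual VkTexture:

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Made-up illustration: if VkTexture also carried the VkImageCreateInfo it was
// built from, 3rd-party code could derive format, mip count, array layers and
// usage for its own image views without extra getters.
struct VkTexture
{
    VkImage image = VK_NULL_HANDLE;
    std::vector<VkImageView> imageViews;
    VkImageCreateInfo imageInfo = {}; // everything needed for view creation
};
```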
-
That is currently no problem as this is not yet needed
-
I just had a similar idea. This is also only needed for cubemaps, if I understand you correctly, as the other textures will behave the same way as before?
-
Maybe just an idea: to hide the VkTexture details, and maybe everything related to transferring data from your Vulkan renderer to 3rd-party code, from the common user, you could add a new namespace: UltraEngine::Transfer. In it you could have something like: VkTexture GetTransferDataForTexture(shared_ptr<Texture> texture); This could be added as pure functions or in a static class (which I would prefer). The benefit is that you don't need to answer questions from everyone who looks into the texture class, and everyone who needs access to it has a central point to look at.

I don't know how this would break the image slots? I would guess the way you are currently doing it will break the image slots much faster. Let's take a cubemap with 1024*1024 and the assumed 6 layers: while your approach currently generates 66 image views for this, the correct number would be 11, so adding those last 11 wouldn't be that hard. Also, the Vulkan spec states that there is no limit on image views, though multiple people have stated that they tested it and it breaks after around 520k, but that should be more than enough ;).
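Roughly what I have in mind with the static-class variant; all names are just placeholders, nothing here is real engine code:

```cpp
#include <memory>
// Texture and VkTexture are assumed to come from the engine headers.

namespace UltraEngine::Transfer
{
    // Sketch of the proposed central transfer point, purely illustrative.
    class VulkanTransfer
    {
    public:
        // Hand out the low-level Vulkan data for a high-level texture.
        static VkTexture GetTransferDataForTexture(std::shared_ptr<Texture> texture);
        // Further accessors (buffers, etc.) could live here as well.
    };
}
```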
-
I use the above shader to create the IBL texture, and the code below is the actual C++ code to set up the mipmap generation per roughness level:

```cpp
vector<EnvironmentSkyReflectionContants> reflShaders;
for (int layer = 0; layer < reflectionTexture->CountMipmaps(); layer++)
{
    shared_ptr<ComputeShader> refshader;
    int tx = 16;
    if (reflectionTexture->GetMipmapWidth(layer) < 16)
    {
        refshader = ComputeShader::Create("Shaders\\Environment\\env_reflection_gen_1.comp.spv");
        tx = 1;
    }
    else
    {
        refshader = ComputeShader::Create("Shaders\\Environment\\env_reflection_gen.comp.spv");
    }

    refshader->AddSampler(envSkyWithoutSun);
    refshader->AddTargetImage(reflectionTexture, layer);
    refshader->SetupPushConstant(sizeof(EnvironmentSkyReflectionContants));

    EnvironmentSkyReflectionContants data;
    // Roughness increases from 0 towards 1 with each mip level (float cast needed,
    // otherwise the integer division always yields 0).
    data.reflectiondata.x = (float)layer / (float)reflectionTexture->CountMipmaps();
    data.reflectiondata.y = layer;

    refshader->BeginDispatch(world,
        Max(1, reflectionTexture->GetMipmapWidth(layer)) / tx,
        Max(1, reflectionTexture->GetMipmapHeight(layer)) / tx,
        6, false, ComputeHook::RENDER, &data, sizeof(EnvironmentSkyReflectionContants));

    reflShaders.push_back(data);
}
```
-
The base image view is only one image view. While that is enough for samplers, for images you need the image view of each mipmap level with all faces, and you have to pass them separately.
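Roughly, what I would expect to get handed over per cubemap; this is only an assumed shape, not the engine's actual structures:

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Assumed shape only: one base view for sampling plus one view per mip level,
// each still covering all six faces, for writing via imageCube bindings.
VkImageView baseView = VK_NULL_HANDLE; // all mips, all 6 faces -> fine for samplerCube reads
std::vector<VkImageView> mipViews;     // mipViews[i]: baseMipLevel = i, layerCount = 6,
                                       // VK_IMAGE_VIEW_TYPE_CUBE -> bind as the imageCube
                                       // target when generating mip level i
```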
-
Ok, now it is working, but you broke my mipmap generation pipeline. I would need the mipmaps like before, not for every single face: I compute the mipmap for the whole cubemap at once, and you provide only the mipmaps for each face separately.
-
I still get the error, but I don't get it in this sample:

```cpp
#include "UltraEngine.h"
#include "ComponentSystem.h"

using namespace UltraEngine;

int main(int argc, const char* argv[])
{
    //Get the displays
    auto displays = GetDisplays();

    //Create a window
    auto window = CreateWindow("Ultra Engine", 0, 0, 1280 * displays[0]->scale, 720 * displays[0]->scale, displays[0], WINDOW_CENTER | WINDOW_TITLEBAR);

    //Create a world
    auto world = CreateWorld();

    //Create a framebuffer
    auto framebuffer = CreateFramebuffer(window);

    //Create a camera
    auto camera = CreateCamera(world);
    camera->SetClearColor(0.125);
    camera->SetFOV(70);
    camera->SetPosition(0, 0, -3);

    //Create a light
    auto light = CreateDirectionalLight(world);
    light->SetRotation(35, 45, 0);

    //Create a box
    auto box = CreateBox(world);

    //Entity component system
    auto actor = CreateActor(box);
    auto component = actor->AddComponent<Mover>();
    component->rotation.y = 45;

    auto envSky = CreateTexture(TEXTURE_CUBE, 1024, 1024, TEXTURE_RGBA32, {}, 6, TEXTURE_STORAGE, TEXTUREFILTER_LINEAR, 0);
    world->SetEnvironmentMap(envSky, ENVIRONMENTMAP_BACKGROUND);

    VkTexture textureData;

    //Main loop
    while (window->Closed() == false and window->KeyDown(KEY_ESCAPE) == false)
    {
        world->Update();
        world->Render(framebuffer);
        textureData = envSky->GetVKTexture();
    }
    //Add breakpoint here to see the broken VkTexture
    return 0;
}
```

But if you set a breakpoint at the second-to-last }, you can see that the VkTexture contains garbage.

[Edit:] For the include problem I needed to restore sol3 and nlohmann from a previously made backup.
-
I have done a clean rebuild and reinstalled Ultra Engine, but with no success. I now get several include errors in framework.h, e.g. #include <sol/sol.hpp> or #include "Libraries/nlohmann_json/single_include/nlohmann/json.hpp".
-
Ok, cubemaps are not working anymore either, as they are not cubemaps anymore: Descriptor in binding #0 index 0 requires an image view of type VK_IMAGE_VIEW_TYPE_CUBE but got VK_IMAGE_VIEW_TYPE_2D
-
In compute shaders you don't need separate image views for each face, so I guess this will be broken, but I need to test it. I think I still need the base image views for these.
-
GetVkTexture is broken for 3D textures: the imageViews array is empty.
-
A small step further towards volumetric clouds
-
By the way, the reflection bug was my fault. In the original reflection shader I had a flipped z coordinate, which resulted in the wrong reflection.
-
This is what I use to create the IBL map from the sky cubemap:

```glsl
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable

layout (local_size_x = 16, local_size_y = 16, local_size_z = 1) in;
layout (set = 0, binding = 0) uniform samplerCube envSkybox;
layout (set = 0, binding = 1, rgba32f) uniform imageCube envReflection;

layout (push_constant) uniform Contants
{
    vec4 data;
} constants;

const uint numSamples = 16;
#define PI 3.14159265359

float roughnessLevel = constants.data.x;

float RadicalInverse_VdC(uint bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}

// https://learnopengl.com/#!PBR/IBL/Specular-IBL
vec2 Hammersley(uint i, uint N)
{
    return vec2(float(i) / float(N), RadicalInverse_VdC(i));
}

// https://learnopengl.com/#!PBR/IBL/Specular-IBL
vec3 ImportanceSampleGGX(vec2 Xi, vec3 N, float roughness)
{
    float a = roughness * roughness;
    float phi = 2.0 * PI * Xi.x;
    float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a*a - 1.0) * Xi.y));
    float sinTheta = sqrt(1.0 - cosTheta * cosTheta);

    // from spherical coordinates to cartesian coordinates
    vec3 H;
    H.x = cos(phi) * sinTheta;
    H.y = sin(phi) * sinTheta;
    H.z = cosTheta;

    // from tangent-space vector to world-space sample vector
    vec3 up = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangent = normalize(cross(up, N));
    vec3 bitangent = cross(N, tangent);

    vec3 sampleVec = tangent * H.x + bitangent * H.y + N * H.z;
    return normalize(sampleVec);
}

vec3 cubeCoordToWorld(ivec3 cubeCoord, vec2 cubemapSize)
{
    vec2 texCoord = vec2(cubeCoord.xy) / cubemapSize;
    texCoord = texCoord * 2.0 - 1.0; // -1..1
    switch (cubeCoord.z)
    {
        case 0: return vec3(1.0, -texCoord.yx);             // posx
        case 1: return vec3(-1.0, -texCoord.y, texCoord.x); // negx
        case 2: return vec3(texCoord.x, 1.0, texCoord.y);   // posy
        case 3: return vec3(texCoord.x, -1.0, -texCoord.y); // negy
        case 4: return vec3(texCoord.x, -texCoord.y, 1.0);  // posz
        case 5: return vec3(-texCoord.xy, -1.0);            // negz
    }
    return vec3(0.0);
}

void main()
{
    ivec3 cubeCoord = ivec3(gl_GlobalInvocationID);
    vec3 viewDir = normalize(cubeCoordToWorld(cubeCoord, vec2(imageSize(envReflection))));
    vec3 N = normalize(viewDir);
    vec3 R = N;
    vec3 V = N;

    float weight = 0;
    vec3 color = vec3(0);
    for (int samples = 0; samples < numSamples; samples++)
    {
        vec2 Xi = Hammersley(samples, numSamples);
        vec3 L = ImportanceSampleGGX(Xi, N, roughnessLevel);
        float NdotL = dot(N, L);
        if (NdotL > 0)
        {
            color += texture(envSkybox, L).rgb;
            weight += NdotL;
        }
    }
    imageStore(envReflection, cubeCoord, vec4(color / weight, 1.0));
}
```

In some discussions it is mentioned that you should always use a compute shader when you write to cubemaps. But from your code I don't think this is actually true, because those discussions assume that you need to attach and render each face separately in the fragment shader, whereas you just attach each face as a different target. So even if I prefer the compute way, yours should be fast as well.
-
From my experience, you only need the mip levels (at least I just needed the layer count); the layers of the cubemap are accessed by the z component. In the sky rendering I always render to the full cubemap at once. For the clouds I am currently trying to implement temporal reprojection, which allows rendering 1/16th of the whole cubemap per frame without noticeable lag or noise. (The clouds in the above screenshot let the framerate drop from hundreds to just 20 fps, so there is a need for optimization.)
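For the 1/16th-per-frame update, a minimal sketch of what I mean on the host side; the update order, the frameNumber counter and the offset names are just an example, not my actual implementation:

```cpp
// Example only: pick 1 of the 16 texels in every 4x4 block to re-render this
// frame; the other 15 are reprojected from the previous frame's result.
// The 4x4 ordered-dither sequence spreads the updates out over 16 frames.
static const int updateOrder[16] = { 0, 8, 2, 10, 12, 4, 14, 6,
                                     3, 11, 1, 9, 15, 7, 13, 5 };

int index = updateOrder[frameNumber % 16]; // frameNumber: running frame counter (assumed)
int offsetX = index % 4;                   // x offset inside the 4x4 block
int offsetY = index / 4;                   // y offset inside the 4x4 block
// offsetX/offsetY would then go into the push constants of the cloud shader,
// which only shades texels where (texel.xy % 4) matches the offset.
```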
-