
Ultra Engine testing


Josh


This is what I use to create the IBL map from the sky cubemap:

#version 450

#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable

layout (local_size_x = 16, local_size_y = 16, local_size_z = 1) in;

layout (set = 0, binding = 0) uniform samplerCube envSkybox;
layout (set = 0, binding = 1, rgba32f) uniform imageCube envReflection;

layout (push_constant) uniform Constants
{
	vec4 data;
} constants;

const uint numSamples = 16;
#define PI 3.14159265359
float roughnessLevel = constants.data.x;

float RadicalInverse_VdC(uint bits)
{
	bits = (bits << 16u) | (bits >> 16u);
	bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
	bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
	bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
	bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
	return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}

// https://learnopengl.com/#!PBR/IBL/Specular-IBL
vec2 Hammersley(uint i, uint N)
{
	return vec2(float(i) / float(N), RadicalInverse_VdC(i));
}
// https://learnopengl.com/#!PBR/IBL/Specular-IBL
vec3 ImportanceSampleGGX(vec2 Xi, vec3 N, float roughness)
{
	float a = roughness*roughness;
	float phi = 2.0 * PI * Xi.x;
	float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a*a - 1.0) * Xi.y));
	float sinTheta = sqrt(1.0 - cosTheta*cosTheta);
	// from spherical coordinates to cartesian coordinates
	vec3 H;
	H.x = cos(phi) * sinTheta;
	H.y = sin(phi) * sinTheta;
	H.z = cosTheta;
	// from tangent-space vector to world-space sample vector
	vec3 up = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
	vec3 tangent = normalize(cross(up, N));
	vec3 bitangent = cross(N, tangent);
	vec3 sampleVec = tangent * H.x + bitangent * H.y + N * H.z;
	return normalize(sampleVec);
}

vec3 cubeCoordToWorld(ivec3 cubeCoord, vec2 cubemapSize)
{
    vec2 texCoord = vec2(cubeCoord.xy) / cubemapSize;
    texCoord = texCoord  * 2.0 - 1.0; // -1..1

    switch(cubeCoord.z)
    {
        case 0: return vec3(1.0, -texCoord.yx); // posx
        case 1: return vec3(-1.0, -texCoord.y, texCoord.x); //negx
        case 2: return vec3(texCoord.x, 1.0, texCoord.y); // posy
        case 3: return vec3(texCoord.x, -1.0, -texCoord.y); //negy
        case 4: return vec3(texCoord.x, -texCoord.y, 1.0); // posz
        case 5: return vec3(-texCoord.xy, -1.0); // negz
    }
    return vec3(0.0);
}
  
void main() 
{
	ivec3 cubeCoord = ivec3(gl_GlobalInvocationID);
	vec3 viewDir = normalize(cubeCoordToWorld(cubeCoord, vec2(imageSize(envReflection))));

	vec3 N = normalize(viewDir);
	vec3 R = N;
	vec3 V = N;

	float weight = 0;
	vec3 color = vec3(0);

	for (int samples = 0; samples < numSamples; samples++)
	{
		vec2 Xi = Hammersley(samples, numSamples);
		vec3 L = ImportanceSampleGGX(Xi, N, roughnessLevel); 

		float NdotL = dot(N, L);
		if (NdotL > 0)
		{
			color += texture(envSkybox, L).rgb;
			weight += NdotL;
		}
	}

	imageStore(envReflection,	
		cubeCoord,
		vec4(color / weight, 1.0));
}
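
For context, I dispatch it over the whole cubemap in one go, roughly like this (a simplified sketch; pipeline, layout, and descriptor set creation are omitted, and those handle names are just placeholders, not engine API):

#include <vulkan/vulkan.h>

// Simplified dispatch sketch for the shader above (placeholder handles, no sync or error handling).
// The shader's local size is 16x16x1 and gl_GlobalInvocationID.z selects the cube face,
// so we launch (faceSize/16, faceSize/16, 6) workgroups and pass the roughness level in data.x.
struct IBLPushConstants { float data[4]; };

void DispatchIBLFilter(VkCommandBuffer cmd, VkPipeline pipeline, VkPipelineLayout layout,
                       VkDescriptorSet descriptors, uint32_t faceSize, float roughness)
{
    IBLPushConstants pc = { { roughness, 0.0f, 0.0f, 0.0f } };

    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, layout, 0, 1, &descriptors, 0, nullptr);
    vkCmdPushConstants(cmd, layout, VK_SHADER_STAGE_COMPUTE_BIT, 0, sizeof(pc), &pc);

    // One invocation per texel, all six faces in a single dispatch.
    vkCmdDispatch(cmd, faceSize / 16, faceSize / 16, 6);
}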

In some discussions it is mentioned that you should always use a compute shader when you write to cubemaps. But from your code I don't think that is actually true: those discussions assume you need to attach and render each face separately in the fragment shader, whereas you just attach each face as a different render target. So even though I prefer the compute approach, yours should be fast as well.


7 minutes ago, klepto2 said:

In some discussions it is mentioned that you should always use a compute shader when you write to cubemaps. But from your code I don't think that is actually true: those discussions assume you need to attach and render each face separately in the fragment shader, whereas you just attach each face as a different render target. So even though I prefer the compute approach, yours should be fast as well.

Yeah, someone brought up procedural level generation, which would require fast rendering of environment probes as they are created, so I used a fragment shader.
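
In case it helps, attaching a single cube face as a render target basically comes down to creating a 2D image view restricted to that array layer. A rough sketch with raw Vulkan (not the engine's actual code):

#include <vulkan/vulkan.h>

// Rough sketch: create a 2D image view for one cube face so it can be bound
// as a framebuffer color attachment. Not the engine's actual implementation.
VkImageView CreateFaceView(VkDevice device, VkImage cubemap, VkFormat format, uint32_t face, uint32_t mip)
{
    VkImageViewCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    info.image = cubemap;
    info.viewType = VK_IMAGE_VIEW_TYPE_2D;       // one face, viewed as a plain 2D image
    info.format = format;
    info.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    info.subresourceRange.baseMipLevel = mip;
    info.subresourceRange.levelCount = 1;
    info.subresourceRange.baseArrayLayer = face; // cube faces are array layers 0..5
    info.subresourceRange.layerCount = 1;

    VkImageView view = VK_NULL_HANDLE;
    vkCreateImageView(device, &info, nullptr, &view);
    return view;
}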



Probes have an adjustable fade distance for each of their six edges. You can use this to make them transition smoothly between probes, or from interior to exterior spaces. In this shot I have the opposite of the settings I should be using applied on the X and Z axes. You can see a sharp border at the entrance and gradual fading in on the other edges. Since the probe reflection is blocking the sky reflection, a dark area forms in the probe volume:

[Screenshot: sharp border at the probe entrance, with a dark area forming inside the probe volume]

And here it is with three sharp edges that align closely to the walls, and a gradual fade-in at the entrance:

[Screenshot: sharp edges aligned to the walls, with a gradual fade-in at the entrance]
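
Conceptually the fade is just a blend weight that ramps from 1 inside the volume to 0 across each edge's fade distance. Purely as an illustration (not the actual engine shader; probe-local positions in [-1,1] and per-face fade distances are assumed inputs):

#include <algorithm>

// Illustrative sketch only: blend weight for a probe, given the position inside the
// probe box in normalized [-1,1] coordinates and a fade distance for each of the six
// box faces (also in normalized units). A fade distance of 0 means a sharp edge.
float ProbeFadeWeight(const float pos[3], const float fadeNeg[3], const float fadePos[3])
{
    float weight = 1.0f;
    for (int axis = 0; axis < 3; ++axis)
    {
        float distNeg = pos[axis] + 1.0f;  // distance from the -X/-Y/-Z face
        float distPos = 1.0f - pos[axis];  // distance from the +X/+Y/+Z face
        if (fadeNeg[axis] > 0.0f) weight *= std::clamp(distNeg / fadeNeg[axis], 0.0f, 1.0f);
        if (fadePos[axis] > 0.0f) weight *= std::clamp(distPos / fadePos[axis], 0.0f, 1.0f);
    }
    return weight; // 1 deep inside the volume, ramping to 0 at faded edges
}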


 

Update:

  • Environment probes are added (example).
  • CreatePointLight(), CreateSpotLight(), CreateDirectionalLight(), CreateBoxLight() replace CreateLight().
  • The first element of the VkTexture imageviews array is now an image view for the first mipmap only. All cubemap faces have separate image views. The order is face0 mip0, face0 mip1, face0 mip2, ..., face1 mip0, face1 mip1, ... (see the indexing sketch after these lists).

Still to do:

  • Support reflection ray passing through multiple probes
  • Integrate screen-space reflections to mix with probe reflections.
  • Add an acceleration structure for much faster SSR
  • Test variance shadow maps with blur in a fragment shader instead of a compute shader, to see once and for all if they are faster than conventional shadow maps.
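
For the image view ordering listed above, picking a specific face/mip element might look like this (a sketch only; the container type and mip count are assumptions, only the ordering comes from the update):

#include <vulkan/vulkan.h>
#include <vector>

// Sketch: index into the per-face, per-mip image view array described above.
// Order is face0 mip0, face0 mip1, ..., face1 mip0, face1 mip1, ...
// "mipCount" is assumed to be known by the caller; "imageviews" stands in for
// the VkTexture member of the same name.
VkImageView GetFaceMipView(const std::vector<VkImageView>& imageviews, int face, int mip, int mipCount)
{
    return imageviews[face * mipCount + mip];
}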

 


You know, looking at these examples, it would not be that hard to make a small volume texture for just the dragon, and work that into reflections. Instead of having a grid that followed the camera around, we could just use a small per-object voxel texture. So you would use probes to reflect the large static environment, and voxels could still be used to reflect dynamic objects.


Okay, the cubemap image views are all 2D image views, in order for render-to-texture to work. I figure no one will be rendering to a volume texture slice-by-slice. Do you still need the base image view for these?

Do the cubemap image views work correctly for your purposes as they are now?


Okay, I just uploaded an update. VkTexture now has a baseimageview member that I think will do what you want, and the imageviews array contains separate 2D image views for each layer and mipmap.


Maybe you need to do a clean and rebuild, and make sure there are no more updates available? It looks like it is crashing in the VkTexture destructor, but I didn't even make a destructor for this class.


I have done a clean rebuild and reinstalled Ultra Engine, but with no success. I now get several include errors in framework.h, e.g.:

 #include <sol/sol.hpp>

or

#include "Libraries/nlohmann_json/single_include/nlohmann/json.hpp"

 


I still get the error, but I don't get it in this sample:

#include "UltraEngine.h"
#include "ComponentSystem.h"

using namespace UltraEngine;

int main(int argc, const char* argv[])
{
    //Get the displays
    auto displays = GetDisplays();

    //Create a window
    auto window = CreateWindow("Ultra Engine", 0, 0, 1280 * displays[0]->scale, 720 * displays[0]->scale, displays[0], WINDOW_CENTER | WINDOW_TITLEBAR);

    //Create a world
    auto world = CreateWorld();

    //Create a framebuffer
    auto framebuffer = CreateFramebuffer(window);

    //Create a camera
    auto camera = CreateCamera(world);
    camera->SetClearColor(0.125);
    camera->SetFOV(70);
    camera->SetPosition(0, 0, -3);
    
    //Create a light
    auto light = CreateDirectionalLight(world);
    light->SetRotation(35, 45, 0);

    //Create a box
    auto box = CreateBox(world);

    //Entity component system
    auto actor = CreateActor(box);
    auto component = actor->AddComponent<Mover>();
    component->rotation.y = 45;

    //Create the sky cubemap texture
    auto envSky = CreateTexture(TEXTURE_CUBE, 1024, 1024, TEXTURE_RGBA32, {}, 6, TEXTURE_STORAGE, TEXTUREFILTER_LINEAR, 0);

    world->SetEnvironmentMap(envSky, ENVIRONMENTMAP_BACKGROUND);
    VkTexture textureData;
    //Main loop
    while (window->Closed() == false and window->KeyDown(KEY_ESCAPE) == false)
    {
        world->Update();
        world->Render(framebuffer);
        textureData = envSky->GetVKTexture();
    } //Add breakpoint here to see the broken VkTexture
    return 0;
}

But if you set a breakpoint at the second-to-last }, you can see that the VkTexture contains garbage.

[Edit:] For the include problem I needed to restore sol3 and nlohmann_json from a previously made backup.


6 minutes ago, SpiderPig said:

Sprites that show text are completely white now too... shader issue?

Probably. I'll come back to this.

#include "UltraEngine.h"

using namespace UltraEngine;

int main(int argc, const char* argv[])
{
    //Get the displays
    auto displays = GetDisplays();

    //Create a window
    auto window = CreateWindow("Ultra Engine", 0, 0, 800, 600, displays[0], WINDOW_CENTER | WINDOW_TITLEBAR);

    //Create framebuffer
    auto framebuffer = CreateFramebuffer(window);

    //Create world
    auto world = CreateWorld();

    //Create camera
    auto camera = CreateCamera(world);
    camera->SetProjectionMode(PROJECTION_ORTHOGRAPHIC);
    camera->SetRange(-1, 1);
    camera->SetClearColor(0.125);

    //Create sprite
    auto sprite = CreateSprite(world, LoadFont("Fonts/arial.ttf"), "HELLO", 36);
    
    //Main loop
    while (window->Closed() == false and window->KeyHit(KEY_ESCAPE) == false)
    {
        world->Update();
        world->Render(framebuffer, true);
    }
    return 0;
}

 


OK, now it is working, but you broke my mipmap generation pipeline. I need the mipmaps like before, not for every single face: I compute the mipmaps for the whole cubemap at once, but now you only provide the mipmaps for each face separately.
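
What my pipeline needs per mip level is a cube view that still covers all six faces of that level, roughly like this raw Vulkan sketch (names are placeholders, not engine API):

#include <vulkan/vulkan.h>

// Sketch: a cube image view for a single mip level that still contains all six faces,
// which is what a whole-cubemap mipmap compute pass would bind as its output image.
VkImageView CreateCubeMipView(VkDevice device, VkImage cubemap, VkFormat format, uint32_t mip)
{
    VkImageViewCreateInfo info = {};
    info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    info.image = cubemap;
    info.viewType = VK_IMAGE_VIEW_TYPE_CUBE;     // all six faces, one mip level
    info.format = format;
    info.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    info.subresourceRange.baseMipLevel = mip;
    info.subresourceRange.levelCount = 1;
    info.subresourceRange.baseArrayLayer = 0;
    info.subresourceRange.layerCount = 6;

    VkImageView view = VK_NULL_HANDLE;
    vkCreateImageView(device, &info, nullptr, &view);
    return view;
}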


Are you using the new baseimageview member? It should include all layers and all mipmaps.

Or do you need separate imageviews for each mipmap that each contain all faces?

For simple linear mipmap generation you can do a vkCmdBlitImage() without any shaders.
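
For example, roughly like this (a sketch only; the required per-level layout transitions and barriers are omitted):

#include <vulkan/vulkan.h>
#include <algorithm>

// Sketch: build a full mip chain for a cubemap with vkCmdBlitImage, no shaders.
// Each blit downsamples level (mip - 1) into level (mip) across all six faces.
// Layout transitions between levels are omitted for brevity; level (mip - 1) must be
// TRANSFER_SRC_OPTIMAL and level (mip) TRANSFER_DST_OPTIMAL when the blit executes.
void GenerateCubemapMipmaps(VkCommandBuffer cmd, VkImage cubemap, int32_t size, uint32_t mipCount)
{
    for (uint32_t mip = 1; mip < mipCount; ++mip)
    {
        int32_t srcSize = std::max<int32_t>(size >> (mip - 1), 1);
        int32_t dstSize = std::max<int32_t>(size >> mip, 1);

        VkImageBlit blit = {};
        blit.srcSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
        blit.srcSubresource.mipLevel = mip - 1;
        blit.srcSubresource.baseArrayLayer = 0;
        blit.srcSubresource.layerCount = 6;       // all cube faces in one blit
        blit.srcOffsets[1] = { srcSize, srcSize, 1 };
        blit.dstSubresource = blit.srcSubresource;
        blit.dstSubresource.mipLevel = mip;
        blit.dstOffsets[1] = { dstSize, dstSize, 1 };

        vkCmdBlitImage(cmd,
            cubemap, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
            cubemap, VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
            1, &blit, VK_FILTER_LINEAR);
    }
}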

