Until now, all my experiments with voxel cone step tracing placed the center of the GI data at the world origin (0,0,0). In reality, we want the GI volume to follow the camera around so we can see the effect everywhere, with more detail up close. I feel my productivity has not been very good lately, but I am not being too hard on myself because this is very difficult stuff. The two-step nature of it (rendering the voxel data and then using that data to render an effect) makes development very tricky.
Adding emission into the cascaded voxel cone step tracing global illumination and dynamic reflections system (SEO ftw) was simple enough:
There's some slight trailing but it looks okay to me. There is a bit of a "glitch" in that when the emissive surface gets near the wall, the ambient occlusion kicks in, even though the sphere is self-illuminating. This happens because the emission color is mixed with the light voxel during the rasterization step. I could fix this by storing emission in a separate channel, so ambient occlusion never darkens it.
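In sketch form, the fix might look something like this. The structure and names here are illustrative only, not the engine's actual voxel layout:

//Illustrative only: store emission in its own channel so the occlusion
//term can be applied to the lit color without dimming self-illumination.
struct Voxel
{
    float light[3];    //albedo * direct lighting, attenuated by AO later
    float emission[3]; //added after occlusion, never attenuated
};

void StoreVoxel(Voxel& voxel, const float albedo[3], const float lighting[3], const float emissive[3])
{
    for (int n = 0; n < 3; ++n)
    {
        voxel.light[n] = albedo[n] * lighting[n];
        voxel.emission[n] = emissive[n];
    }
}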
I had to spend several weeks just eliminating light leaks and other artifacts, and getting the results I wanted in a variety of scenes. The results are looking good. Everyone who tries implementing this technique has problems with light leaks, but I have fortunately been able to avoid them with careful planning:
Now that I have nice results with a single volume texture centered at the origin, it's time to add additional stages. The idea is to have a cascading series of volume textures around the camera, each stage covering a larger area at a lower density.
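A minimal sketch of how the cascade bounds might be computed, assuming each stage doubles the covered area at the same texture resolution. The function and names are mine, not final engine code:

#include <cmath>

struct Vec3 { float x, y, z; };

//Each stage doubles the covered extent while keeping the same resolution.
//The center is snapped to the stage's voxel size so voxels don't shimmer
//as the camera moves.
void GetCascadeBounds(const Vec3& camerapos, int stage, float basesize, int resolution,
                      Vec3& boundsmin, Vec3& boundsmax)
{
    float extent = basesize * float(1 << stage); //stage 0 is the smallest
    float voxelsize = extent / float(resolution);

    //Snap the center to the voxel grid to keep voxelization stable
    Vec3 center;
    center.x = floorf(camerapos.x / voxelsize) * voxelsize;
    center.y = floorf(camerapos.y / voxelsize) * voxelsize;
    center.z = floorf(camerapos.z / voxelsize) * voxelsize;

    boundsmin = { center.x - extent * 0.5f, center.y - extent * 0.5f, center.z - extent * 0.5f };
    boundsmax = { center.x + extent * 0.5f, center.y + extent * 0.5f, center.z + extent * 0.5f };
}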
Finally, finally, finally, for the first time since I started working on this feature several years ago, we have real-time global illumination with a second light bounce. Below you can see the direct light hitting the floor, bouncing up to the ceiling, and then being reflected back down onto the floor again.
Performance is still good, and I have not started fine-tuning optimization yet. I was just trying to get the effect working at all, which was quite difficult to do.
Now that I have the downsampled reflection data working, I can start casting rays. The cone step tracing is not a 100% perfect representation of physical light, but it gives a very favorable balance of quality and performance. Somehow I came up with a few formulas that eliminate light leaks and other artifacts.
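The core loop is conceptually simple. Here is a rough C++ sketch of a cone step trace, assuming a SampleVoxelVolume() function that reads a filtered color/occlusion value from the mipmapped voxel data; the real shader code differs:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };

//Assumed sampler: fetches the filtered voxel color and occlusion at a world
//position and mip level (lower mips = coarser, pre-averaged voxels).
Vec4 SampleVoxelVolume(const Vec3& position, float miplevel);

//March along the ray, widening the sample footprint with distance by reading
//coarser mip levels, accumulating front-to-back until the cone is occluded.
Vec4 ConeTrace(Vec3 origin, Vec3 dir, float coneratio, float voxelsize, float maxdistance)
{
    Vec4 result = { 0, 0, 0, 0 };
    float distance = voxelsize; //start one voxel out to avoid self-intersection
    while (distance < maxdistance && result.a < 1.0f)
    {
        float radius = distance * coneratio;                      //cone footprint
        float miplevel = log2f(fmaxf(radius / voxelsize, 1.0f));  //matching mip
        Vec3 p = { origin.x + dir.x * distance,
                   origin.y + dir.y * distance,
                   origin.z + dir.z * distance };
        Vec4 s = SampleVoxelVolume(p, miplevel);

        //Front-to-back alpha compositing
        float w = (1.0f - result.a) * s.a;
        result.r += s.r * w;
        result.g += s.g * w;
        result.b += s.b * w;
        result.a += w;

        distance += fmaxf(radius, voxelsize); //step size grows with the cone
    }
    return result;
}

Because each sample already averages many voxels, one cone can stand in for a whole bundle of rays, which is where the performance win comes from.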
Quite honestly, I did not think the results would be this good. Indoor/outdoor scenes with thin walls are very difficult to prevent light leaks in, but somehow it's working very well.
For downsampling of GI voxel data, I found that a compute shader offers the best performance. The first step was to add support for compute shaders to Ultra Engine.
I've never used these before but I was able to get them working pretty quickly. I think the user API will look something like this:
//Load compute shader
auto module = LoadShaderModule("Shaders/Compute/test.comp.spv");
auto shader = CreateShader();
shader->SetModule(module, SHADER_COMPUTE);
//Create work group
int workgroupsize = 8;
After testing and some discussion with other programmers, I decided to try performing voxelization on the GPU instead of the CPU. The downside is that memory usage is much higher than with a sparse voxel octree, but I found that sparse voxel octrees were very slow when it came to soft reflections, although the results of the sharp raycast were impressive:
You can read the details of GPU voxelization here if you wish.
Initially I thought the process would require rendering the scene once along each major axis.
I've got cone step tracing working now with the sparse voxel octree implementation. I actually found that two different routines are best, depending on whether the surface is rough or smooth. For sharp reflections, a precise voxel raytrace works best:
For rough surfaces, cone step tracing can be used. There are some issues to work out and I need to revisit the downsampling routine, but it's basically working:
Here's a video showing the sharp raycast in motion. Performance is quite good.
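Put together, the per-pixel selection between the two routines might look like this sketch. The 0.1 cutoff is a made-up value, and both trace functions are assumed:

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };

//Assumed routines corresponding to the two methods described above.
Vec4 RaycastVoxels(const Vec3& origin, const Vec3& dir);
Vec4 ConeTrace(const Vec3& origin, const Vec3& dir, float coneratio);

//Choose the routine per pixel from material roughness.
Vec4 TraceReflection(const Vec3& origin, const Vec3& dir, float roughness)
{
    if (roughness < 0.1f) return RaycastVoxels(origin, dir); //sharp mirror
    return ConeTrace(origin, dir, roughness); //rough: wider cone, softer result
}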
I've moved on to one of the final steps for voxel cone step tracing, which is downsampling the lit voxels in a way that approximates a large area of rays being cast. You can read more about the details of this technique here.
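The basic operation is an occupancy-weighted average of each 2x2x2 block of child voxels. A sketch, with GetVoxel() standing in for a read from the finer level:

struct Vec4 { float r, g, b, a; };

//Assumed accessor that reads one voxel from the finer level.
Vec4 GetVoxel(int x, int y, int z);

//Build one coarse voxel from its eight children, weighting color by
//occupancy so empty children don't wash out the result.
Vec4 DownsampleVoxel(int x, int y, int z)
{
    Vec4 sum = { 0.0f, 0.0f, 0.0f, 0.0f };
    for (int n = 0; n < 8; ++n)
    {
        Vec4 c = GetVoxel(x * 2 + (n & 1), y * 2 + ((n >> 1) & 1), z * 2 + (n >> 2));
        sum.r += c.r * c.a;
        sum.g += c.g * c.a;
        sum.b += c.b * c.a;
        sum.a += c.a;
    }
    if (sum.a > 0.0f)
    {
        sum.r /= sum.a;
        sum.g /= sum.a;
        sum.b /= sum.a;
    }
    sum.a /= 8.0f; //partial occupancy of the coarse voxel
    return sum;
}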
This artifact looks like a mirror that is sunken below the surface of some kind of frame. It was appearing because the mesh surface was inside the voxel, and neighboring voxels were being intersected. The solution was to move the ray starting point out of the voxel the surface lies in.
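In sketch form (names are mine):

struct Vec3 { float x, y, z; };

//Push the ray start one voxel along the surface normal, so the trace begins
//outside the voxel that contains the surface itself.
Vec3 OffsetRayOrigin(const Vec3& surfacepos, const Vec3& normal, float voxelsize)
{
    return { surfacepos.x + normal.x * voxelsize,
             surfacepos.y + normal.y * voxelsize,
             surfacepos.z + normal.z * voxelsize };
}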
I've now got basic specular reflections working with the sparse voxel octree system. This uses much less memory than a voxel grid or even a compressed volume texture. It also supports faster optimized ray tests, for higher quality reflections and higher resolution. Some of the images in this article were not possible to produce in my initial implementation that used volume textures.
This shot shows the reflection of just the diffuse color. Notice the red column is visible in three reflections.
While seeking a way to increase performance of octree ray traversal, I came across a lot of references to this paper:
http://wscg.zcu.cz/wscg2000/Papers_2000/X31.pdf
Funnily enough, the first page of the paper perfectly describes my first two attempted algorithms. I started with a nearest neighbor approach and then implemented a top-down recursive design:
GLSL doesn't support recursive function calls, so I had to create a function that walks up and down the octree hierarchy with a loop and an explicit stack instead of nested calls.
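The C++ equivalent of that loop structure looks something like this; the node layout and intersection test are assumptions for illustration, not the engine's actual code:

//Flat-array octree node; children index into the node pool, -1 = empty.
struct OctreeNode
{
    int children[8];
    bool leaf;
};

bool Intersects(const OctreeNode& node); //assumed ray/box test

//Iterative traversal: an explicit stack replaces recursion. A real version
//would visit children in front-to-back order to find the nearest hit first.
int TraverseOctree(const OctreeNode* nodes, int rootindex)
{
    int stack[64];
    int depth = 0;
    stack[depth++] = rootindex;
    while (depth > 0)
    {
        int index = stack[--depth]; //walk back up when a branch is exhausted
        const OctreeNode& node = nodes[index];
        if (!Intersects(node)) continue;
        if (node.leaf) return index; //hit
        for (int n = 0; n < 8; ++n)  //walk down into the children
        {
            if (node.children[n] != -1) stack[depth++] = node.children[n];
        }
    }
    return -1; //miss
}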
The VK_KHR_dynamic_rendering extension has made its way into Vulkan 1.2.203 and I have implemented this in Ultra Engine. What does it do?
Instead of creating renderpass objects ahead of time, dynamic rendering allows you to just specify the settings you need as you are filling in command buffers with rendering instructions. From the Khronos working group:
In my experience, post-processing effects are where this hurt the most. The engine has a user-defined stack of post-processing effects, so the exact renderpass configuration isn't known until runtime.
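For reference, here is roughly what the extension looks like in use. This is a minimal sketch assuming the command buffer, image view, and dimensions come from elsewhere; it is not Ultra Engine's actual rendering code:

#include <vulkan/vulkan.h>

//No VkRenderPass or VkFramebuffer objects are needed. In practice the KHR
//entry points are fetched with vkGetDeviceProcAddr.
void RecordPass(VkCommandBuffer commandbuffer, VkImageView colorview, uint32_t width, uint32_t height)
{
    VkRenderingAttachmentInfoKHR colorattachment = {};
    colorattachment.sType = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO_KHR;
    colorattachment.imageView = colorview;
    colorattachment.imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    colorattachment.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
    colorattachment.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
    colorattachment.clearValue.color = { { 0.0f, 0.0f, 0.0f, 1.0f } };

    VkRenderingInfoKHR renderinginfo = {};
    renderinginfo.sType = VK_STRUCTURE_TYPE_RENDERING_INFO_KHR;
    renderinginfo.renderArea = { { 0, 0 }, { width, height } };
    renderinginfo.layerCount = 1;
    renderinginfo.colorAttachmentCount = 1;
    renderinginfo.pColorAttachments = &colorattachment;

    vkCmdBeginRenderingKHR(commandbuffer, &renderinginfo);
    //...bind pipelines and issue draw calls here...
    vkCmdEndRenderingKHR(commandbuffer);
}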
Previously I described how I was able to save the voxel data into a sparse octree and correctly lookup the right voxel in a shader. This shot shows that each triangle is being rasterized separately, i.e. the triangle bounding box is being correctly trimmed to avoid a lot of overlapping voxels:
Calculating direct lighting using the sparse octree was very difficult, and took me several days of debugging. I'm not 100% sure what the problem was, other than it seems GLSL code is not quite as easy to debug as C++.
My initial implementation of mesh voxelization for ray tracing used this code. It was good for testing, but it had some problems:
It's slow, using an unnecessary and expensive x * y * z loop
No support for per-voxel color based on a texture lookup
There are mathematical mistakes that cause inaccuracy, and the math has to be perfect
My solution addresses these problems and only uses an x * y loop to generate the voxels. It does this by identifying the major (largest magnitude) axis of the triangle's normal and projecting the triangle along it, as in the sketch below.
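Here is a simplified C++ sketch of that idea. SetVoxel() is a stand-in for the real storage code, and a full implementation would also clip each sample to the triangle's 2D projection instead of filling the whole bounding box:

#include <algorithm>
#include <cmath>

//Stand-in for the real storage code (volume texture or octree write).
void SetVoxel(int x, int y, int z);

//Voxelize one triangle (coordinates already in voxel space) with a 2D loop
//over the two minor axes, solving the plane equation for the third coordinate.
void VoxelizeTriangle(const float a[3], const float b[3], const float c[3])
{
    //Face normal = cross(b - a, c - a)
    float u[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
    float v[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
    float n[3] = { u[1] * v[2] - u[2] * v[1],
                   u[2] * v[0] - u[0] * v[2],
                   u[0] * v[1] - u[1] * v[0] };

    //Major axis = normal component with the largest magnitude
    int k = 0;
    if (fabsf(n[1]) > fabsf(n[k])) k = 1;
    if (fabsf(n[2]) > fabsf(n[k])) k = 2;
    if (fabsf(n[k]) < 1e-6f) return; //degenerate triangle
    int i = (k + 1) % 3, j = (k + 2) % 3;

    //2D bounding box on the two minor axes
    int i0 = (int)floorf(std::min({ a[i], b[i], c[i] }));
    int i1 = (int)ceilf(std::max({ a[i], b[i], c[i] }));
    int j0 = (int)floorf(std::min({ a[j], b[j], c[j] }));
    int j1 = (int)ceilf(std::max({ a[j], b[j], c[j] }));

    float d = n[0] * a[0] + n[1] * a[1] + n[2] * a[2]; //plane constant

    for (int pi = i0; pi <= i1; ++pi)
    {
        for (int pj = j0; pj <= j1; ++pj)
        {
            //Solve n . p = d for the major-axis coordinate at the cell center.
            float pk = (d - n[i] * (pi + 0.5f) - n[j] * (pj + 0.5f)) / n[k];
            int coord[3];
            coord[i] = pi;
            coord[j] = pj;
            coord[k] = (int)floorf(pk);
            SetVoxel(coord[0], coord[1], coord[2]);
        }
    }
}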
Previously I noted that since voxel global illumination involves calculating direct lighting, it would actually be possible to do away with shadow maps altogether and use voxels for both direct and global illumination. This would eliminate the problems of image-based shadows, like shadow acne and adjusting the shadow map size. I also believe this method will turn out to be a lot faster than shadow map rendering, and you know how I like fast performance.
The sparse voxel octree node structure consumes relatively little memory per node.
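As an illustration only (this is not the engine's actual structure), a flat-array node along these lines stays quite small:

#include <cstdint>

//Hypothetical layout: eight child indices plus a packed color, stored in a
//flat node pool rather than with pointers.
struct SparseVoxelOctreeNode
{
    int32_t children[8]; //indices into the node pool, -1 = empty (32 bytes)
    uint32_t color;      //packed RGBA8 (4 bytes)
};                       //36 bytes per node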
This is an update on the progress of our voxel raytracing system. VXRT is designed to provide all the reflection information that PBR materials use. If a picture is worth a thousand words, then this counts as a 5,000-word article.
Direct lighting:
Global illumination:
Specular reflection:
Skybox component:
Final combined image:
The Ultra Engine editor is designed to be expandable and modifiable. Lua script is integrated into the editor and can be used to write editor extensions and even modify the scene or the editor itself in real-time.
We can create a scene object entirely in code and make it appear in the scene browser tree:
box = CreateBox(editor.world)
box.name = "box01"
o = CreateSceneObject(box) --make editor recognize the entity and add it to the scene browser
o:SetSelected(true)
We can even modify the editor's own interface in real-time with script.
A while back I wrote enthusiastically about Basis Universal super compression. KTX2 is a texture file format from Khronos, makers of the Vulkan and glTF specifications. Like DDS files, KTX2 can store multiple mipmaps, as well as memory-compressed texture formats like DXT5 and BC7. However, KTX2 now supports Basis compressed data as well, which makes it the all-in-one universal texture format. glTF has an official extension for KTX2 textures, so it can be combined with Draco mesh compression.
Google Draco is a library that aims to do for mesh data what MP3 and OGG did for music. It does not reduce memory usage once a mesh is loaded, but it could reduce file sizes and improve download times. Although mesh data does not tend to use much disk space, I am always interested in optimization. Furthermore, some of the NASA models I work with are very high-poly, and do take up significant disk space. Google offers a very compelling chart showing a compression ratio of about 95%:
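The encoding side of the library is straightforward to drive. A sketch of compressing an already-populated draco::Mesh, with illustrative quantization settings rather than tuned recommendations:

#include "draco/compression/encode.h"
#include "draco/mesh/mesh.h"

//Compress a mesh into a buffer of bytes suitable for writing to disk.
draco::EncoderBuffer CompressMesh(const draco::Mesh& mesh)
{
    draco::Encoder encoder;
    encoder.SetAttributeQuantization(draco::GeometryAttribute::POSITION, 14);
    encoder.SetAttributeQuantization(draco::GeometryAttribute::TEX_COORD, 12);
    encoder.SetSpeedOptions(5, 5); //lower = smaller files, slower encode/decode
    draco::EncoderBuffer buffer;
    draco::Status status = encoder.EncodeMeshToBuffer(mesh, &buffer);
    if (!status.ok()) { /* handle the error */ }
    return buffer; //buffer.data() and buffer.size() hold the compressed bytes
}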
The glTF importer took a very long time to develop, but it was much easier to write a glTF save routine. In one day I got an exporter working with support for everything except skinning and animation. To save a model in glTF format, just call Model::Save("mymodel.gltf") and it will work! Entire scenes can also be saved in glTF format. Here is a model that was loaded from Leadwerks MDL, MAT, and TEX files and saved as glTF. The textures are converted to PNG files. (Microsoft has an official extension for DDS textures in glTF, as well.)
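The whole workflow, in sketch form (the LoadModel call and file names are my assumption of the API, not verified code):

//Load a Leadwerks-era model and re-save it as glTF, converting textures to PNG
auto model = LoadModel(world, "Models/barrel.mdl");
model->Save("Models/barrel.gltf");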
In Leadwerks, required files were always a slightly awkward issue. The engine requires a BFN texture and a folder of shaders in order to display anything. One of my goals is to make the Ultra Engine editor flexible enough to work with any game. It should be able to load the folder of an existing game, even if it doesn't use Ultra Engine, and display all the models and scenes with some accuracy. Of course the Quake game directory isn't going to include a bunch of Ultra Engine shaders, so what to do?
The new editor is being designed to be flexible enough to work with any game, so it can be used for modding as well as game development with our new 3D engine. Each project has configurable settings that can be used to handle what the editor actually does when you run the game. In the case of a game like Quake, this will involve running a few executables to first compile the map you are working on into a BSP structure, then calculate lightmaps and pre-compute visibility.
You can also configure how the game itself is launched from the editor.
Many games store 3D models, textures, and other game files in some type of compressed package format. These can be anything from a simple ZIP file to a custom multi-file archive system. This has the benefit of making the install size of the game smaller, and can prevent users from accessing the raw files. Oftentimes undocumented proprietary file formats are used to optimize loading time, although with DDS and glTF this is not such a problem anymore.
Leadwerks has built-in support for encrypted ZIP archives.
At last I have been able to work the plugin system into the new editor and realize my dreams.
The editor automatically detects supported file formats and generates thumbnails for them. (Thumbnails are currently compatible with the Leadwerks system, so Leadwerks can read these thumbnail files and vice-versa.) If no support for a file format is found, the program just defaults to whatever icon or thumbnail Windows shows.
The options dialog includes a tab where you can examine each plugin that is loaded.
I've been wracking my brain trying to decide what I want to show at the upcoming conference, and decided I should get the new editor in a semi-workable state. I started laying out the interface two days ago. To my surprise, the whole process went very fast and I discovered some cool design features along the way.
With the freedom and control I have with the new user interface system, I was able to make the side panel extend all the way to the top and bottom of the window client area. This gives the side panel more usable vertical space.