There are two aspects of glTF files that make loading slower than it could be. First, the vertex data is not stored in the exact same layout and format as our vertex structure. I found the performance difference from this was pretty small, even for large models, so I was willing to let it go. Tangents can take a bit longer to build, but those are usually included in the model if they are needed.
The second issue is the triangle tree structure used for raycasting (the pick commands).
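To illustrate the first point, here is a rough sketch of the kind of conversion a glTF loader ends up doing when attributes are stored as separate arrays; the attribute arrays and the Vertex fields shown here are assumptions for illustration, not the engine's actual loader code:

std::vector<Vertex> vertices(count);
for (uint32_t i = 0; i < count; ++i)
{
    //glTF keeps positions, normals, and texcoords in separate accessors,
    //so each attribute gets copied into the interleaved engine layout
    vertices[i].position = positions[i];
    vertices[i].normal = normals[i];
    vertices[i].texcoords[0] = texcoords[i];
}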
A big update is now available for beta subscribers!
Multipass Rendering
You can use several cameras to increase the depth range or mix 2D and 3D graphics.
2D Graphics
As discussed earlier, 2D graphics are now supported using persistent 2D objects.
Depth Sorting
Multiple layers of transparency will now render in the correct order, with lighting in each layer. Note that I used an empty script called "null.lua" to prevent the glass surfaces here from being collapsed at load time.
Previously, we saw how the new renderer can mix multiple cameras and even multiple worlds in a single render to combine 3D and 2D graphics. While implementing Z-sorting for multiple layers of transparency, I found that Vulkan does in fact respect rasterization order; that is, objects are drawn in the same order you record their draw calls into a command buffer.
Furthermore, individual primitives (polygons) are also rendered in the order they are stored in the index buffer.
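Because submission order is respected, depth sorting for transparency can be as simple as ordering the draw calls before they are recorded. Here is a minimal sketch, with a hypothetical Drawable structure standing in for whatever the renderer actually stores per object:

#include <algorithm>
#include <vector>

struct Drawable
{
    float distance; //distance from the camera, updated each frame
};

void SortTransparentDrawables(std::vector<Drawable>& drawables)
{
    //farthest objects first: since draw calls execute in the order they are
    //recorded into the command buffer, each transparent layer blends over
    //the layers behind it
    std::sort(drawables.begin(), drawables.end(),
        [](const Drawable& a, const Drawable& b) { return a.distance > b.distance; });
}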
Previously I described how multiple cameras can be combined in the new renderer to create an unlimited depth buffer. That discussion led into multi-world rendering and 2D drawing. Surprisingly, there is a lot of overlap in these features, and it makes sense to solve all of it at once.
Old 2D rendering systems are designed around the idea of storing a hierarchy of state changes. The renderer would crawl through the hierarchy and perform commands as it went along, rendering all 2D elements.
Current generation graphics hardware only supports up to a 32-bit floating point depth buffer, and that isn't adequate for large-scale rendering because there isn't enough precision to make objects appear in the correct order and prevent z-fighting.
After trying out a few different approaches, I found that the best way to support large-scale rendering is to let the user create several cameras. The first camera should have a range of 0.1-1000 meters; the second would use the same near-to-far ratio for the next, more distant slice of the scene, and so on, with the results composited into a single image.
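Here is a minimal sketch of the multi-camera setup. CreateCamera, SetRange, and SetClearMode are assumed command names, and the sketch assumes cameras render in the order they are created; it only shows the idea, not confirmed engine API:

//the far range renders first, covering everything beyond the near camera
auto farCamera = CreateCamera(world);
farCamera->SetRange(1000.0f, 10000000.0f);

//then the near range draws on top; clearing only the depth buffer keeps the
//distant image while letting nearby geometry composite over it
auto nearCamera = CreateCamera(world);
nearCamera->SetRange(0.1f, 1000.0f);
nearCamera->SetClearMode(CLEAR_DEPTH);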
Still a lot of things left to do. Now that I have very large-scale rendering working, people want to fill it up with very big terrains. A special system will be required to handle this, which adds another layer to the terrain system. I also want to resume work on the voxel GI system, as I feel its results are a much better trade-off than the performance penalty of ray tracing. There are a few odds and ends like AI navigation and cascaded shadow maps to finish up.
I am planning to have the engine mor
A new build is available on the beta branch on Steam.
Updated to Visual Studio 2019.
Updated to latest version of OpenVR and Steamworks SDK.
Fixed object tracking with seated VR mode. Note that the way seated VR works now is a little different: you need to use the VR orientation instead of just positioning the camera (see below).
Added VR:SetRotation() so you can rotate the world around in VR (see the sketch below these notes).
The VRPlayer script has rotation added to the left controller.
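Here is a small sketch of how the new rotation command might be used for snap turning; the 45-degree step and the exact C++ form of VR::SetRotation are assumptions for illustration:

static float vrYaw = 0.0f;

void SnapTurn(const float direction) //-1 to turn left, +1 to turn right
{
    //rotate the VR space itself instead of repositioning the camera
    vrYaw += direction * 45.0f;
    VR::SetRotation(vrYaw);
}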
If you were a user of BlitzMax in the past, you will love these convenience commands in Turbo Engine:
int Notify(const std::wstring& title, const std::wstring& message, shared_ptr<Window> window = nullptr, const int icon = 0)
int Confirm(const std::wstring& title, const std::wstring& message, shared_ptr<Window> window = nullptr, const int icon = 0)
int Proceed(const std::wstring& title, const std::wstring& message, shared_ptr<Window> window = nullptr, const int icon = 0)
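Usage looks like this; only the signatures above come from the engine, the strings are just examples:

Notify(L"Update", L"A new build is available on the beta branch.");

if (Confirm(L"Quit", L"Are you sure you want to exit?"))
{
    //shut down and exit
}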
A huge update is available for Turbo Engine Beta.
Hardware tessellation adds geometric detail to your models and smooths out sharp corners.
The new terrain system is live, with fast performance, displacement, and support for up to 255 material layers.
Plugins are now working, with sample code for loading MD3 models and VTF textures.
Shader families eliminate the need to specify a lot of different shaders in a material file.
Support for multiple monitors.
New commands in Turbo Engine will add better support for multiple monitors. The new Display class lets you iterate through all your monitors:
for (int n = 0; n < CountDisplays(); ++n)
{
auto display = GetDisplay(n);
Print(display->GetPosition()); //monitor XY coordinates
Print(display->GetSize()); //monitor size
Print(display->GetScale()); //DPI scaling
}
The CreateWindow() function now takes a parameter for the monitor to create the window on / relative to.
auto display = GetDisplay(0);
auto window = CreateWindow(L"My Game", 0, 0, 1280, 720, display); //the exact parameter order shown here is illustrative
Texture loading plugins let us move some big libraries like FreeImage into separate optional plugins, and they also let you write plugins to load new texture and image formats.
To demonstrate the feature, I have added a working VTF plugin to the GMF2 SDK. This plugin will load Valve texture files used in Source Engine games like Half-Life 2 and Portal. Here are the results, showing a texture loaded directly from VTF format and also displayed in Nem’s nifty VTFEdit tool.
I wanted to work on something a bit easier before going back into voxel ray tracing, which is another difficult problem. "Something easier" was terrain, and it ended up consuming the entire month of August, but I think you will agree it was worthwhile.
In Leadwerks Game Engine, I used clipmaps to pre-render the terrain around the camera to a series of cascading textures. You can read about the implementation here:
This worked very well with the hardware we had available at the time.
Multithreading is very useful for processes that can be split into many parallel parts, like image and video processing. I wanted to speed up normal updates for the new terrain system, so I added a new thread creation function that accepts any function as input. That means I can use std::bind with it, the same way I have been using it to easily send instructions between threads:
shared_ptr<Thread> CreateThread(std::function<void()> instruction);
The terrain normal update can then be performed on a separate thread.
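Here is a sketch of how the new function can be used with std::bind. Terrain::UpdateNormals, the terrain variable, the region coordinates, and the Wait() call are hypothetical names used only to show the pattern:

//kick off the normal update on its own thread
auto thread = CreateThread(std::bind(&Terrain::UpdateNormals, terrain, x0, y0, x1, y1));

//...do other work on the main thread while it runs...

//block until the normals are finished before uploading them
thread->Wait();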
The GMF2 data format gives us fine control over the data so we can get fast load times. Vertex and index data are stored in the exact same format the GPU uses, so everything is read straight into video memory. There are some other optimizations I have wanted to make for a long time, and our use of big CAD models makes this the perfect time to implement them.
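Here is a rough sketch of what that buys us, with hypothetical variable names. Because the stored layout matches the GPU layout, the data can go straight from the file into a mapped buffer with no per-vertex repacking step:

//read the vertex block directly into GPU-visible memory, no conversion
fread(mappedVertexBuffer, sizeof(Vertex), vertexCount, file);

//index data works the same way
fread(mappedIndexBuffer, sizeof(uint32_t), indexCount, file);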
Precomputed BSP Trees
For raycast (pick) operations, a BSP structure needs to be constructed from the mesh data. In Leadwerks Engine this structure is built when the model is loaded, which adds to load times, so GMF2 can store it precomputed in the file.
My work with the MD3 format and its funny short vertex positions made me think about vertex array sizes. (Interestingly, the Quake 2 MD2 format uses a single char for vertex positions!)
Smaller vertex formats run faster, so it made sense to look into this. Here was our vertex format before, weighing in at 72 bytes:
struct Vertex
{
Vec3 position;
float displacement;
Vec3 normal;
Vec2 texcoords[2];
Vec4 tangent;
unsigned char color[4];
unsigned char boneweights[4];
unsigned char boneindices[4];
};
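Those fields add up to 72 bytes (12 + 4 + 12 + 16 + 16 + 4 + 4 + 4). For comparison, here is one possible smaller layout in the spirit of MD3's short positions; it is only an illustration of the kind of packing that helps, not the format the engine settled on:

struct PackedVertex
{
    short position[4];            //8 bytes: x, y, z, with displacement in the w slot
    short texcoords[4];           //8 bytes: both UV sets stored as 16-bit values
    unsigned int normal;          //4 bytes: 10-10-10-2 packed normal
    unsigned int tangent;         //4 bytes: 10-10-10-2 packed tangent and sign
    unsigned char color[4];       //4 bytes
    unsigned char boneweights[4]; //4 bytes
    unsigned char boneindices[4]; //4 bytes
};                                //36 bytes total, half the original size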
The GMF2 SDK has been updated with tangents and bounds calculation, object colors, and save-to-file functionality.
The GMF2 SDK is a lightweight engine-agnostic open-source toolkit for loading and saving of game models. The GMF2 format can work as a standalone file format or simply as a data format for import and export plugins. This gives us a protocol we can pull model data into and export model data from, and a format that loads large data much faster than alternative file formats.
Not a lot of people know about this, but back in 2001 Discreet (before the company was purchased by Autodesk) released gmax, a free version of 3ds max for modding games. Back then game file formats and tools were much more specialized than they are today, so each game required a "game pack" to customize the gmax interface to support that game. I think the idea was to charge the game developer money to add support for their game. Gmax supported several titles, including Quake 3 Arena and Microsoft Flight Simulator.
I wanted to get a working example of the plugin system together. One thing led to another, and...well, this is going to be a very different blog.
GMF2
The first thing I had to do was figure out a way for plugins to communicate with the main program. I considered using a structure they both share in memory, but those inevitably get out of sync when either the structure changes or there is a small difference between the compilers used for the program and the DLL. That scared me, so I went with a different approach.
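One common way to sidestep the shared-struct problem is a flat C interface, so that only plain data types ever cross the DLL boundary. The functions below are purely illustrative, not the engine's confirmed plugin API:

extern "C"
{
    //the host queries the plugin for the formats it supports
    __declspec(dllexport) const char* GetPluginExtensions();

    //the host hands the plugin raw file data and gets back a converted memory block
    __declspec(dllexport) void* LoadTexture(const void* data, unsigned int size, unsigned int* out_size);
}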
I started to implement quads for tessellation, and at that point the shader system became unmanageable. Rendering an object to a shadow map and to a color buffer are two different processes that require two different shaders. Turbo introduces an early Z-pass, which can use another shader, and if variance shadow maps are not in use this can be a different shader from the shadow shader. Rendering with tessellation requires yet another set of shaders, with one different set for each primitive type.
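Shader families solve this by grouping all of those variants behind a single object that the material can reference. Here is a sketch of the idea, with assumed member names:

struct ShaderFamily
{
    //one shader per pass
    shared_ptr<Shader> color;  //main color pass
    shared_ptr<Shader> depth;  //early Z pre-pass
    shared_ptr<Shader> shadow; //shadow map pass

    //tessellated variants of each, selected automatically at draw time
    shared_ptr<Shader> colorTess;
    shared_ptr<Shader> depthTess;
    shared_ptr<Shader> shadowTess;
};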
With tessellation now fully implemented, I was very curious to see how it would perform when applied to arbitrary models. With tessellation, vertices act like control points for a Bezier mesh that is subdivided dynamically in screen space. Could tessellation be used to add new details to any low-poly model?
Here is a low-res character model with a pointy head and obvious sharp edges all around his silhouette:
When tessellation is enabled, the sharp edges go away and the mesh becomes smooth and rounded.
Previously I talked about the technical details of hardware tessellation and what it took to make it truly useful. In this article I will discuss some of the implications of this feature and the more advanced ramifications of baking tessellation into Turbo Game Engine as a first-class feature in the engine.
Although hardware tessellation has been around for a few years, we don't see it used in games that often. There are two big problems that need to be overcome.
We need a way to prevent cracks from appearing along the edges of tessellated meshes.
Now that I have all the Vulkan knowledge I need, and most work is being done with GLSL shader code, development is moving faster. Before starting voxel ray tracing, another hard problem, I decided to work on some *relatively* easier things for a few days. I want tessellation to be an everyday feature in the new engine, so I decided to work out a useful implementation of it.
While there are a ton of examples out there showing how to split a triangle up into smaller triangles, useful discussion of how to make tessellation genuinely practical is much harder to find.
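For reference, here is the standard Vulkan plumbing involved (general Vulkan usage, not the engine's own code): the pipeline has to consume patches instead of triangles, and the tessellation state declares how many control points make up one patch.

//input assembly emits patches rather than plain triangles
VkPipelineInputAssemblyStateCreateInfo inputAssembly = {};
inputAssembly.sType = VK_STRUCTURE_TYPE_PIPELINE_INPUT_ASSEMBLY_STATE_CREATE_INFO;
inputAssembly.topology = VK_PRIMITIVE_TOPOLOGY_PATCH_LIST;

//three control points per patch for triangles, four for quads
VkPipelineTessellationStateCreateInfo tessellationState = {};
tessellationState.sType = VK_STRUCTURE_TYPE_PIPELINE_TESSELLATION_STATE_CREATE_INFO;
tessellationState.patchControlPoints = 3;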
Shadow maps are now supported in the new Vulkan renderer, for point, spot, and box lights!
Our new forward renderer eliminates all the problems that deferred renderers have with transparency, so shadows and lighting work great with transparent surfaces. Transparent objects even receive lighting from their back face automatically!
There is some shadow acne, which I am not going to leave alone right now, because I want to try some ideas to eliminate it completely, so you never have to deal with it at all.
After about four days of trying to get render-to-texture working in Vulkan, I have everything working except...it doesn't work. No errors, no clue what is wrong, but the renderer is not even clearing the depth attachment, which is why the texture read shown here is flat red.
There's not much else to say right now. I will keep trying to find the magic combination of cryptic obscure settings it takes to make Vulkan do what I want.
This is very hard stuff, but once I have it working
Now that we have lights working in our clustered forward Vulkan renderer (the same great technique the latest DOOM games are using), I am starting to implement shadow maps. The first issue that came up was managing render-to-texture when the texture might still be in use rendering the previous frame. At first I thought multiple shadow maps would be needed per light, like a double-buffering system, but that would double the number of shadow textures and video memory. Instead, I created a simple object pool that recycles shadow textures once the GPU has finished with the frame that used them.
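Here is a minimal sketch of that recycling idea, with hypothetical names: a released shadow texture is tagged with the frame that last used it, and it is only handed out again once the GPU has finished that frame.

#include <cstdint>
#include <map>
#include <memory>
#include <vector>

class Texture;

class TexturePool
{
    std::multimap<uint64_t, std::shared_ptr<Texture>> released; //frame index -> texture
    std::vector<std::shared_ptr<Texture>> available;
public:
    //return a texture to the pool, remembering which frame used it last
    void Release(std::shared_ptr<Texture> texture, const uint64_t frame)
    {
        released.insert({ frame, texture });
    }

    //call once per frame with the last frame the GPU has completed (e.g. from a fence),
    //moving any textures that are now safe to reuse back into circulation
    void Collect(const uint64_t completedFrame)
    {
        auto end = released.upper_bound(completedFrame);
        for (auto it = released.begin(); it != end; ++it) available.push_back(it->second);
        released.erase(released.begin(), end);
    }

    //get a free texture, or null if the caller needs to create a new one
    std::shared_ptr<Texture> Get()
    {
        if (available.empty()) return nullptr;
        auto texture = available.back();
        available.pop_back();
        return texture;
    }
};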