Memory Management in Vulkan
Vulkan gives us explicit control over the way data is handled in system and video memory. You can map a buffer into system memory, modify it, and then unmap it (giving it back to the GPU), but a buffer that both the GPU and CPU can access is very slow to work with. Instead, you can create a staging buffer that only the CPU writes to, then use it to copy data into another buffer that only the GPU can access. Because the GPU buffer may be in use at the time you want to copy data to it, it is best to insert the copy operation into a command buffer, so it happens after the previous frame is rendered. To handle this, we have a pool of transfer buffers which are retrieved by a command buffer when needed, then released back into the pool once that command buffer is finished drawing. A fence is used to tell when the command buffer has completed its operations.
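To make that concrete, here is a minimal sketch (not the engine's actual code) of writing into a host-visible staging buffer and recording the copy into the GPU-only buffer. The StagingBuffer struct and the function names are hypothetical, and it assumes the staging memory was allocated host-visible and host-coherent.

```cpp
#include <vulkan/vulkan.h>
#include <cstring>

// Hypothetical pooled staging buffer: host-visible memory the CPU can write to,
// later used as the source of a GPU-side copy.
struct StagingBuffer {
    VkBuffer buffer = VK_NULL_HANDLE;
    VkDeviceMemory memory = VK_NULL_HANDLE;
    VkDeviceSize capacity = 0;
};

// Map the staging memory, copy the data in, and unmap. Assumes the memory was
// allocated with HOST_VISIBLE | HOST_COHERENT, so no explicit flush is needed.
bool WriteToStagingBuffer(VkDevice device, StagingBuffer& staging,
                          const void* data, VkDeviceSize size)
{
    if (size > staging.capacity) return false;
    void* mapped = nullptr;
    if (vkMapMemory(device, staging.memory, 0, size, 0, &mapped) != VK_SUCCESS)
        return false;
    memcpy(mapped, data, (size_t)size);
    vkUnmapMemory(device, staging.memory);
    return true;
}

// Record a transfer from the staging buffer into the device-local destination.
// This only records the command; it runs when the command buffer is executed.
void RecordStagingCopy(VkCommandBuffer cmd, const StagingBuffer& staging,
                       VkBuffer dst, VkDeviceSize dstOffset, VkDeviceSize size)
{
    VkBufferCopy region = {};
    region.srcOffset = 0;
    region.dstOffset = dstOffset;
    region.size = size;
    vkCmdCopyBuffer(cmd, staging.buffer, dst, 1, &region);
}
```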
One issue we came across with OpenGL in Leadwerks was data being uploaded to the GPU while it was still being accessed to render a frame. You could actually see this on some cards when playing my Asteroids3D game. There was no mechanism in OpenGL to synchronize memory, so the best you could do was put data transfers at the start of your rendering code and hope there was enough of a delay before your drawing actually started that the memory copying had completed. With the super low-overhead approach of Vulkan rendering, this problem would become much worse. To deal with this, Vulkan provides explicit synchronization in the form of pipeline barriers. When you add commands to a Vulkan command buffer, there is no guarantee of the order in which they will finish executing; a pipeline barrier creates a point where certain commands must complete before others can begin.
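As an illustration (not code from the engine), a buffer memory barrier between the transfer stage and the vertex input stage looks roughly like this. The function name is hypothetical, and the access masks assume the destination buffer is read as vertex data; other usages would need different stage and access flags.

```cpp
#include <vulkan/vulkan.h>

// Ensure the transfer write into 'buffer' completes before the vertex input
// stage reads from it in commands recorded after this barrier.
void InsertTransferToVertexBarrier(VkCommandBuffer cmd, VkBuffer buffer)
{
    VkBufferMemoryBarrier barrier = {};
    barrier.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.buffer = buffer;
    barrier.offset = 0;
    barrier.size = VK_WHOLE_SIZE;

    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_TRANSFER_BIT,      // stages that must finish first
        VK_PIPELINE_STAGE_VERTEX_INPUT_BIT,  // stages that must wait
        0,
        0, nullptr,   // global memory barriers
        1, &barrier,  // buffer memory barriers
        0, nullptr);  // image memory barriers
}
```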
Here is the order of operations (a rough sketch of this flow follows the list):
- Start recording a new command buffer.
- Retrieve a staging buffer from the pool, removing it from the pool.
- Copy data into staging buffer.
- Insert command to copy from staging buffer to the GPU buffer.
- Insert pipeline barrier to make sure data is transferred before drawing begins.
- Execute the command buffer.
- When the fence is signaled, move all staging buffers back into the staging buffer pool.
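Putting the steps together, the flow might look something like the following. It reuses the hypothetical helpers from the sketches above, and the StagingBufferPool with its Acquire/Release methods is likewise hypothetical; the real engine's bookkeeping is more involved.

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Hypothetical staging buffer pool; real code would create a new buffer when
// none of the free ones is large enough.
struct StagingBufferPool {
    std::vector<StagingBuffer> free;
    StagingBuffer Acquire(VkDeviceSize size) {
        for (size_t i = 0; i < free.size(); ++i) {
            if (free[i].capacity >= size) {
                StagingBuffer sb = free[i];
                free.erase(free.begin() + i);
                return sb;
            }
        }
        return StagingBuffer{}; // placeholder: allocate a new staging buffer here
    }
    void Release(const StagingBuffer& sb) { free.push_back(sb); }
};

// Steps 1-6: record the upload and the barrier, then submit with a fence.
StagingBuffer RecordAndSubmitUpload(VkDevice device, VkCommandBuffer cmd, VkQueue queue,
                                    VkFence fence, StagingBufferPool& pool,
                                    const void* data, VkDeviceSize size, VkBuffer gpuBuffer)
{
    VkCommandBufferBeginInfo begin = {};
    begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    begin.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
    vkBeginCommandBuffer(cmd, &begin);                   // 1. start recording

    StagingBuffer staging = pool.Acquire(size);          // 2. take a staging buffer from the pool
    WriteToStagingBuffer(device, staging, data, size);   // 3. CPU copies data into it
    RecordStagingCopy(cmd, staging, gpuBuffer, 0, size); // 4. copy command: staging -> GPU buffer
    InsertTransferToVertexBarrier(cmd, gpuBuffer);       // 5. barrier: transfer before drawing

    // ... record draw commands here ...

    vkEndCommandBuffer(cmd);                             // 6. submit for execution
    VkSubmitInfo submit = {};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;
    vkQueueSubmit(queue, 1, &submit, fence);
    return staging;
}

// Step 7: once the fence signals, the staging buffer goes back into the pool.
void RecycleStagingBuffer(VkDevice device, VkFence fence,
                          StagingBufferPool& pool, const StagingBuffer& staging)
{
    if (vkGetFenceStatus(device, fence) == VK_SUCCESS)
        pool.Release(staging);
}
```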
In the new game engine, we have several large buffers to store the following data:
- Mesh vertices
- Mesh indices
- Entity 4x4 matrices (and other info)
- A list of visible entity IDs
- Visible light information
- Skeleton animation data
I found that this data tends to fall into two categories:
- Some data is large and only some of it gets updated each frame. This includes entity 4x4 matrices, skeleton animation data, and mesh vertex and index data.
- Other data tends to be smaller and only concerns visible objects. This includes visible entity IDs and light information. This data is updated completely each time a new visibility set arrives.
The first type of data requires buffers that can be resized, because they can be very large and more objects or data might be added at any time. For example, the vertex buffer contains every vertex that exists, across all meshes the user creates or loads. If a new mesh is loaded that requires more space than the buffer capacity, a new, larger buffer must be created and the full contents of the old buffer copied over, directly in GPU memory. A pipeline barrier is inserted to ensure the transfer into the new buffer is finished, and then the additional data is copied in.
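A sketch of how that resize might be recorded is below. CreateDeviceLocalBuffer stands in for whatever buffer-creation helper the engine actually uses, and the destination access mask assumes only further transfer traffic follows in this command buffer.

```cpp
#include <vulkan/vulkan.h>

// Hypothetical helper: creates a buffer and binds device-local memory to it.
VkBuffer CreateDeviceLocalBuffer(VkDevice device, VkDeviceSize size);

// Grow a device-local buffer: copy the old contents into a larger buffer
// entirely in GPU memory, then barrier so later transfers into (or reads from)
// the new buffer wait until that copy has finished.
VkBuffer GrowBuffer(VkCommandBuffer cmd, VkDevice device,
                    VkBuffer oldBuffer, VkDeviceSize oldSize, VkDeviceSize newSize)
{
    VkBuffer newBuffer = CreateDeviceLocalBuffer(device, newSize);

    // Copy the existing contents directly in GPU memory.
    VkBufferCopy region = {};
    region.srcOffset = 0;
    region.dstOffset = 0;
    region.size = oldSize;
    vkCmdCopyBuffer(cmd, oldBuffer, newBuffer, 1, &region);

    // The old-to-new copy must finish before additional data is transferred in.
    VkBufferMemoryBarrier barrier = {};
    barrier.sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT | VK_ACCESS_TRANSFER_READ_BIT;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.buffer = newBuffer;
    barrier.offset = 0;
    barrier.size = VK_WHOLE_SIZE;
    vkCmdPipelineBarrier(cmd,
        VK_PIPELINE_STAGE_TRANSFER_BIT, VK_PIPELINE_STAGE_TRANSFER_BIT,
        0, 0, nullptr, 1, &barrier, 0, nullptr);

    return newBuffer;
}
```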
The second type of data is a bit simpler. If the existing buffer is not big enough, a new bigger buffer is created. Since the entire contents of the buffer are uploaded with each new visibility set, there is no need to copy any existing data from the old buffer.
I currently have about 2500 lines of Vulkan-specific code. Calling this "boilerplate" is disingenuous, because it is really specific to the way you set your renderer up. In any case, the core mesh rendering system I first implemented in OpenGL is working, and I will soon begin adding support for textures.