Voxel Cone Tracing Part 4 - Direct Lighting


Josh

Now that we can voxelize models, insert them into a scene voxel tree structure, and perform raycasts, we can finally start calculating direct lighting. I implemented support for directional and point lights, and I will come back and add spotlights later. Here we see a shadow cast from a single directional light:

[Image: shadow cast by a single directional light]

And here are two point lights, one red and one green. Notice the distance falloff creates a color gradient across the floor:

[Image: red and green point lights with distance falloff creating a color gradient across the floor]

The idea here is to first calculate direct lighting using raycasts between the light position and each voxel:

[Image: diagram of direct lighting raycasts between the light and each voxel]
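
For a point light, that per-voxel test might look roughly like the sketch below. This is only a sketch: Raycast, position, and GatherDirectLight are placeholder names for this illustration, not necessarily the engine's actual API.

void GatherDirectLight(Voxel* voxel, Light* light, VoxelTree* scene)
{
	// Sketch only: Raycast and the member names here are assumed placeholders
	Vec3 p0 = voxel->position;	// voxel center in world space
	Vec3 lightpos = light->position;

	// If any solid voxel lies between this voxel and the light, it is in shadow
	if (scene->Raycast(p0, lightpos)) return;

	// Direction from the voxel toward the light, used as lightdir below
	Vec3 lightdir = (lightpos - p0).Normalize();

	// ...per-face damping and accumulation, shown in the snippet further down...
}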

Then once you have the direct lighting, you can calculate approximate global illumination by gathering a cone of samples for each voxel, which illuminates voxels not directly visible to the light source:

[Image: diagram of the first light bounce, gathered from a cone of samples per voxel]
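
A very rough sketch of what that gather might look like for a single voxel face is below. RandomDirectionInCone, HitVoxel, and GetDirectLighting are hypothetical helpers used only for illustration; the actual cone tracing approach is the subject of the next part.

Vec4 GatherBounce(Voxel* voxel, Vec3 facenormal, VoxelTree* scene)
{
	// Sketch only: RandomDirectionInCone, HitVoxel, and GetDirectLighting are hypothetical helpers
	Vec4 bounce(0);
	const int samplecount = 8;
	for (int s = 0; s < samplecount; ++s)
	{
		// Pick a direction inside a cone centered on this face's normal
		Vec3 dir = RandomDirectionInCone(facenormal, 60.0f);

		// Walk through the tree and find the first solid voxel in that direction
		Voxel* hit = scene->HitVoxel(voxel->position, dir);

		// Whatever direct light that voxel received becomes indirect light here
		if (hit != nullptr) bounce += hit->GetDirectLighting(-dir);
	}
	bounce /= float(samplecount);
	return bounce;
}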

And if we repeat this process we can simulate a second bounce, which really fills in all the hidden surfaces:

[Image: diagram of the second light bounce filling in hidden surfaces]

When we convert model geometry to voxels, one of the important pieces of information we lose is the surface normal. Without normals it is difficult to calculate damping for the direct illumination calculation. It is easy to check surrounding voxels and determine that a voxel is embedded in a floor or something, but what do we do in the situation below?

[Image: diagram of a thin wall of voxels with a light on one side and an enclosed room on the other]

The thin wall of three voxels is illuminated, which will leak light into the enclosed room. This is not good:

[Image: light leaking through the thin wall into the enclosed room]

My solution is to calculate and store lighting for each face of each voxel.

Vec3 normal[6] = { Vec3(-1, 0, 0), Vec3(1, 0, 0), Vec3(0, -1, 0), Vec3(0, 1, 0), Vec3(0, 0, -1), Vec3(0, 0, 1) };
for (int i = 0; i < 6; ++i)
{
	float damping = max(0.0f, normal[i].Dot(lightdir)); // normal damping
	if (!isdirlight) damping *= 1.0f - min(p0.DistanceToPoint(lightpos) / light->range[1], 1.0f); // distance damping
	voxel->directlighting[i] += light->color[0] * damping;
}

This gives us lighting that looks more like the diagram below:

[Image: diagram of per-face voxel lighting, with only the faces toward the light illuminated]

When light samples are read, the appropriate face will be chosen. In the final scene lighting on the GPU, I expect to be able to use the triangle normal to determine how much influence each face sample should have. I think it will look something like this in the shader:

vec4 lighting = vec4(0.0);
lighting += max(0.0, dot(trinormal, vec3(-1.0, 0.0, 0.0))) * texture(gimap, texcoord + vec2(0.0 / texwidth, 0.0));
lighting += max(0.0, dot(trinormal, vec3(1.0, 0.0, 0.0))) * texture(gimap, texcoord + vec2(1.0 / texwidth, 0.0));
lighting += max(0.0, dot(trinormal, vec3(0.0, -1.0, 0.0))) * texture(gimap, texcoord + vec2(2.0 / texwidth, 0.0));
lighting += max(0.0, dot(trinormal, vec3(0.0, 1.0, 0.0))) * texture(gimap, texcoord + vec2(3.0 / texwidth, 0.0));
lighting += max(0.0, dot(trinormal, vec3(0.0, 0.0, -1.0))) * texture(gimap, texcoord + vec2(4.0 / texwidth, 0.0));
lighting += max(0.0, dot(trinormal, vec3(0.0, 0.0, 1.0))) * texture(gimap, texcoord + vec2(5.0 / texwidth, 0.0));
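
Since a unit normal can have a positive dot product with at most one direction in each of the three axis pairs, no more than three of these six face samples actually contribute for any given triangle normal.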

This means that to store a 256 x 256 x 256 grid of voxels we actually need a 3D RGB texture with dimensions of 256 x 256 x 1536. This is 288 megabytes. However, with DXT1 compression I estimate that number will drop to about 64 megabytes, meaning we could have eight voxel maps cascading out around the player and still only use about 512 megabytes of video memory. This is where those new 16-core CPUs will really come in handy!
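
For reference, the uncompressed arithmetic works out as 256 x 256 x (256 x 6) = 100,663,296 texels, and at 3 bytes per RGB texel that comes to 301,989,888 bytes, or 288 megabytes.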

I added the lighting calculation for the normal Vec3(0,1,0) into the visual representation of our voxels and lowered the resolution. Although this is still just direct lighting, it is starting to look interesting:

[Image: voxel visualization showing direct lighting for the upward-facing (0,1,0) faces]

The last step is to downsample the direct lighting to create what is basically a mipmap. We do this by averaging the values of each voxel node's children:

void VoxelTree::BuildMipmaps()
{
	// The lowest level keeps the direct lighting calculated above
	if (level == 0) return;
	int contribs[6] = { 0 };
	for (int i = 0; i < 6; ++i)
	{
		directlighting[i] = Vec4(0);
	}
	// Recurse into the eight children and accumulate their per-face lighting
	for (int ix = 0; ix < 2; ++ix)
	{
		for (int iy = 0; iy < 2; ++iy)
		{
			for (int iz = 0; iz < 2; ++iz)
			{
				if (kids[ix][iy][iz] != nullptr)
				{
					kids[ix][iy][iz]->BuildMipmaps();
					for (int n = 0; n < 6; ++n)
					{
						directlighting[n] += kids[ix][iy][iz]->directlighting[n];
						contribs[n]++;
					}
				}
			}
		}
	}
	// Average each face by the number of children that contributed to it
	for (int i = 0; i < 6; ++i)
	{
		if (contribs[i] > 0) directlighting[i] /= float(contribs[i]);
	}
}
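
This would presumably run once per lighting update, starting at the root, after the direct lighting pass has filled in the leaf voxels. A minimal usage sketch, where UpdateDirectLighting and scenevoxeltree are placeholder names rather than the engine's actual API:

scenevoxeltree->UpdateDirectLighting(lights);	// raycast direct lighting into the leaf voxels
scenevoxeltree->BuildMipmaps();	// then average the per-face lighting up the tree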

If we start with direct lighting that looks like the image below:

[Image: direct lighting at the full voxel resolution]

When we downsample it one level, the result will look something like this (not exactly, but you get the idea):

[Image: the same lighting downsampled one level]

Next we will begin experimenting with light bounces and global illumination using a technique called cone tracing.
