Everything posted by Josh
-
This works fine:

// ====================================================================
// This file was generated by Leadwerks C++/LEO/BlitzMax Project Wizard
// Written by Rimfrost Software
// http://www.rimfrost.com
// ====================================================================

#include "engine.h"

int WINAPI WinMain( HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd )
{
	Initialize();
	RegisterAbstractPath("C:/Leadwerks Engine SDK");
	SetAppTitle( "test" );
	Graphics( 800, 600 );
	AFilter();
	TFilter();

	TWorld world;
	TBuffer gbuffer;
	TCamera camera;
	TMesh mesh;
	TLight light;
	TMesh ground;
	TMaterial material;

	world = CreateWorld();
	if (!world)
	{
		MessageBoxA(0,"Error","Failed to create world.",0);
		return Terminate();
	}

	gbuffer = CreateBuffer(GraphicsWidth(),GraphicsHeight(),BUFFER_COLOR|BUFFER_DEPTH|BUFFER_NORMAL);

	camera = CreateCamera();
	PositionEntity(camera,Vec3(0,0,-2));

	material = LoadMaterial("abstract::cobblestones.mat");

	CreateEmitter(50);

	mesh = CreateCube();
	PaintEntity(mesh,material);

	ground = CreateCube();
	ScaleEntity(ground,Vec3(10,1,10));
	PositionEntity(ground,Vec3(0,-2,0));
	PaintEntity(ground,material);

	light = CreateDirectionalLight();
	RotateEntity(light,Vec3(45,45,45));

	// Game loop
	while( !KeyHit() && !AppTerminate() )
	{
		if( !AppSuspended() ) // We are not in focus!
		{
			// Rotate cube
			TurnEntity( mesh, Vec3( 0.5f*AppSpeed() ) );

			// Update timing and world
			UpdateAppTime();
			UpdateWorld(AppSpeed());

			// Render
			SetBuffer(gbuffer);
			RenderWorld();
			SetBuffer(BackBuffer());
			RenderLights(gbuffer);

			// Send to screen
			Flip(0);
		}
	}

	// Done
	return Terminate();
}
-
It will only redraw the character meshes, and only in the sections of the point lights they affect.
-
March will be the last month to upgrade to Leadwerks Engine 2.3
Josh replied to Josh's topic in General Discussion
PayPal can also be used: https://www.paypal.com/xclick/business=orders%40leadwerks.com&item_name=Leadwerks%20Engine%202.3%20single-user%20Upgrade&amount=50.00&no_shipping=0&no_note=1&currency_code=USD

Syncing the update info between TheGameCreators and our sales has been difficult. After March I will be automating the process more, so I can spend my time more efficiently. The records are much more detailed and better organized now.
-
This addresses the emitter bug found in version 2.31. Let me know if you run into any other problems. engine.dll.zip
-
This would make a great resource: http://leadwerks.com/werkspace/index.php?/page/resources
-
What loading screen?
-
Make one copy in the main world and one copy in the transparency world, and hide and show them. If you need a pickable mesh, apply the invisible material to a mesh in the main world.
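Roughly like this sketch, assuming the usual SetWorld/LoadMesh/PaintEntity/HideEntity commands from engine.h; the glass mesh and material paths are just placeholders, not real SDK files:

// Pickable copy in the main world, painted with the invisible material
SetWorld(mainworld);
TMesh pickmesh = LoadMesh("abstract::glass.gmf"); // placeholder path
PaintEntity(pickmesh,LoadMaterial("abstract::invisible.mat"));

// Visible copy in the transparency world
SetWorld(transparencyworld);
TMesh drawmesh = LoadMesh("abstract::glass.gmf");
PaintEntity(drawmesh,LoadMaterial("abstract::glass.mat")); // placeholder material

SetWorld(mainworld);

// Hide or show whichever copy you need at the time
HideEntity(drawmesh);
ShowEntity(drawmesh);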
-
Good animation is all in the blend
Josh commented on Chris Paulson's blog entry in Chris Paulson's Blog
That's pretty interesting. The walk and run animations occur at different frequencies, yet you have managed a perfectly synced transition. Simon Benge, who created the animations for FPSCreatorX, is currently working on animations for our soldier model. I am starting off with idle and run in four directions. This will give you some good media to work with.
-
You aren't mixing the SSAO results in correctly, but it doesn't matter much because the results are much worse than even my first SSAO attempt. I'd like to see his demo, because no one who used his shader got results that looked anything like the images he posted.
-
This sort of works. I can't tell if this is what it is supposed to look like, or if it has an error. I'd like to see a demo of his technique before continuing. I don't see any color bounce in any of the shots after the first page.

uniform sampler2D texture0;  //color
uniform sampler2D texture1;  //depth
uniform sampler2D texture2;  //normal
uniform sampler2D texture10; //noise

uniform vec2 camerarange;
uniform vec2 buffersize;

float DepthToLinearDepth(in float depth)
{
	return (camerarange.x / (camerarange.y - depth * (camerarange.y - camerarange.x)) * camerarange.y)/(camerarange.y-camerarange.x);
}

vec3 readNormal(in vec2 coord)
{
	return normalize(texture2D(texture2, coord).xyz*2.0 - 1.0);
}

vec3 posFromDepth(vec2 coord){
	float d = texture2D(texture1, coord.xy).r;
	d = DepthToLinearDepth(d);
	vec3 tray = mat3x3(gl_ProjectionMatrixInverse)*vec3((coord.x-0.5)*2.0,(coord.y-0.5)*2.0,1.0);
	return tray*d;
}

//Ambient Occlusion form factor:
float aoFF(in vec3 ddiff,in vec3 cnorm, in float c1, in float c2, in vec2 coord){
	vec3 vv = normalize(ddiff);
	float rd = length(ddiff);
	return (1.0-clamp(dot(readNormal(coord.xy+vec2(c1,c2)),-vv),0.0,1.0)) *
		clamp(dot( cnorm,vv ),0.0,1.0) *
		(1.0 - 1.0/sqrt(1.0/(rd*rd) + 1.0));
}

//GI form factor:
float giFF(in vec3 ddiff,in vec3 cnorm, in float c1, in float c2, in vec2 coord){
	vec3 vv = normalize(ddiff);
	float rd = length(ddiff);
	return 1.0*clamp(dot(readNormal(coord.xy+vec2(c1,c2)),-vv),0.0,1.0)*
		clamp(dot( cnorm,vv ),0.0,1.0)/
		(rd*rd+1.0);
}

void main()
{
	vec2 texcoord = gl_FragCoord.xy/buffersize;

	//read current normal, position and color.
	vec3 n = readNormal(texcoord.st);
	vec3 p = posFromDepth(texcoord.st);
	vec3 col = texture2D(texture0, texcoord.xy).rgb;

	//randomization texture
	vec2 fres = vec2(800.0/128.0*5,600.0/128.0*5);
	vec3 random = texture2D(texture10, texcoord.st*fres.xy).xyz;
	random = random*2.0-vec3(1.0);

	//initialize variables:
	float ao = 0.0;
	vec3 gi = vec3(0.0,0.0,0.0);
	float incx = 1.0/800.0*0.1;
	float incy = 1.0/600.0*0.1;
	float pw = incx;
	float ph = incy;
	float cdepth = DepthToLinearDepth(texture2D(texture1, texcoord.xy).r);

	//3 rounds of 8 samples each.
	for(float i=0.0; i<3.0; ++i)
	{
		float npw = (pw+0.0007*random.x)/cdepth;
		float nph = (ph+0.0007*random.y)/cdepth;

		vec3 ddiff  = posFromDepth(texcoord.st+vec2(npw,nph))-p;
		vec3 ddiff2 = posFromDepth(texcoord.st+vec2(npw,-nph))-p;
		vec3 ddiff3 = posFromDepth(texcoord.st+vec2(-npw,nph))-p;
		vec3 ddiff4 = posFromDepth(texcoord.st+vec2(-npw,-nph))-p;
		vec3 ddiff5 = posFromDepth(texcoord.st+vec2(0,nph))-p;
		vec3 ddiff6 = posFromDepth(texcoord.st+vec2(0,-nph))-p;
		vec3 ddiff7 = posFromDepth(texcoord.st+vec2(npw,0))-p;
		vec3 ddiff8 = posFromDepth(texcoord.st+vec2(-npw,0))-p;

		ao += aoFF(ddiff,n,npw,nph,texcoord);
		ao += aoFF(ddiff2,n,npw,-nph,texcoord);
		ao += aoFF(ddiff3,n,-npw,nph,texcoord);
		ao += aoFF(ddiff4,n,-npw,-nph,texcoord);
		ao += aoFF(ddiff5,n,0,nph,texcoord);
		ao += aoFF(ddiff6,n,0,-nph,texcoord);
		ao += aoFF(ddiff7,n,npw,0,texcoord);
		ao += aoFF(ddiff8,n,-npw,0,texcoord);

		gi += giFF(ddiff,n,npw,nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(npw,nph)).rgb;
		gi += giFF(ddiff2,n,npw,-nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(npw,-nph)).rgb;
		gi += giFF(ddiff3,n,-npw,nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(-npw,nph)).rgb;
		gi += giFF(ddiff4,n,-npw,-nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(-npw,-nph)).rgb;
		gi += giFF(ddiff5,n,0,nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(0,nph)).rgb;
		gi += giFF(ddiff6,n,0,-nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(0,-nph)).rgb;
		gi += giFF(ddiff7,n,npw,0,texcoord)*texture2D(texture0, texcoord.xy+vec2(npw,0)).rgb;
		gi += giFF(ddiff8,n,-npw,0,texcoord)*texture2D(texture0, texcoord.xy+vec2(-npw,0)).rgb;

		//increase sampling area:
		pw += incx;
		ph += incy;
	}

	ao /= 24.0;
	gi /= 24.0;

	gl_FragColor = vec4(col-vec3(ao)+gi*5.0,1.0);
}
-
I converted the depth values to linear positions.
-
Because he wrote it wrong. Here's the closest I can get to working. It clearly is not outputting anything useful yet:

uniform sampler2D texture0;  //color
uniform sampler2D texture1;  //depth
uniform sampler2D texture2;  //normal
uniform sampler2D texture10; //noise

uniform vec2 camerarange;
uniform vec2 buffersize;

include "abstract::depthtozposition.frag"

vec3 readNormal(in vec2 coord)
{
	return normalize(texture2D(texture2, coord).xyz*2.0 - 1.0);
}

vec3 posFromDepth(vec2 coord){
	float d = texture2D(texture1, coord.xy).r;
	d = DepthToZPosition(d);
	vec3 tray = mat3x3(gl_ProjectionMatrixInverse)*vec3((coord.x-0.5)*2.0,(coord.y-0.5)*2.0,1.0);
	return tray*d;
}

//Ambient Occlusion form factor:
float aoFF(in vec3 ddiff,in vec3 cnorm, in float c1, in float c2, in vec2 coord){
	vec3 vv = normalize(ddiff);
	float rd = length(ddiff);
	return (1.0-clamp(dot(readNormal(coord.xy+vec2(c1,c2)),-vv),0.0,1.0)) *
		clamp(dot( cnorm,vv ),0.0,1.0) *
		(1.0 - 1.0/sqrt(1.0/(rd*rd) + 1.0));
}

//GI form factor:
float giFF(in vec3 ddiff,in vec3 cnorm, in float c1, in float c2, in vec2 coord){
	vec3 vv = normalize(ddiff);
	float rd = length(ddiff);
	return 1.0*clamp(dot(readNormal(coord.xy+vec2(c1,c2)),-vv),0.0,1.0)*
		clamp(dot( cnorm,vv ),0.0,1.0)/
		(rd*rd+1.0);
}

void main()
{
	vec2 texcoord = gl_FragCoord.xy/buffersize;

	//read current normal, position and color.
	vec3 n = readNormal(texcoord.st);
	vec3 p = posFromDepth(texcoord.st);
	vec3 col = vec3(1);//texture2D(texture0, texcoord.xy).rgb;

	//randomization texture
	vec2 fres = vec2(800.0/128.0*5,600.0/128.0*5);
	vec3 random = texture2D(texture10, texcoord.st*fres.xy).xyz;
	random = random*2.0-vec3(1.0);

	//initialize variables:
	float ao = 0.0;
	vec3 gi = vec3(0.0,0.0,0.0);
	float incx = 1.0/buffersize.x*0.1;
	float incy = 1.0/buffersize.y*0.1;
	float pw = incx;
	float ph = incy;
	float cdepth = DepthToZPosition(texture2D(texture1, texcoord.xy).r);

	//3 rounds of 8 samples each.
	for(float i=0.0; i<3.0; ++i)
	{
		float npw = (pw+0.0007*random.x)/cdepth;
		float nph = (ph+0.0007*random.y)/cdepth;

		vec3 ddiff  = posFromDepth(texcoord.st+vec2(npw,nph))-p;
		vec3 ddiff2 = posFromDepth(texcoord.st+vec2(npw,-nph))-p;
		vec3 ddiff3 = posFromDepth(texcoord.st+vec2(-npw,nph))-p;
		vec3 ddiff4 = posFromDepth(texcoord.st+vec2(-npw,-nph))-p;
		vec3 ddiff5 = posFromDepth(texcoord.st+vec2(0,nph))-p;
		vec3 ddiff6 = posFromDepth(texcoord.st+vec2(0,-nph))-p;
		vec3 ddiff7 = posFromDepth(texcoord.st+vec2(npw,0))-p;
		vec3 ddiff8 = posFromDepth(texcoord.st+vec2(-npw,0))-p;

		ao += aoFF(ddiff,n,npw,nph,texcoord);
		ao += aoFF(ddiff2,n,npw,-nph,texcoord);
		ao += aoFF(ddiff3,n,-npw,nph,texcoord);
		ao += aoFF(ddiff4,n,-npw,-nph,texcoord);
		ao += aoFF(ddiff5,n,0,nph,texcoord);
		ao += aoFF(ddiff6,n,0,-nph,texcoord);
		ao += aoFF(ddiff7,n,npw,0,texcoord);
		ao += aoFF(ddiff8,n,-npw,0,texcoord);

		/*
		gi += giFF(ddiff,n,npw,nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(npw,nph)).rgb;
		gi += giFF(ddiff2,n,npw,-nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(npw,-nph)).rgb;
		gi += giFF(ddiff3,n,-npw,nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(-npw,nph)).rgb;
		gi += giFF(ddiff4,n,-npw,-nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(-npw,-nph)).rgb;
		gi += giFF(ddiff5,n,0,nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(0,nph)).rgb;
		gi += giFF(ddiff6,n,0,-nph,texcoord)*texture2D(texture0, texcoord.xy+vec2(0,-nph)).rgb;
		gi += giFF(ddiff7,n,npw,0,texcoord)*texture2D(texture0, texcoord.xy+vec2(npw,0)).rgb;
		gi += giFF(ddiff8,n,-npw,0,texcoord)*texture2D(texture0, texcoord.xy+vec2(-npw,0)).rgb;
		*/

		gi += giFF(ddiff,n,npw,nph,texcoord);
		gi += giFF(ddiff2,n,npw,-nph,texcoord);
		gi += giFF(ddiff3,n,-npw,nph,texcoord);
		gi += giFF(ddiff4,n,-npw,-nph,texcoord);
		gi += giFF(ddiff5,n,0,nph,texcoord);
		gi += giFF(ddiff6,n,0,-nph,texcoord);
		gi += giFF(ddiff7,n,npw,0,texcoord);
		gi += giFF(ddiff8,n,-npw,0,texcoord);

		//increase sampling area:
		pw += incx;
		ph += incy;
	}

	ao /= 24.0;
	gi /= 24.0;

	gl_FragColor = vec4(col-vec3(ao)+gi*5.0,1.0);
}
-
Won't compile for me:

Error: Failed to compile fragment shader object.
0(24) : error C7011: implicit cast from "vec4" to "vec2"
0(32) : error C7011: implicit cast from "vec4" to "vec2"
0(42) : error C7011: implicit cast from "vec4" to "vec2"
0(46) : error C7011: implicit cast from "vec4" to "vec3"
0(56) : error C7011: implicit cast from "vec4" to "vec2"
0(82) : error C7011: implicit cast from "vec4" to "vec2"
0(83) : error C7011: implicit cast from "vec4" to "vec2"
0(84) : error C7011: implicit cast from "vec4" to "vec2"
0(85) : error C7011: implicit cast from "vec4" to "vec2"
0(86) : error C7011: implicit cast from "vec4" to "vec2"
0(87) : error C7011: implicit cast from "vec4" to "vec2"
0(88) : error C7011: implicit cast from "vec4" to "vec2"
0(89) : error C7011: implicit cast from "vec4" to "vec2"

Source:

#version 120
#define LW_MAX_PASS_SIZE 1024
#define LW_INSTANCED
#define LW_SM4

uniform sampler2D texture2;
uniform sampler2D texture1;
uniform sampler2D texture0;
uniform sampler2D texture10;

vec3 readNormal(in vec2 coord)
{
	return normalize(texture2D(texture2, coord).xyz*2.0 - 1.0);
}

vec3 posFromDepth(vec2 coord){
	float d = texture2D(texture1, coord).r;
	vec3 tray = mat3x3(gl_ProjectionMatrixInverse)*vec3((coord.x-0.5)*2.0,(coord.y-0.5)*2.0,1.0);
	return tray*d;
}

//Ambient Occlusion form factor:
float aoFF(in vec3 ddiff,in vec3 cnorm, in float c1, in float c2){
	vec3 vv = normalize(ddiff);
	float rd = length(ddiff);
	return (1.0-clamp(dot(readNormal(gl_TexCoord[0]+vec2(c1,c2)),-vv),0.0,1.0)) *
		clamp(dot( cnorm,vv ),0.0,1.0) *
		(1.0 - 1.0/sqrt(1.0/(rd*rd) + 1.0));
}

//GI form factor:
float giFF(in vec3 ddiff,in vec3 cnorm, in float c1, in float c2){
	vec3 vv = normalize(ddiff);
	float rd = length(ddiff);
	return 1.0*clamp(dot(readNormal(gl_TexCoord[0]+vec2(c1,c2)),-vv),0.0,1.0)*
		clamp(dot( cnorm,vv ),0.0,1.0)/
		(rd*rd+1.0);
}

void main()
{
	//read current normal, position and color.
	vec3 n = readNormal(gl_TexCoord[0].st);
	vec3 p = posFromDepth(gl_TexCoord[0].st);
	vec3 col = texture2D(texture0, gl_TexCoord[0]).rgb;

	//randomization texture
	vec2 fres = vec2(800.0/128.0*5,600.0/128.0*5);
	vec3 random = texture2D(texture10, gl_TexCoord[0].st*fres.xy);
	random = random*2.0-vec3(1.0);

	//initialize variables:
	float ao = 0.0;
	vec3 gi = vec3(0.0,0.0,0.0);
	float incx = 1.0/800.0*0.1;
	float incy = 1.0/600.0*0.1;
	float pw = incx;
	float ph = incy;
	float cdepth = texture2D(texture1, gl_TexCoord[0]).r;

	//3 rounds of 8 samples each.
	for(float i=0.0; i<3.0; ++i)
	{
		float npw = (pw+0.0007*random.x)/cdepth;
		float nph = (ph+0.0007*random.y)/cdepth;

		vec3 ddiff  = posFromDepth(gl_TexCoord[0].st+vec2(npw,nph))-p;
		vec3 ddiff2 = posFromDepth(gl_TexCoord[0].st+vec2(npw,-nph))-p;
		vec3 ddiff3 = posFromDepth(gl_TexCoord[0].st+vec2(-npw,nph))-p;
		vec3 ddiff4 = posFromDepth(gl_TexCoord[0].st+vec2(-npw,-nph))-p;
		vec3 ddiff5 = posFromDepth(gl_TexCoord[0].st+vec2(0,nph))-p;
		vec3 ddiff6 = posFromDepth(gl_TexCoord[0].st+vec2(0,-nph))-p;
		vec3 ddiff7 = posFromDepth(gl_TexCoord[0].st+vec2(npw,0))-p;
		vec3 ddiff8 = posFromDepth(gl_TexCoord[0].st+vec2(-npw,0))-p;

		ao += aoFF(ddiff,n,npw,nph);
		ao += aoFF(ddiff2,n,npw,-nph);
		ao += aoFF(ddiff3,n,-npw,nph);
		ao += aoFF(ddiff4,n,-npw,-nph);
		ao += aoFF(ddiff5,n,0,nph);
		ao += aoFF(ddiff6,n,0,-nph);
		ao += aoFF(ddiff7,n,npw,0);
		ao += aoFF(ddiff8,n,-npw,0);

		gi += giFF(ddiff,n,npw,nph)*texture2D(texture0, gl_TexCoord[0]+vec2(npw,nph)).rgb;
		gi += giFF(ddiff2,n,npw,-nph)*texture2D(texture0, gl_TexCoord[0]+vec2(npw,-nph)).rgb;
		gi += giFF(ddiff3,n,-npw,nph)*texture2D(texture0, gl_TexCoord[0]+vec2(-npw,nph)).rgb;
		gi += giFF(ddiff4,n,-npw,-nph)*texture2D(texture0, gl_TexCoord[0]+vec2(-npw,-nph)).rgb;
		gi += giFF(ddiff5,n,0,nph)*texture2D(texture0, gl_TexCoord[0]+vec2(0,nph)).rgb;
		gi += giFF(ddiff6,n,0,-nph)*texture2D(texture0, gl_TexCoord[0]+vec2(0,-nph)).rgb;
		gi += giFF(ddiff7,n,npw,0)*texture2D(texture0, gl_TexCoord[0]+vec2(npw,0)).rgb;
		gi += giFF(ddiff8,n,-npw,0)*texture2D(texture0, gl_TexCoord[0]+vec2(-npw,0)).rgb;

		//increase sampling area:
		pw += incx;
		ph += incy;
	}

	ao /= 24.0;
	gi /= 24.0;

	gl_FragColor = vec4(col-vec3(ao)+gi*5.0,1.0);
}
-
Change the texture uniform names to texture0, texture1, etc., and it will work perfectly. His results look better than mine. Nice find.
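In other words, the declarations just end up matching the ones in the shaders above:

uniform sampler2D texture0;  //color
uniform sampler2D texture1;  //depth
uniform sampler2D texture2;  //normal
uniform sampler2D texture10; //noise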
-
A blur operation would probably be cheap and effective. You could just do this in postfilter.frag, where the SSAO texture is read. Just read four pixels instead of one, and average the results. It would also get rid of that edge line.
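Something along these lines, as a rough sketch; the sampler name texture5 for the SSAO result is just an assumption here and may not match the actual postfilter.frag bindings, and buffersize is the same uniform as in the shaders above:

//4-tap average of the SSAO buffer (texture5 binding is assumed, not the real layout)
uniform sampler2D texture5; //SSAO result
uniform vec2 buffersize;

float ReadBlurredAO(in vec2 coord)
{
	vec2 pixel = 1.0/buffersize;
	float ao = texture2D(texture5, coord+vec2(-0.5,-0.5)*pixel).r;
	ao += texture2D(texture5, coord+vec2( 0.5,-0.5)*pixel).r;
	ao += texture2D(texture5, coord+vec2(-0.5, 0.5)*pixel).r;
	ao += texture2D(texture5, coord+vec2( 0.5, 0.5)*pixel).r;
	return ao*0.25;
}

Then use the returned value wherever the single SSAO read currently happens.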
-
You create an OpenGL viewport, and then Leadwerks will render to that using a custom buffer.
-
Why would performance decrease?
-
If you can create an OpenGL viewport, it will work.
-
In terms of features and speed, they are about the same.
-
OpenGL won't run without an operating system. C++ code won't execute without an operating system. An operating system is the thing that sends ones and zeros to your hard drive, CPU, RAM, and other components.
-
We're not in a normal cycle. DX10 came and went, and no one ever used it. There is nothing that indicates DX11 will be used any more than DX10 ever was. I am not able at this time to talk about what future platforms we are going to pursue. Actually, what we've seen is that "gaming" hardware gets better and better, while "non-gaming" PCs continue to be sold with the same crappy integrated chips, year after year. I don't think your typical office PC is getting any faster. However, the Steam hardware survey indicates most users have good hardware now, and they are your target market.
-
How many DirectX 10 and 11 games are there? I can only think of a few.

-Wii (OpenGL)
-XBox 360 (DirectX 9)
-PlayStation 3 (OpenGL)
-iPhone (OpenGL)
-Windows XP (OpenGL, DirectX 9)
-Windows Vista (OpenGL, DirectX 9, 10, 11)
-Windows 7 (OpenGL, DirectX 9, 10, 11)
-Mac (OpenGL)
-Linux (OpenGL)

OpenGL even supports Windows better than DirectX does. To support the same features we do with the same hardware, we would have to write two different renderers with two versions of DirectX, and they would be made obsolete when the next complete rewrite of DirectX came out.
-
Unreal Engine isn't DirectX. Unreal Engine is three different renderers written in DirectX or OpenGL, depending on the platform. A few years ago it made more sense to use DirectX. That isn't the case now. What version to use? DX9? Do you want a new DX9 engine in 2010? If you want DirectX 11, that means no support for XP, which is still the most popular version of Windows, even among Steam users. Then when support for more platforms is added, the entire renderer has to be rewritten. That also means no support for Shader Model 3 cards.