Everything posted by nick.ace
-
One thing to keep in mind is that SSR (screen-space reflections) can't reflect anything that isn't on screen. The two techniques aren't mutually exclusive, though.
-
I like Marty's suggestion if you're on a budget, although AMD products do tend to be less energy efficient if that's a concern (their APUs should do better in that regard). I would upgrade your CPU if you can, though. Aside from the low clock speed (2.5 GHz) and only having two cores, your current CPU is built on a much larger manufacturing process than current CPUs. This is a good, if very technical, answer on why that matters: http://superuser.com/questions/808776/whats-the-difference-between-mobile-and-desktop-processors Also, keep in mind that modern CPUs are very complex, and the architectural changes in each manufacturing generation can be very large.

I disagree with this. If AMD drivers are problematic, then by that logic wouldn't it be better to develop your game on AMD hardware, so that you can find bugs before your users do?
-
I doubt Shadowplay records at a variable framerate, so it probably drops to a lower fixed one. If you're getting around 45 FPS (as in the other thread you commented on), it's likely sampling at 30, so the recording doesn't show the stuttering you would normally get with V-Sync enabled. Turn off V-Sync if you don't want that stuttering.
-
Actually, volumetric shadows (or volumetric lighting) do something like this by deforming meshes with vertex shaders, but that's way too complicated for this. You'll want to avoid casting rays outward from the player at evenly spaced angles: that generates too many rays in regions where there's no target, and you'd need an infinite number of rays to cover everything. Do the inverse instead. Put some sort of box around the target and cast rays at intervals between the top and the bottom of that box. This gives you a decent estimate. Also note that Pick doesn't stop at the first object; internally it generates a list of objects since the tests aren't done in order, so you'll have to use multiple picks no matter what.
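To make that concrete, here's a minimal Lua sketch of the idea. It assumes a hypothetical castRay(from, to) helper (e.g. a wrapper around the engine's pick call) and that you already know the world positions of the bottom and top of the box around the target; none of these names are engine API.

-- Sample rays across the target's vertical extent instead of sweeping
-- evenly spaced angles from the player.
function CanSeeTarget(playerPos, boxBottom, boxTop, samples)
    for i = 0, samples - 1 do
        -- t runs from 0 (bottom of the box) to 1 (top of the box)
        local t = (samples > 1) and i / (samples - 1) or 0.5
        local point = {
            x = boxBottom.x + (boxTop.x - boxBottom.x) * t,
            y = boxBottom.y + (boxTop.y - boxBottom.y) * t,
            z = boxBottom.z + (boxTop.z - boxBottom.z) * t
        }
        -- castRay is hypothetical: it should return true if the ray from
        -- the player to this point reaches the target unobstructed.
        if castRay(playerPos, point) then
            return true
        end
    end
    return false
end

A handful of samples (say 5 to 10) is usually enough to decide visibility without checking an unbounded number of angles.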
-
I didn't realize the end of the hall was a wall. You have to give all of the walls a certain amount of thickness; overlapping walls won't fix it unless the walls themselves have thickness.
-
^lol, I thought that comment was talking about the edge in the picture XD Have you tried increasing the lighting quality even more? Does it only happen when you move away from the edge, or does it happen anywhere? Also, do you have a directional light as well?
-
@Crazycarpet Why wouldn't you expect to see big drops at the beginning? Seconds per frame scale linearly, but frames per second is an inverse function, so the drop should be largest at first. @Josh It's good to see the bounding-box recalculation getting a better design. Have you tried keeping an internal tree (in array form) that uses structs for the bones instead of classes? That should speed up parent-transformation lookups immensely.
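To put numbers on the FPS point, here's a tiny plain-Lua sketch (no engine calls, just arithmetic) showing why a fixed per-frame cost produces huge drops at high framerates and tiny ones at low framerates:

-- FPS = 1000 / frame_time_ms is an inverse function, so adding a constant
-- 1 ms of work per frame costs far more FPS at the top end.
local added_ms = 1
for _, fps in ipairs({1000, 240, 60, 30}) do
    local frame_ms = 1000 / fps
    local new_fps = 1000 / (frame_ms + added_ms)
    print(string.format("%4d FPS -> %6.1f FPS after +%d ms", fps, new_fps, added_ms))
end
-- 1000 -> 500.0, 240 -> 193.5, 60 -> 56.6, 30 -> 29.1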
-
Could it be the second ReadString command (perhaps replace it with ReadLine)? I don't know what the remove function does.
-
You can't really start loading assets when you get within a certain range (that's how streaming is usually used) without causing some sort of noticeable lag, but unloading assets should be pretty much instantaneous. There's a command to load a world, but it doesn't work in the background. That being said, one user got it somewhat working, but I haven't really heard much about it since. Either way, it's impossible right now to send models to the GPU in the background (streaming).
-
Where do you see this command? There is a ReadFile command, but it only takes a path as an argument. Maybe I'm interpreting the question wrong.
-
There's no such thing as CSG shaders. CSG is handled the same way as models (so it uses the shaders you select in the material).
-
No, there is no streaming. It's a feature I really want too. You can't implement this yourself because it requires specific OpenGL functions to accomplish. You will have to break up your maps, but you can keep a pool of objects in VRAM so you won't have to load everything from scratch each time you enter a new region; anything new will still have to be loaded, and anything you no longer need will have to be unloaded.
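As a rough illustration of the pooling idea, here's a minimal Lua sketch. It assumes the usual Leadwerks Model:Load, Show, Hide, and Release calls, only tracks one instance per model path, and all the bookkeeping names are mine, not an engine feature:

-- Keep loaded models around and hide them instead of releasing them,
-- so re-entering a region doesn't pay the load cost again.
ModelPool = { models = {} }

function ModelPool.Get(path)
    local model = ModelPool.models[path]
    if model == nil then
        model = Model:Load(path)          -- first use: pay the load cost
        ModelPool.models[path] = model
    end
    model:Show()
    return model
end

function ModelPool.Park(path)
    local model = ModelPool.models[path]
    if model ~= nil then model:Hide() end -- keep it resident for reuse
end

function ModelPool.Evict(path)
    local model = ModelPool.models[path]
    if model ~= nil then
        model:Release()                   -- free it only when memory is tight
        ModelPool.models[path] = nil
    end
end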
-
I agree with gamecreator. If there aren't going to be profiling tools (and there really should be), then there needs to be extensive documentation for this. The only one who knows how the engine is designed is Josh, so he's the only person who can definitively say where the engine performs well and where it doesn't. I really shouldn't be expected to upload a project every time I encounter a performance issue.
-
I love the graphics theme! The colors and textures are very well done.
-
Outdoor scene with a few NPCs cannot get above 20-35 FPS
nick.ace replied to blueapples's topic in General Discussion
And 10k is too much for characters 100 m away, and 1k is way too much for a character 1000 m away. That game was one of the first on the PS4, came out three years ago, and uses forward rendering, and even then its third LOD level is 10k. Check out this paper: http://graphics.stanford.edu/papers/fragmerging/shade_sig10.pdf And this discussion (to help prove that I'm not making these numbers up): https://www.reddit.com/r/gamedev/comments/26fpq1/polycount_and_system_requirements/ The paper covers overdraw and a way to change GPUs to better render small triangles. This is relevant because the OP's problem isn't related just to the characters' vertex counts: in the same way that 40k characters are inappropriate at certain distances, 10k characters are as well. Rendering lots of small triangles puts stress on the rasterizer and also adds more fragments to be computed. The point is that 13k is not high-end in today's games, and that 40-60k should be reasonable as long as you use LODs (which you should also be using for 10k characters, because of the overdraw problem).
-
This is ridiculously awesome!!! It should help make AI soooo much easier to program and much more complex. If people start making AI functions and uploading it on the Workshop, I can't even imagine the possibilities! It's looking great so far! Really excited to see where this goes!!!
-
Ok, that's not easy to do. You would have to use shaders pretty cleverly to be able to do this. I think you should avoid this type of animation.
-
Another way you could do it is to just subtract the positions and then calculate the new spot from there:

dx = enemy.x - player.x
dz = enemy.z - player.z
normalized_dx = dx/sqrt(dx^2+dz^2)
normalized_dz = dz/sqrt(dx^2+dz^2)
final_x = normalized_dx * distance + player.x
final_z = normalized_dz * distance + player.z

And then move the enemy to that position. This way, you wouldn't have to deal with pivots.
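In Leadwerks Lua that could look roughly like this (a sketch only; it assumes the usual GetPosition/SetPosition entity calls with the global flag, and that the desired offset is stored in a variable called distance):

-- Place the enemy "distance" units away from the player along the
-- player->enemy direction, projected onto the XZ plane.
local playerPos = player:GetPosition(true)
local enemyPos = enemy:GetPosition(true)

local dx = enemyPos.x - playerPos.x
local dz = enemyPos.z - playerPos.z
local length = math.sqrt(dx * dx + dz * dz)

if length > 0 then
    local nx, nz = dx / length, dz / length
    enemy:SetPosition(nx * distance + playerPos.x, enemyPos.y, nz * distance + playerPos.z, true)
end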
-
You should post your error messages and code to show what's happening/what you're struggling with.
-
Shadmar made most (if not all?) of the post-processing shaders in the pack on the Workshop, so he would know best. I think you just organize them by number (lowest number being the first, etc.). But Thirsty Panther is right about experimenting because I've sometimes felt that different orders worked better. A few tips though:
- Use SSAO as one of the first shaders
- Use fog after SSAO but before everything else
- Use SSLR as one of the last shaders
- Use color-based shaders last (i.e., grayscale)
- Use depth of field last
IDK where to put bloom and a few others though.
-
Outdoor scene with a few NPCs cannot get above 20-35 FPS
nick.ace replied to blueapples's topic in General Discussion
If you look at the slides, they say they regularly draw over 11 million triangles. This isn't the only game to do this either (I'm sure you can find more data by searching): http://kotaku.com/just-how-more-detailed-are-ps4-characters-over-ps3-char-507749539 BTW, the PS4's GPU sits roughly between the GTX 750 Ti and the GTX 760 in terms of floating-point operations per second and core count, and there's no integrated GPU better than that (Intel's two newest top-end integrated GPUs do compete with it, though). Since the GTX 750 Ti is at the lower end (look at the Steam system requirements for some of the newer AAA non-VR games), I don't think 30k is unreasonable for characters, but LODs would certainly help.
-
I have the GTX 750 Ti, and it was great for a while, but I'm now struggling to play some of the latest games on low settings. I think that might not be a bad low end (or even the 280X, depending on when you plan to release) when you consider how the market will look in the future. It seems like almost half of the GPUs on Steam have 2 GB of VRAM or more, and I think the GTX 750 Ti is on the lower end of 2 GB cards, but I might be wrong. The good news is that GPU throughput should continue to increase rapidly, unlike the CPU market.
-
You can just edit the vegetation shader. I'll update this answer once I find the right line. Edit: So what you have to do is this (and you need to do it for every material your vegetation uses):
- Go to the materials for your vegetation, go to the shader tab, and press the pencil button next to the "Vegetation" shader.
- A screen should pop up; change the stage to "Vertex".
- There's a line that adjusts the scale (you will see an equation that sets the scale); this scaling variable is what you need to change.
- Make two new float variables for your range (i.e. "float small", "float large").
- Replace the .x scaling range variable with "small" and the .y scaling range variable with "large".
Now your vegetation will scale within those ranges.
-
Forget I said that. Is all you have right now a tube? Could you post a video of what your animation is like? I think then it'll be clearer which direction to take.
-
Scaling animation doesn't work in Leadwerks. I don't think it's part of the .fbx file format, but I might be wrong. Your best bet is to either make a shader to scale this yourself (difficult and a lot of work) or to scale it in your code by scaling the tube (if the tube only has one bone): http://www.leadwerks.com/werkspace/page/api-reference/_/entity/entitysetscale-r31
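For the code route, something along these lines could work as a starting point (a sketch attached to the tube's entity script; the target and speed values are placeholders, and it assumes the tube is modeled so that scaling one axis lengthens it):

-- Grow the tube over time by scaling the entity each frame.
Script.targetScale = 3.0   -- final length multiplier (placeholder)
Script.growSpeed = 0.02    -- scale added per update (placeholder)
Script.currentScale = 1.0

function Script:UpdateWorld()
    if self.currentScale < self.targetScale then
        self.currentScale = math.min(self.currentScale + self.growSpeed, self.targetScale)
        -- Only the Y axis is scaled here; swap axes to match how the tube is modeled.
        self.entity:SetScale(1, self.currentScale, 1)
    end
end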