Ma-Shell

Members
  • Posts: 371
  • Joined
  • Last visited

Everything posted by Ma-Shell

  1. The error message tells you that you're missing a fourth argument (argument #5, because "self.entity" is implicitly #1 and r, g, b are #2, #3, #4), and that it should be a number. Looking at the documentation (https://www.leadwerks.com/learn?page=API-Reference_Object_Entity_Camera_SetFogColor), you can see that an alpha value is expected as well, i.e. the call is SetFogColor(r, g, b, a).
  2. #include <sstream>
     #include <string>
     #include <unordered_map>

     std::unordered_map<std::string, Model*> models;
     ...
     std::stringstream mid_stream;
     mid_stream << -dim << ":" << -df << ":" << dim << (...)
     std::string mid = mid_stream.str();
     if (models.find(mid) == models.end())
     {
         Model* model = Model::Create();
         model->SetRotation(orientation);
         models[mid] = model;
     }

     Edit: meh, 5 minutes too late
  3. How about instead having PlayAnimation return a handle/id, which you could later use to query the status?
  4. Ma-Shell

    Steam?

    To the right, there is a box which says "Controles del propietario" (Spanish for "Owner controls"). If you scroll down, you will see that the last item in that box is the Spanish equivalent of "Change Visibility". There you can define who can see your item.
  5. According to http://enet.bespin.org/Features.html under the heading "Sequencing" (third paragraph), ENet already does that:
  6. Then save the old font:

         oldfont = context:GetFont()
         context:SetFont(myfont)
         -- [write...]
         context:SetFont(oldfont)
  7. You need to create a separate buffer, render to this buffer and set this buffer as the texture for the corresponding material. For a code example, look at the code here: https://www.leadwerks.com/community/topic/16413-draw-on-buffer-to-apply-to-sprite/#comment-107677
  8. For me the text is red (granted, it's a very dark red...). You should be able to make it more visible by using "diffuse+normal+emission+alphamask.shader" and binding the texture to both channels 0 (diffuse) and 4 (emission), i.e.:

         local shader = Shader:Load("Shaders/model/diffuse+normal+emission+alphamask.shader")
         local currentBuff = Buffer:GetCurrent()
         local mat = Material:Create()
         mat:SetShader(shader)
         local sprite = Sprite:Create()
         sprite:SetViewMode(0)
         sprite:SetMaterial(mat)
         local buffer = Buffer:Create(20, 20, 1, 1)
         -- draw the text on the buffer
         Buffer:SetCurrent(buffer)
         self.context:SetBlendMode(Blend.Alpha)
         self.context:SetColor(Vec4(1, 0, 0, 1))
         self.context:DrawText("Hello", 0, 0)
         local tex = buffer:GetColorTexture(0)
         mat:SetTexture(tex, 0)
         mat:SetTexture(tex, 4)
         Buffer:SetCurrent(currentBuff)

     That should make it more crisp.
  9. The shader is the problem. Use "Shaders/model/diffuse+normal+alphamask.shader" instead of "diffuse.shader", then it should work.
  10. You should put the libs into Leadwerks.h with #pragma comment: if you add the following to the beginning of Leadwerks.h, it will automatically tell the linker that these libs are needed, so you do not need to add them to the linker settings for every project individually. (You will still have to add the include header search directories, though, since there is sadly no pragma for this.)

          #pragma comment(lib, "libcryptoMT.lib")
          #pragma comment(lib, "libsslMT.lib")
          #pragma comment(lib, "Rpcrt4.lib")
          #pragma comment(lib, "crypt32.lib")
          #pragma comment(lib, "libcurl.lib")
          #pragma comment(lib, "lua51.lib")
          #pragma comment(lib, "msimg32.lib")
          #pragma comment(lib, "steam_api.lib")
          #pragma comment(lib, "ws2_32.lib")
          #pragma comment(lib, "Glu32.lib")
          #pragma comment(lib, "Leadwerks.lib")
          #pragma comment(lib, "OpenAL32.lib")
          #pragma comment(lib, "OpenGL32.lib")
          #pragma comment(lib, "winmm.lib")
          #pragma comment(lib, "Psapi.lib")
          #ifdef DEBUG
          #pragma comment(lib, "libovrd.lib")
          #pragma comment(lib, "newton_d.lib")
          #pragma comment(lib, "dContainers_d.lib")
          #pragma comment(lib, "dCustomJoints_d.lib")
          #else
          #pragma comment(lib, "libovr.lib")
          #pragma comment(lib, "newton.lib")
          #pragma comment(lib, "dContainers.lib")
          #pragma comment(lib, "dCustomJoints.lib")
          #endif

      This makes updating projects easier. (I only tried it in Visual Studio, and I don't believe gcc honors these pragmas, so if you are developing on Linux, you still need to add the libraries yourself, like it is right now.)
  11. Shouldn't the name of the game be in there somewhere as well? Or do you just get a list of all servers and then have to read through the descriptions to see if one is a server for your game?
  12. Ma-Shell

    gnet

    This shows only "ERROR"...
  13. If you DO want to explicitly use the cross product (which is less efficient), nick.ace's approach has two small errors in the initial vectors for tangent and bitangent (and it's OK to normalize after calculating the cross product). It should be:

          Vec3 tangent = Vec3(2.0, center_r - center_l, 0.0);
          Vec3 bitangent = Vec3(0.0, center_d - center_u, 2.0);
          Vec3 normal = cross(tangent, bitangent).Normalize();
  14. Yes, they are meant to be Vec3. Given that your method has the following inputs:

          unsigned int x, unsigned int y, float map[]

      When I wrote f(x,y), I meant map evaluated at the given coordinates, so these correspond to your center_X variables, i.e.:

          f(x,y)   = map[current_index]
          f(x-1,y) = center_l
          f(x+1,y) = center_r
          f(x,y-1) = center_u
          f(x,y+1) = center_d

      This means you can calculate the left, right, up, down vectors as:

          Vec3 Left = Vec3(x-1, y, center_l);
          Vec3 Right = Vec3(x+1, y, center_r);
          Vec3 Up = Vec3(x, y-1, center_u);
          Vec3 Down = Vec3(x, y+1, center_d);

      However, you don't need to define these; as you can see in my post, you can directly write down the result:

          Vec3 normal = Vec3(2*(center_r-center_l), 2*(center_d-center_u), -4).Normalize();

      You can see this actually ends up the same as what you wrote in your initial post (after it is normalized), only with y and z flipped, which is my fault for assuming z was the height coordinate. So in conclusion: what you are doing in your initial post is exactly the result of the cross product. If you actually DO use the cross product instead of what you did there, you don't gain anything; you only lose efficiency.
  15. You are right, this can be done using the cross product. For calculating the normal at the point p with coordinates [x; y; f(x,y)] (where f(x,y) is the height value at point x, y), the following four neighbouring points are of interest:

          Left:  l [x-1; y; f(x-1, y)]
          Right: r [x+1; y; f(x+1, y)]
          Up:    u [x; y-1; f(x, y-1)]
          Down:  d [x; y+1; f(x, y+1)]

      This means you have the following four vectors:

          l -> p: lp [x-(x-1); y-y; f(x,y)-f(x-1,y)] = [1; 0; f(x,y)-f(x-1,y)]
          p -> r: pr [(x+1)-x; y-y; f(x+1,y)-f(x,y)] = [1; 0; f(x+1,y)-f(x,y)]
          u -> p: up [x-x; y-(y-1); f(x,y)-f(x,y-1)] = [0; 1; f(x,y)-f(x,y-1)]
          p -> d: pd [x-x; (y+1)-y; f(x,y+1)-f(x,y)] = [0; 1; f(x,y+1)-f(x,y)]

      You can build the four cross products up x lp, up x pr, pd x lp, pd x pr (I used the right-hand rule to determine in which order to take each cross product) and then take their average for your normal. Since every vector has one component equal to 1 and one equal to 0, the cross products are fairly simple:

          up x lp: [f(x,y)-f(x-1,y); f(x,y)-f(x,y-1); -1]
          up x pr: [f(x+1,y)-f(x,y); f(x,y)-f(x,y-1); -1]
          pd x lp: [f(x,y)-f(x-1,y); f(x,y+1)-f(x,y); -1]
          pd x pr: [f(x+1,y)-f(x,y); f(x,y+1)-f(x,y); -1]

      Adding these four vectors and then normalizing the sum should yield your normal.

      EDIT: You can also just take the two vectors

          u -> d: ud [x-x; (y+1)-(y-1); f(x,y+1)-f(x,y-1)] = [0; 2; f(x,y+1)-f(x,y-1)]
          l -> r: lr [(x+1)-(x-1); y-y; f(x+1,y)-f(x-1,y)] = [2; 0; f(x+1,y)-f(x-1,y)]

      and their cross product:

          ud x lr = [2(f(x+1,y)-f(x-1,y)); 2(f(x,y+1)-f(x,y-1)); -4]

      and normalize that. Doing so yields a less accurate version but might be faster, since you only need to calculate one cross product instead of four.
  16. It's far from perfect, but I wrote a simple Python 3 script which parses "API-Reference_Object_Entity_SetPosition.xml" (which has to be in the current directory) and exports it as HTML ("out.html") and LaTeX ("out.tex"). You will need a LaTeX compiler (e.g. https://miktex.org/download) to generate a PDF from this LaTeX file. generate_docu.py.txt
  17. I think what you're trying to do is called texture splatting. Take a look at this link: http://www.gamasutra.com/blogs/AndreyMishkinis/20130716/196339/Advanced_Terrain_Texture_Splatting.php The shader should give you access to the normals and positions, so you should be able to use these to determine height and slope.
  18. The material itself has some functions for setting shader uniforms (sadly the function names do not always reflect this, but e.g. SetFloat, SetVec2, ... are for setting uniforms). I don't really know about these types of buffers for shaders, but if you can represent your buffer as a texture, you can do the following.

      C++:

          m->GetSurface(0)->GetMaterial()->SetTexture("Materials/Developer/window.tex", 3);

      This will bind the given texture to texture slot 3 of m's material.

      Fragment shader:

          uniform sampler2D texture3;
          ...
          outcolor = texture(texture3, vTexCoords);

      I hadn't initially tried whether it is possible to access it from the geometry shader, but I just tried it and it works there, as well.
  19. Actually, you can access the forum over HTTPS as well. But most links have "http://" in their "href" field, and thus will route you to the unencrypted pages.
  20. By default it won't change the physics shape. It is, however, possible to send the output of the geometry shader to an output stream and then access it from the CPU. I have never done such a thing though, and don't know exactly how that works, but it should be possible to generate the physics shape from it. Some reading: https://www.khronos.org/opengl/wiki/Transform_Feedback https://www.khronos.org/opengl/wiki/Geometry_Shader#Output_streams
  21. http://www.leadwerks.com/werkspace/topic/15851-documentation-xml-raw-data/#entry105733 Btw. I copied the PHP code you posted above and tried it locally, and it worked...
  22. The difference is that martyj uses JS to load and parse the XML files client-side, while PHP does that server-side. CORS is a security mechanism implemented by browsers to prevent cross-origin XSS; it does not apply to server-side requests.
  23. For me it works with the lines you posted (though I used a custom-created camera). Are you sure that is actually your camera? Does AddPostEffect return anything for you? (On error you should get -1, otherwise an index.)
  24. Thanks for actually drawing that thing out. So you're calculating with theta, while I'm calculating directly with the angle fov/2.0; that's where the difference comes from. I checked with Wolfram Alpha, and both of our terms are in fact equal. When I first checked, I forgot that Wolfram Alpha uses rad, so I had to convert the 90 before checking... So... now at least I can sleep in peace.
  25. Oh, my apologies then. I've been trying to make sense of the math you used, but I don't get how that's supposed to work, or what my formula does wrong. I get that the "math.pi/180" term is only for deg->rad, since the standard math.tan function works with rad, whereas the Leadwerks Math::Rad function works with deg. But for the rest, you have "tan(90-fov/2)" where I have "1/tan(fov/2)", and these terms are surely not identical... No matter how I draw things out, I always end up with the term I wrote above.