Everything posted by Mumbles
-
I Want to Use LeadWerks in FPS CREATOR
Mumbles replied to naeembabakheil's topic in General Discussion
I just saw them as a list of entities, like lights, or objects (gmf files). That would be because I don't use terrains... Maybe I should try actually saving a terrain (like the original post asks) before thinking I know enough to provide a comment.
-
I Want to Use LeadWerks in FPS CREATOR
Mumbles replied to naeembabakheil's topic in General Discussion
As far as I know, the editor (sandbox) does not export to a format that FPS Creator can use. I think it only exports to .sbx, which FPSC doesn't understand. Put simply, I think the editor works only with the Leadwerks Engine.

Both of these have the same answer: whilst the best way to make anything with the engine is with a programming language, it is possible to make an entire game just with a Lua script. Lua scripting is supposed to be much easier to learn than most programming languages, so you should be able to learn a bit of Lua without any programming knowledge at all. -
Nice idea -- but it won't happen, and I wouldn't want it to. Everyone else would feel ripped off and demand the next upgrade for free. When I have something up and running -- then I'll improve it, by buying 2.3...
-
I guess I'm actually quite lucky not to have this problem. But 100 m should not qualify as 'far'. It seems that this system was built for tight underground corridors in an FPS-type game. What I'm working on is an outdoor-type game, where close would be up to 100 m, medium up to 300 m, and far, 800 m. Only beyond 800 m would I think an infinite distance be necessary (all of these values assume no zoom magnification). But like I say, until I upgrade to 2.3, it's not an issue for me to worry about.
-
+1, then -1. Why should one script nullify some actions of another? Scripts should be communists - all equal.
-
Without trying to be nasty - why would you only want to do your 'stuff' once per second? I must be missing something. It sounds like, whatever the frame rate, it's going to be jumpy as objects teleport around once per second... Personally, I do 'stuff' 50 times per second, and render no faster than 60 times per second.
-
That's exactly the same as writing:

if(0 >= AppSpeed()*60) {

And that's almost always going to be false. It will only be true when AppSpeed() returns 0 (since 0 * 60 = 0).

Apparently my screen refreshes at 600 Hz with a response time of 2 ms. I'm assuming that's a typo on the box and they've put too many zeroes on, since that's amazingly fast - and 2 ms should equal 500 Hz. Even 500 Hz seems hard to believe though (it's a Samsung SyncMaster T220A).

Personally, I do this:

//Global space
float RenderGap = 1000.0f/60.0f;
float TimeWaiting = 0.0f;
float LastAppTime;

int main()
{
    //Calculate logic etc.
    RenderScreen(); //Note, this is not a Leadwerks command - it's defined below
}

void RenderScreen()
{
    TimeWaiting += AppTime() - LastAppTime;
    LastAppTime = AppTime();
    if(TimeWaiting >= RenderGap)
    {
        //TimeWaiting = 0.0f;
        TimeWaiting -= RenderGap; //Edit - this line gives better accuracy, but the above line still works...
        //Do lighting, post-processing, etc.
        Flip(0);
    }
    //If it wasn't time to render, this function returns without actually rendering.
}
-
Leadwerks Blog Post 06: Incoming Renders
Mumbles commented on DaveLee's blog entry in Dave Lee's Blog
Wow, I look at all this great stuff - and then start to feel quite jealous, since I can't even model a light bulb, let alone one of those energy saving ones...
-
It may just be me, but I generally build a machine from components that are already 1 - 2 years old, and can make the machine last for another 4 years. I never buy brand new components.

For processors, I bought an AMD Phenom X4 at practically the same time they started to bring out Phenom IIs (the socket AM2+ versions; AM3 versions were 2 months later). So I got quite a high-end processor for about £100 (which back in Dec 2008 was probably about $120). There's no need to upgrade that any time soon. And the computer that's actually designed for playing games has an even older processor (Athlon 64 X2 6000+). Again, there's no need to upgrade that either, as more and more processing is done on graphics cards.

Both my machines are equipped with GeForce 8 series cards, which weren't that new even when I bought them, and they've probably still got a bit of life in them yet (but if I was building today rather than 18 months ago, I'd probably settle for GeForce 9s instead).

After several problems with hard drives, it's now very clear in my mind: don't buy anything but Seagate. 80 GB or smaller for an OS drive, and 1 TB or larger for a storage drive.
-
3D units equal whatever you want them to be. But yes, most people use the scale of 1 unit = 1 metre. And whatever scale you decide to use, stick with it.
-
Looks quite good actually, probably because there's only one picture and it doesn't dominate the entire page. All I can think of that's maybe missing is a 'Join Selected' button. Whilst I'm assuming the idea is 'double-click an entry to join it', I think some people would prefer a button. That way, those that want to double-click can, and those that want to go the slow way also can. But for a test, I might have put fake pings in of around 50 - 100, not 8000!
-
BatVink, if I'm not mistaken... The worst part of C/C++ is its syntax. I built up slowly from DBC, then to DBP, and was moderately proficient with that. I looked at C++ and it was so confusing, I just didn't understand it. I even bought a book, "Teach Yourself C++ in 24 Hours". Chapter 5 (or 24) was all about pointers, and at that point I gave up.

It wasn't until I was taught Java at uni that I really got a feel for the syntax, because Java is more-or-less the same as C++, just with a few differences. Indeed, coming up from DBP, Java was easy to pick up, and most of my first and second years were actually spent trying to teach everyone else, as they just couldn't seem to learn from the lecturers. Once I knew Java, C++ came naturally. One thing I learned from Java is to not use pointers - they're complicated, and most of the time they're also not needed.

Then I realised that C is even quite similar to DBP. There are only a few major differences; I could list them here if you're interested. But as with most programming languages, the largest difference is just the way your code must be written. Most of your DBP knowledge can be carried over to C++ once you know what the syntax is.
-
Maybe with the new version of Newton... In my earlier dev days with DBP (seems like so long ago) I was using the Newton 1.32 wrapper written by one of the users, by the name of Kjelle. For that version, I was actually updating the physics 100 times per second, as that seemed like a nice easy number to scale. Now, that seems a bit overkill, but everything was smooth. With that version, the update parameter was instead a raw figure: a float containing the number of seconds to update the world by (I used 0.01). Now it's a fraction of 50/3 milliseconds. So whilst 1.2 does give a perfect 20, I don't think this Newton likes that... As much as I don't really want to, I'll probably move the physics ticker back to 60 and remove the *1.2 from the update world. Although the figures should have added up to the same per second, it just seems smoother when sticking with 60 fps.
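Just to show the arithmetic I'm picturing (this is only how I imagine the two conventions map onto each other, not anything official from either wrapper):

#include <cstdio>

int main()
{
    const float engineStepSec = 1.0f / 60.0f;            //UpdateWorld(1.0) == 1/60 s

    float oldNewtonStep = 0.01f;                          //100 Hz, as with Kjelle's 1.32 wrapper
    float asMultiplier  = oldNewtonStep / engineStepSec;  //= 0.6

    float fiftyHzStep = 0.02f;                            //20 ms, i.e. 50 updates per second
    float fiftyHzMult = fiftyHzStep / engineStepSec;      //= 1.2

    printf("0.01 s -> UpdateWorld(%.2f)\n", asMultiplier);
    printf("0.02 s -> UpdateWorld(%.2f)\n", fiftyHzMult);
    return 0;
}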
-
Presumably this only affects animations, perhaps picking the closest key frame to the tween value? I would suppose that camera motion stuttering would be completely ignored by this tweening thing, as there is no keyframing. Just that maybe 1 in possibly every 5 frames, the camera appears to be moving forward by 0.5 instead of 0.25 - that's the issue I'm getting while using 1.2 multiples to effectively update the physics 50 times per second. It's not that noticeable, but still something I'd like to see go away.

I remember reading that the official line was simply to give AppSpeed() as the update world parameter, but I'm trying not to do that, as I can only picture the horrors of what could happen when my project turns into multiplayer - there'd be absolute hell trying to synchronise everyone. That's how I see it at this time, and so I would prefer to have a fixed number of world updates that every client must adhere to, which should prevent the server from having to do too much frame interpolation when verifying the integrity of a client's instructions. 50 is more scalable than 60, and it was only today when I saw the Acorn's frame rate of 50 that I had the idea. Whilst locking the frame rate at 50 is no problem at all, updating the physics at 50 seems to be more problematic. So, perhaps a variable physics update speed would be something for LE 3.0, in however many months/years it's ready?
-
I'm not meaning fps - according to fraps, it's nicely locked at 50 already. Not that I would force anyone to have 50 frames; that's their choice. I just prefer to update my world in 20 ms batches (and personally, I don't see any need to re-render if no updating has taken place, but I will always provide an option of 60 fps - or no frame-rate lock - so people can choose whichever they prefer). The pseudo-code I have so far is:

//Setting up game
while not QuitGame
    Get current time
    result = (current time) - (last render time)
    if result < 20
        //World not ready to be updated - sleep, to reduce CPU usage, then test again
        sleep 1
        continue //as in, its C++ meaning, rather than 'sleep 1 ms before proceeding'
    end if
    //Capture input
    //Apply forces to physics bodies
    NumUpdates = result / 20
    UpdateWorld(NumUpdates * 1.2) //Remove multiplication for 60 updates per second
    //Render
    last render time = current time
wend

With passing 1.2 or 2.4 to UpdateWorld, you can see that every couple of frames, the controller appears to move just fractionally further than it does at other times. I'm guessing that this happens when, for example, the world time is increased from 4.8 to 6 - maybe the engine is internally doing two updates here: once to bring it up from 4 to 5, and then again from 5 to 6, whereas the previous update would just go from 3 to 4 (because 3.6 is less than 4). Of course, without knowing the internals of what Josh has done, it's purely a guess, but I know that this doesn't happen when the * 1.2 is omitted.

Also, that's not quite how I determine if the world is ready to be updated or not. After all, if it was 39 ms between frames, the next update should occur only 1 ms later - but with that pseudo-code, it will wait another 20. My real code will take such a situation into account and only wait 1 ms; I just simplified it, because that's not where I'm having the minor issue.

I should also add that this isn't a bug report or a feature request. Just pointing out that it would be nice if I could provide a value like 20 ms and have it be totally smooth... The stutter is not that large, so most people would probably not find it to be a huge problem - but in the strive for perfection, you know...
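For reference, here's a rough C++ sketch of the kind of loop I mean, with the 39 ms case handled by an accumulator. It assumes the Leadwerks-style AppTime()/UpdateWorld()/Flip() calls used above (plus the usual engine set-up and headers); the rest is just my illustration, not a definitive implementation.

#include <windows.h> //for Sleep()

int main()
{
    //Engine, graphics and world set-up goes here

    const float StepMs = 20.0f;                  //50 world updates per second
    const float EngineStepMs = 1000.0f / 60.0f;  //UpdateWorld(1.0) == one 60 Hz step

    bool quitGame = false;
    float accumulator = 0.0f;
    float lastTime = AppTime();

    while(!quitGame)
    {
        float now = AppTime();
        accumulator += now - lastTime;
        lastTime = now;

        if(accumulator < StepMs)
        {
            Sleep(1);                            //not ready yet - spare the CPU
            continue;
        }

        while(accumulator >= StepMs)             //catch up if we fell behind
        {
            //Capture input, apply forces to physics bodies
            UpdateWorld(StepMs / EngineStepMs);  //1.2 for a 20 ms step
            accumulator -= StepMs;
        }

        //Lighting, post-processing, etc.
        Flip(0);
    }
    return 0;
}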
-
Afternoon all. Here's another rare case where I actually make one of my own threads. Hopefully this one, however, will not have an answer that could have been found in a matter of seconds by simply searching for different keywords.

I got a bit bored recently, and just for a nostalgic moment, I loaded up an app on my computer called Arculator. For those that don't know, it's an emulator for old Acorn Archimedes computers. At primary school we had loads of such computers - so any time not in lessons would usually be spent on one of them. Indeed, I had little to no interest in 'normal' activities for someone of that age (5 - 11). However, there's one thing that was missing from those computers which is present in the emulator - and that's the fraps FPS counter. Unusually, it seemed that the frame rate was locked at 50 fps, which in my mind was a bit of an unusual number; after all, isn't it normally 60? It didn't take long to realise that this was quite deliberate, and will probably have been a limitation hard coded into RiscOS itself. Why? Because it was a British computer, thus using our 50i TV frame rate, rather than the U.S. standard of 60i (59.94).

Indeed, I only really find frame rates of <20 to be really unplayable, so, with a bit of tweaking, I simply edited my current rendering code to 50 fps instead. After all, the time between renders is a perfect 20 ms, rather than [50/3] ms, so it's easier to calculate. Also, I changed my UpdateWorld() call to reflect the new trial frame rate:

UpdateWorld(NumUpdates);

to

UpdateWorld(NumUpdates * (1.2f));

Unfortunately, this new change does make the physics stutter a bit. Before, the values were always whole numbers (usually only ever 1, occasionally 2) and it was quite smooth, even when more than one update was required due to slow processing. Now, it's multiples of 1.2 - and even with about 2 - 4% CPU usage and a very constant 50 fps, it jumps about quite a bit.

So is this 60-updates thing a limit imposed by the engine, by Newton, or is there a hidden option to change the update world timestep to something else? Additionally, am I the only one who would prefer to work with numbers that are much easier to scale, like 1, 10, 50, 100, or even 1000?

[Edit] What's the chance I'll be told that the 2.3 update, which I passed on, would have given such a possibility?

And just for the sake of nostalgia for anyone else to, uh, enjoy (maybe?) here's a pic of some nice little explosions.
-
They had that as early as Tiberian Sun. For SP you got a load of wordy loading, but for MP you instead just got a coloured bar for each player (the colour being the colour they'd chosen before the game started). It turned out to be so efficient I believe they've used it ever since for the C&C series, for both single and multiplayer. I also think that style would be suitable for any game with even slightly long loading times. But for the record, that does not need multi-threading - other people's bars do not need to be updated smoothly. You only need to update their bars at the time you update your own (or once you've finished loading and are waiting for everyone else).
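Something like this single-threaded sketch is all it takes (SendProgress, ReadProgress and DrawBar are made-up names standing in for whatever networking and 2D drawing you're using; only the structure is the point):

#include <string>
#include <vector>

void LoadOneAsset(const std::string& path);    //blocking load (LoadModel etc.)
void SendProgress(float myProgress);           //tell the other players how far along we are
void ReadProgress(std::vector<float>& others); //their last-known values, however stale
void DrawBar(int slot, float progress);        //draw one coloured bar

void LoadingScreen(const std::vector<std::string>& assets, int playerCount)
{
    std::vector<float> others(playerCount, 0.0f);
    for(size_t i = 0; i < assets.size(); ++i)
    {
        LoadOneAsset(assets[i]);                //one blocking load at a time
        float mine = float(i + 1) / assets.size();

        SendProgress(mine);                     //bars only change when we get here
        ReadProgress(others);
        DrawBar(0, mine);                       //our bar
        for(int p = 1; p < playerCount; ++p)
            DrawBar(p, others[p]);              //their bars - they jump, but that's fine
        //Flip(0) or equivalent to present the frame
    }
}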
-
That's a case where I think threaded loading would be advantageous - but it's not supported. In the case of an MMO, having a nice 60 fps slow down to about 15 - 20 for a few seconds while resources are loaded would, I think, be better than having a pause while the resources are loaded, followed by the inevitable lag during the catch-up. For other games, it's less of an issue. A loading screen with a progress bar does not need to be threaded, and indeed, for FPS, RTS and most others, loading everything up front is viable.

The largest problem with loading all resources up front is the amount of RAM needed. An MMO that has lots of objects, and needs them all loaded, would need lots of RAM, and aggressively swapping between RAM and the pagefile can cause further stutters (which we have no control over). For other games, there aren't (normally) as many resources to load, they can all fit in RAM nicely, and for most levels, "mission-specific" items do not need to be loaded at all. For an MMO, you are probably best loading resources only when needed, and making them as small as possible to reduce the pause while loading things.
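Roughly how I'd picture the 'only when needed' approach - a minimal sketch assuming LE2-style LoadModel()/TModel; the cache itself is just my illustration:

#include <map>
#include <string>

std::map<std::string, TModel> g_modelCache;

TModel GetModel(const std::string& path)
{
    std::map<std::string, TModel>::iterator it = g_modelCache.find(path);
    if(it != g_modelCache.end())
        return it->second;                 //already resident - no loading pause at all

    TModel model = LoadModel(path);        //first use - pay the disk cost once
    g_modelCache[path] = model;
    return model;
}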
-
I thought this was the matrix
-
I don't know - I don't use 3ds max. I export models through Ultimate Unwrap 3D - pretty nifty little tool...
-
You need this file to make that tutorial work: http://developer.leadwerks.com/Tutorials/CPP/Loading_A_Scene_Files.zip But the file is password protected. You might need to ask for a password if you've not received it already... (The password is NOT your registration code.)

There might be some small games, but I've not heard of any big games that are finished yet. This engine can be used to make an RPG. The only real limit to this engine is how much time you can put into it; making an RPG is no quick thing...
-
There is no 'correct' modelling app. Whatever modeller you use, Ultimate Unwrap 3D should be able to read your mesh with animation intact - then use UU3D's gmf plug-in. With that set up, you can use:
Free - Blender
Cheap - Milkshape
Expensive - MAX
or just about any other modeller in existence.
-
I know - but it's useless. I get told that my Vista laptop is **** because I choose to have Aero disabled (a score of dead-on '3'). The next lowest score I think was 3.8, with the average being about 4... It's been so long since I ran it that I don't remember the scores exactly. But if it's going to mark you down based on your visual preferences, then it's not worth bothering with. I would want a system that reflects my computer's capabilities, not its active settings.
-
I didn't mean that - I meant that obviously, for prebuilt, off-the-shelf machines, the onus would be on the manufacturer to apply the grade, whereas a custom build would either require the full system specs to be listed on the game (as they currently are) or there would have to be a way to obtain the grade. I would guess that the grading idea would totally replace minimum specifications if it ever caught on. That's what I was thinking you were going to suggest, but I thought I'd ask in case you had a different idea.
-
How would it keep up with time, though? I'm not opposed to the idea at all, but let's say that I release a grade 3 game tomorrow, and so all grade 3 computers can play it. Next year, another grade 3 game comes out, but the hardware from last year's grade 3 computers isn't really up to the task anymore... Would there be a date system whereby your computer is grade 3 as of the manufacture date (say, 07 May 2010), and the game is labelled as requiring at least a grade 3 computer from 07 May 2010 or earlier? Would a grade 3, this time next year, be re-branded as a grade 8 (assuming there are 5 grades)? That way there's no confusion with people wondering "which grade 3?" - but if so, what would the largest number be? You just couldn't have people rolling 2^64-1 off the tip of their tongue. Or is there some other way I've not thought of?

Additionally, where would grades be obtained for custom-built machines? I certainly prefer to mix and match the components I want, and just wish it was as easy for laptops as it is for fixed machines... It's one of the downsides of IBM's original idea. With a console, for a 360 game, you just need a 360, because the hardware is static and virtually can't be changed (whilst keeping the warranty intact).