
Canardia

Developers
  • Posts: 4,127
Everything posted by Canardia

  1. I'm using SDL_net, because I need SDL for many other things which LE doesn't support either, like joysticks, force feedback, console controllers, etc. With SDL_net I have reliable and fast cross-platform UDP networking.
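     A minimal sketch of sending one UDP packet with SDL_net; the destination address and port here are just placeholders, not anything from LE or a real server:

     #include "SDL.h"
     #include "SDL_net.h"
     #include <cstring>

     int main(int argc, char* argv[])
     {
         if (SDL_Init(0) < 0 || SDLNet_Init() < 0) return 1;

         UDPsocket sock = SDLNet_UDP_Open(0);           // 0 = any free local port
         IPaddress dest;
         SDLNet_ResolveHost(&dest, "127.0.0.1", 2000);  // placeholder host and port

         UDPpacket* p = SDLNet_AllocPacket(512);
         const char* msg = "hello";
         memcpy(p->data, msg, strlen(msg));
         p->len = (int)strlen(msg);
         p->address = dest;
         SDLNet_UDP_Send(sock, -1, p);                  // -1 = not bound to a channel

         SDLNet_FreePacket(p);
         SDLNet_UDP_Close(sock);
         SDLNet_Quit();
         SDL_Quit();
         return 0;
     }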
  2. You could also install Debian as multi-boot on the iMac. I would also buy a Mac if they had Motorola or PowerPC CPUs, but with Intel it's just the same as a PC.
  3. Seems like your makefile is missing the luabind libraries.
  4. It was at some Finnish training center ( www.tieturi.fi ), but if you follow the wiki link above, you could also learn those things yourself.
  5. Looks like you're trying to link the same file twice.
  6. You should use g++, which is GNU's C++ compiler driver; gcc defaults to compiling C and doesn't link the C++ standard library automatically.
  7. I think the LE 3 implicit headers could also be OOP, since it seems only OOP languages support them anyway.
  8. There are basically two ways to use a DLL (or DyLib): implicit or explicit linking. In LE 2 we had to use explicit linking (typedefs, LoadLibrary), because it was created with BlitzMax. In LE 3 we can also use implicit linking (drag and drop the lib file, and the DLL loads automatically), but we still need an explicitly linked DLL for other languages which don't support this. At least C, C++ and C# support this, perhaps also Delphi. http://msdn.microsoft.com/en-us/library/9yd93633.aspx
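     A minimal sketch of the explicit approach on Windows; the DLL name and the exported CreateWorld function are made up for illustration, not the LE API:

     #include <windows.h>
     #include <cstdio>

     // Signature of a hypothetical exported function.
     typedef int (*CreateWorldFunc)(int width, int height);

     int main()
     {
         // Explicit linking: load the DLL and resolve the symbol at runtime.
         HMODULE dll = LoadLibraryA("engine.dll");          // hypothetical DLL name
         if (!dll) { printf("DLL not found\n"); return 1; }

         CreateWorldFunc CreateWorld =
             (CreateWorldFunc)GetProcAddress(dll, "CreateWorld");
         if (CreateWorld) printf("World id: %d\n", CreateWorld(256, 256));

         FreeLibrary(dll);
         return 0;
     }

     With implicit linking you would instead add the import library to the linker inputs (for example #pragma comment(lib, "engine.lib")), include the header and call CreateWorld() directly; Windows then loads the DLL automatically at startup.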
  9. Yeah, but also extreme class usage, like a separate class for each state, for example On/Off.
  10. One thing is also that the physics debug wireframes are not accurate, they are only a rough approximation, so you shouldn't trust what they show, as the physics bodies might actually be at your mesh already. You can probably best test this by turning off physics debug mode and seeing if the meshes collide accurately with each other at high speeds. Just make sure you don't move a body more than its own size per step, because then there won't be any collision; even if swept collision tries to fix that, it doesn't always work.
  11. I just had a 3 day course about Advanced C++ programming. It was pretty cool, and I enjoyed every second of it. I already knew most of the C++ language itself, but I've never taken any course about the different ways of programming, and which way suits each solution best. This was exactly what I wanted to learn, and I feel much more professional now. The course covered the following things, and for most of them we also had to write real C++ code, compile it and run it. It was organized pretty smart, as many topics built upon a previous topic and the code we wrote earlier. That was fun also:

     Design Patterns: different models and their usage, the GoF models (GoF = Gang of Four: http://en.wikipedia.org/wiki/Design_Patterns )
       • Creational Patterns: Abstract Factory, Builder, Factory Method, Prototype, Singleton
       • Structural Patterns: Adapter, Bridge, Composite, Decorator, Facade, Flyweight, Proxy
       • Behavioral Patterns: Chain of Responsibility, Command, Interpreter, Iterator, Mediator, Memento, Observer, State, Strategy, Template Method, Visitor
     Antipatterns: usual problems in class design, and solutions to those problems

     Each of those topics exposed a different way of programming in C++, and it was pretty amazing what kind of features are hidden in C++, of which most people probably never thought of before. I had no idea how powerful and easy C++ really is, when you use it the real C++ way. Basically it was like: you should use classes for everything, and never do big if() constructs and other hardcoded, procedural logic. Doing things right in C++ also gives a huge speed boost in execution and programming time, and makes code really reusable, as they explained why each const, virtual, static, volatile, mutable, stack, heap, algorithm, etc. is useful.
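     As a small illustration of the State pattern from that list (and the separate On/Off classes mentioned a post above), here is a minimal sketch; the Lamp/LampState names are made up for the example:

     #include <cstdio>

     class LampState {
     public:
         virtual ~LampState() {}
         virtual LampState* Toggle() = 0;   // each state returns its successor
         virtual void Report() const = 0;
     };

     class OnState : public LampState {
     public:
         LampState* Toggle();
         void Report() const { printf("Lamp is on\n"); }
     };

     class OffState : public LampState {
     public:
         LampState* Toggle();
         void Report() const { printf("Lamp is off\n"); }
     };

     // The states decide the transitions; no if() chains in the lamp itself.
     LampState* OnState::Toggle()  { delete this; return new OffState(); }
     LampState* OffState::Toggle() { delete this; return new OnState(); }

     class Lamp {
         LampState* state;
     public:
         Lamp() : state(new OffState()) {}
         ~Lamp() { delete state; }
         void Toggle() { state = state->Toggle(); }
         void Report() const { state->Report(); }
     };

     int main() {
         Lamp lamp;
         lamp.Report();   // Lamp is off
         lamp.Toggle();
         lamp.Report();   // Lamp is on
         return 0;
     }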
  12. You can also call Sleep(0) for times below 2ms, and just make a while loop to check if the wanted time has elapsed. It will still give CPU cycles to the OS, and it works pretty nicely. I've already made an abstract BaseApplication class, but I think I want to make it even cleaner C++, and also make an abstract Timer class. What I also noticed is that Sleep() is not accurate even with higher values, like values of 100 or more, because it still keeps jumping between the wanted time and +1ms. But I found a solution for that problem too, which I implemented into my abstract class: you let Sleep() only do the wanted time minus 10ms, and sync the last 10ms yourself in your loop (a sketch of the idea is below). Sure, it will take a few more CPU cycles, but since most users have a bloated Windows with more than 50 processes running anyway, it won't make any difference. I have only 14 processes running, and I even deleted (renamed) explorer.exe because it took like 5 minutes of CPU time when booting. It's totally useless anyway, as Windows works fine with taskman.exe only
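     A minimal sketch of that idea, assuming timeBeginPeriod(1) has been set so timeGetTime() ticks at 1ms (see the timer posts further down); the AccurateSleep name is just for the example:

     #include <windows.h>
     #pragma comment(lib, "winmm.lib")

     // Sleep most of the interval with Sleep(), then sync the last ~10ms
     // in a loop that still yields CPU cycles to the OS with Sleep(0).
     void AccurateSleep(DWORD ms)
     {
         DWORD start = timeGetTime();
         if (ms > 10) Sleep(ms - 10);              // coarse part, handled by the OS
         while (timeGetTime() - start < ms)
             Sleep(0);                             // fine part, busy-wait but yield
     }

     int main()
     {
         timeBeginPeriod(1);                       // 1ms timer resolution
         DWORD t1 = timeGetTime();
         AccurateSleep(16);
         DWORD t2 = timeGetTime();
         // t2 - t1 should now be 16, not 15 or 31
         timeEndPeriod(1);
         return 0;
     }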
  13. You should take the value the function returns.
  14. With the new timing, you can make your own accurate Sleep() function, which really sleeps for 1ms, and not 2.
  15. Oh cool, the same Linux article says at the end that it works the same way on Mac too. So now you have 1ms accuracy on Windows, Linux and Mac! And since basically all phones run Linux (only the iPhone runs Mac OS), you can use the same code for all phones too.
  16. I found an article about how to do it on Linux too: http://stackoverflow.com/questions/588307/c-obtaining-milliseconds-time-on-linux-clock-doesnt-seem-to-work-properly Now you have 1ms accuracy on Windows and Linux!

     #include <sys/time.h>
     #include <stdio.h>

     long timeGetTime();
     long system_starttime=timeGetTime();

     long timeGetTime()
     {
         struct timeval start;
         long mtime;
         gettimeofday(&start, NULL);
         mtime = (1000*start.tv_sec + start.tv_usec/1000.0) + 0.5 - system_starttime;
         return mtime;
     }

     int main()
     {
         double t1,t2,tt;
         for(int z=0;z<10;z++)
         {
             t1=timeGetTime();
             t2=timeGetTime();
             while(t2<=t1)t2=timeGetTime();
             tt=t2-t1;
             printf("%17.8f %17.8f %17.8f\n", t1/1000.0, t2/1000.0, tt/1000.0);
         }
         return 0;
     }

     Output:
     0.00000000   0.00100000   0.00100000
     0.00100000   0.00200000   0.00100000
     0.00200000   0.00300000   0.00100000
     0.00300000   0.00400000   0.00100000
     0.00400000   0.00500000   0.00100000
     0.00500000   0.00600000   0.00100000
     0.00600000   0.00700000   0.00100000
     0.00700000   0.00800000   0.00100000
     0.00800000   0.00900000   0.00100000
     0.00900000   0.01000000   0.00100000
  17. Yeah, but by default timeGetTime() is set to 15ms accuracy, so you need to change the timer accuracy to 1ms using timeBeginPeriod(1) (or better, the formula using timeGetDevCaps, to make sure 1 is allowed). There's also another funny thing: when you change the accuracy with timeBeginPeriod() to 1ms, it doesn't take effect until after a 2ms delay, but that should not be a real problem, since it can be done when the engine starts.
  18. I found some code on Microsoft site: http://msdn.microsoft.com/en-us/library/dd743626(v=VS.85).aspx http://msdn.microsoft.com/en-us/library/dd757629(VS.85).aspx Now you have 1ms accuracy in C++ too! Still need to test it on Linux.

     #include "stdio.h"
     #include "time.h"
     #include "windows.h"
     #pragma comment(lib,"winmm.lib")

     int main()
     {
         #define TARGET_RESOLUTION 1         // 1-millisecond target resolution
         TIMECAPS tc;
         UINT wTimerRes;
         if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR)
         {
             // Error; application can't continue.
         }
         wTimerRes = min(max(tc.wPeriodMin, TARGET_RESOLUTION), tc.wPeriodMax);
         timeBeginPeriod(wTimerRes);

         // Actually the above is not needed, since the following command alone works also,
         // but maybe it's still better to use the above, who knows how Microsoft has
         // designed Windows:
         //
         // timeBeginPeriod(1);

         double t1,t2,tt;
         for(int z=0;z<10;z++)
         {
             t1=clock();
             t2=clock();
             while(t2<=t1)t2=clock();        // wait for smallest change (15ms on windows)
             tt=(t2-t1);
             printf("%17.8f %17.8f %17.8f\n", t1/(double)CLOCKS_PER_SEC, t2/(double)CLOCKS_PER_SEC, tt/(double)CLOCKS_PER_SEC);
         }
         printf("-------------------------------------------------------\n");
         for(int z=0;z<10;z++)
         {
             t1=GetTickCount();
             t2=GetTickCount();
             while(t2<=t1)t2=GetTickCount(); // wait for smallest change (15ms on windows)
             tt=(t2-t1);
             printf("%17.8f %17.8f %17.8f\n", t1/1000.0, t2/1000.0, tt/1000.0);
         }
         printf("-------------------------------------------------------\n");
         for(int z=0;z<10;z++)
         {
             t1=timeGetTime();
             t2=timeGetTime();
             while(t2<=t1)t2=timeGetTime();  // wait for smallest change (1ms on windows)
             tt=t2-t1;
             printf("%17.8f %17.8f %17.8f\n", t1/1000.0, t2/1000.0, tt/1000.0);
         }
         return 0;
     }

     Output:
     0.00000000   0.01500000   0.01500000
     0.01500000   0.03100000   0.01600000
     0.03100000   0.04600000   0.01500000
     0.04600000   0.06200000   0.01600000
     0.06200000   0.07800000   0.01600000
     0.07800000   0.09300000   0.01500000
     0.09300000   0.10900000   0.01600000
     0.10900000   0.12500000   0.01600000
     0.12500000   0.14000000   0.01500000
     0.14000000   0.15600000   0.01600000
     -------------------------------------------------------
     7210.39000000   7210.40600000   0.01600000
     7210.40600000   7210.42100000   0.01500000
     7210.42100000   7210.43700000   0.01600000
     7210.43700000   7210.45300000   0.01600000
     7210.45300000   7210.46800000   0.01500000
     7210.46800000   7210.48400000   0.01600000
     7210.48400000   7210.50000000   0.01600000
     7210.50000000   7210.51500000   0.01500000
     7210.51500000   7210.53100000   0.01600000
     7210.53100000   7210.54600000   0.01500000
     -------------------------------------------------------
     7210.53400000   7210.53500000   0.00100000
     7210.53500000   7210.53600000   0.00100000
     7210.53600000   7210.53700000   0.00100000
     7210.53700000   7210.53800000   0.00100000
     7210.53800000   7210.53900000   0.00100000
     7210.53900000   7210.54000000   0.00100000
     7210.54000000   7210.54100000   0.00100000
     7210.54100000   7210.54200000   0.00100000
     7210.54200000   7210.54300000   0.00100000
     7210.54300000   7210.54400000   0.00100000
  19. BlitzMax has 1ms accuracy, so there must be a better way still:

     SuperStrict
     Local t1:Double, t2:Double, tt:Double;
     For Local z:Int=0 To 9
         t1=MilliSecs();
         t2=MilliSecs();
         While(t2<=t1) t2=MilliSecs(); Wend
         tt=t2-t1;
         Print t1/1000:Double+" "+t2/1000:Double+" "+tt/1000:Double;
     Next
     End

     Output:
     5836.3750000000000   5836.3760000000002   0.0010000000000000000
     5836.3760000000002   5836.3770000000004   0.0010000000000000000
     5836.3770000000004   5836.3779999999997   0.0010000000000000000
     5836.3779999999997   5836.3789999999999   0.0010000000000000000
     5836.3789999999999   5836.3800000000001   0.0010000000000000000
     5836.3800000000001   5836.3810000000003   0.0010000000000000000
     5836.3810000000003   5836.3819999999996   0.0010000000000000000
     5836.3819999999996   5836.3829999999998   0.0010000000000000000
     5836.3829999999998   5836.3840000000000   0.0010000000000000000
     5836.3840000000000   5836.3850000000002   0.0010000000000000000
  20. The new thing is that clock() starts at 0 when the program starts, while GetTickCount() starts from when the computer was started. So when you compare seconds as float or double (especially as float), you will get accuracy problems very soon. Also new is that on Linux clock() has 10ms accuracy, while the Windows commands don't work at all on Linux, since they don't exist in ANSI standard C++:

     0.00000000   0.01000000   0.01000000
     0.01000000   0.02000000   0.01000000
     0.02000000   0.03000000   0.01000000
     0.03000000   0.04000000   0.01000000
     0.04000000   0.05000000   0.01000000
     0.05000000   0.06000000   0.01000000
     0.06000000   0.07000000   0.01000000
     0.07000000   0.08000000   0.01000000
     0.08000000   0.09000000   0.01000000
     0.09000000   0.10000000   0.01000000
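     A quick illustration of that float precision problem (the tick counts are just example values): a float holds about 7 significant digits, so once a millisecond counter like GetTickCount() grows past a few hours of uptime, 1ms differences get rounded away:

     #include <cstdio>

     int main()
     {
         // Example tick counts (milliseconds since boot), 1ms apart, ~28 hours of uptime.
         long a = 100000000, b = 100000001;

         float  fa = (float)a,  fb = (float)b;
         double da = (double)a, db = (double)b;

         printf("float  diff: %f\n", fb - fa);  // prints 0.000000 - the 1ms is lost
         printf("double diff: %f\n", db - da);  // prints 1.000000
         return 0;
     }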
  21. That article is quite useless since it doesn't even mention the clock() command. On all machines I've tried, clock() has the smallest millisecond accuracy on the machine. I made a test program which shows that the Windows API calls use the same resolution as clock(). However clock() is more accurate since it starts at 0, so there will be no double/float precision errors when measuring the result:

     #include "stdio.h"
     #include "time.h"
     #include "windows.h"

     int main()
     {
         double t1,t2,tt;
         for(int z=0;z<10;z++)
         {
             t1=clock();
             t2=clock();
             while(t2<=t1)t2=clock();        // wait for smallest change (15ms on windows)
             tt=(t2-t1);
             printf("%17.8f %17.8f %17.8f\n", t1/(double)CLOCKS_PER_SEC, t2/(double)CLOCKS_PER_SEC, tt/(double)CLOCKS_PER_SEC);
         }
         printf("-------------------------------------------------------\n");
         for(int z=0;z<10;z++)
         {
             t1=GetTickCount();
             t2=GetTickCount();
             while(t2<=t1)t2=GetTickCount(); // wait for smallest change (15ms on windows)
             tt=(t2-t1);
             printf("%17.8f %17.8f %17.8f\n", t1/1000.0, t2/1000.0, tt/1000.0);
         }
         return 0;
     }

     Output:
     0.00000000   0.01500000   0.01500000
     0.01500000   0.03100000   0.01600000
     0.03100000   0.04600000   0.01500000
     0.04600000   0.06200000   0.01600000
     0.06200000   0.07800000   0.01600000
     0.07800000   0.09300000   0.01500000
     0.09300000   0.10900000   0.01600000
     0.10900000   0.12500000   0.01600000
     0.12500000   0.14000000   0.01500000
     0.14000000   0.15600000   0.01600000
     -------------------------------------------------------
     22676.37500000   22676.39000000   0.01500000
     22676.39000000   22676.40600000   0.01600000
     22676.40600000   22676.42100000   0.01500000
     22676.42100000   22676.43700000   0.01600000
     22676.43700000   22676.45300000   0.01600000
     22676.45300000   22676.46800000   0.01500000
     22676.46800000   22676.48400000   0.01600000
     22676.48400000   22676.50000000   0.01600000
     22676.50000000   22676.51500000   0.01500000
     22676.51500000   22676.53100000   0.01600000
  22. Hehe, I had a lot of fun watching this video tutorial, knowing the backgrounds. Look who's talking. The tutorial is now accurate though. Sometimes you also need to curve the AppSpeed(), but that is not always needed; it was needed in some specific version though.
  23. If you use double instead of float, you get about 5 times faster speed (you can verify it with GCC or VS2008 by timing the execution of a few float vs double calculations, as in the sketch below): double ratio = 200.0 / 800.0;
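     A minimal harness for that kind of measurement; the loop count and the operations are arbitrary, and the result depends heavily on the compiler, optimization flags and CPU, so the 5x figure is not universal:

     #include <cstdio>
     #include <ctime>

     int main()
     {
         const int N = 100000000;

         // Time N float divisions (volatile so the compiler can't optimize the loop away).
         volatile float f = 0.0f;
         clock_t t1 = clock();
         for (int i = 0; i < N; i++) f = f + 1.0f / (float)(i + 1);
         clock_t t2 = clock();
         printf("float : %f s\n", (t2 - t1) / (double)CLOCKS_PER_SEC);

         // Time N double divisions.
         volatile double d = 0.0;
         clock_t t3 = clock();
         for (int i = 0; i < N; i++) d = d + 1.0 / (double)(i + 1);
         clock_t t4 = clock();
         printf("double: %f s\n", (t4 - t3) / (double)CLOCKS_PER_SEC);

         return 0;
     }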
  24. It was Josh. I think I will make my own website with advanced tutorials, since I'm tired of other people's web sites changing so often.