On the topic of next steps with performance and how far UltraEngine has come since Leadwerks, now that there is an ECS system, I wanted to ask you @Josh if you have ever checked into Unity DOTS? I first started reading about it a year or so ago, and it seemed like an interesting implementation of ECS. The TL;DR as I understand it is that you hand-craft your component layouts to be cache friendly, allocate raw arrays of component data, and process them in a hardware-friendly way, then reap crazy performance benefits over the traditional OOP design people use to implement ECS, which results in memory being scattered about (and thus high cache-miss rates). What I'm most uncertain about is how to actually apply that in practice to a real game as opposed to a tech demo. This video was by far the most comprehensive on the details of data layout optimizations and their impact on ECS: [video embed]. I was a bit skeptical about real-world results, but I came across a demo I was able to make some minor updates to and test locally in Unity, and sure enough, it was pretty legit: [video embed]. Finally, here's an interesting larger-scale crowd simulation demo someone made: [video embed]. I wonder if anything like this would further help UltraEngine with even more scale?
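For a rough idea of the data-layout difference, here's a minimal C++ sketch (my own illustration, not Unity or UltraEngine code; all names are made up):

#include <vector>
#include <cstddef>

// Scattered, OOP-style ECS: each entity owns individually heap-allocated
// components, so iterating over all positions chases pointers around the
// heap and misses cache constantly.
struct Position { float x, y, z; };
struct Velocity { float x, y, z; };

struct Entity
{
    Position* position = nullptr; // allocated separately with new
    Velocity* velocity = nullptr;
};

// DOTS-style layout: one tightly packed array per component type.
// An entity is just an index; iteration is a linear sweep over memory
// the hardware prefetcher can stream, which is where the speedup comes from.
struct World
{
    std::vector<Position> positions;
    std::vector<Velocity> velocities;

    void Integrate(float dt)
    {
        for (std::size_t i = 0; i < positions.size(); ++i)
        {
            positions[i].x += velocities[i].x * dt;
            positions[i].y += velocities[i].y * dt;
            positions[i].z += velocities[i].z * dt;
        }
    }
};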
-
I'm on Windows 10, and the WM_PAINT message did arrive on invalidation when clicking in the middle of the thin border between windows, but only for the parent window and not the child window. I had some code call PeekMessage to check what was going on before UltraEngine processed the events, which is how I arrived at the conclusion not to check the window source. I can't attach an MP4 file, but here's an example: https://gyazo.com/71f17118ad6d81f388b0f748257570cc I wonder if making the main window the render window, and a second window the UI window, would result in the intended behavior without additional modifications?
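Roughly the kind of check I mean (a from-memory sketch, not my exact code; it just peeks at pending WM_PAINT messages for the current thread without consuming them, so the engine's event pump still sees everything):

#include <Windows.h>
#include <cstdio>

// Peek at a pending WM_PAINT without removing it from the queue,
// and log which HWND it is actually targeted at.
void LogPendingPaint()
{
    MSG msg = {};
    if (PeekMessage(&msg, NULL, WM_PAINT, WM_PAINT, PM_NOREMOVE))
    {
        char title[256] = {};
        GetWindowTextA(msg.hwnd, title, sizeof(title));
        printf("WM_PAINT pending for hwnd=%p (\"%s\")\n", (void*)msg.hwnd, title);
    }
}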
-
@reepblue's code seems to work great, because it avoids trying to solve the issue of "when to re-render". I think this is the preferred way to go about an event-driven system like this on Windows. I feel the Example 3 code is missing a lot of common render invalidation events (conceptually speaking), which are causing the issues you are seeing. I ran the example and can reproduce the exact issues you are having, so I think it's code related and not hardware/driver related. For reference, I have an RTX 3080 Ti, also using 527.56 drivers, running 2x 4K monitors.

For example, on Windows, I feel you should dirty the render view to cause a redraw when:
- The app is moved
- The app is resized
- The parent window repaints

Consider the following modified Example 3:

#include "UltraEngine.h"

using namespace UltraEngine;

const int SidePanelWidth = 200;
const int Indent = 8;

// Callback function for resizing the viewport
bool ResizeViewport(const Event& ev, shared_ptr<Object> extra)
{
    // If the window resize event is captured
    auto window = ev.source->As<Window>();

    // Get the new size of the application's window
    iVec2 sz = window->ClientSize();

    auto viewport = extra->As<Window>();

    // Set the position and size of the viewport window
    viewport->SetShape(SidePanelWidth, Indent, sz.x - SidePanelWidth - Indent, sz.y - Indent * 2);
    return true;
}

//Custom event ID
const EventId EVENT_VIEWPORTRENDER = EventId(101);

int main(int argc, const char* argv[])
{
    // Disable asynchronous rendering so window resizing will work with 3D graphics
    AsyncRender(false);

    // Get the available displays
    auto displays = GetDisplays();

    // Create a window
    auto window = CreateWindow("Ultra Engine", 0, 0, 1280, 720, displays[0], WINDOW_CENTER | WINDOW_TITLEBAR | WINDOW_RESIZABLE);

    // Create user interface
    auto ui = CreateInterface(window);

    // Get the size of the user interface
    iVec2 sz = ui->background->ClientSize();

    // Create a treeview widget
    auto treeview = CreateTreeView(Indent, Indent, SidePanelWidth - Indent * 2, sz.y - Indent * 2, ui->root);

    // Anchor left, top, and bottom of treeview widget
    treeview->SetLayout(1, 0, 1, 1);

    // Add nodes to the treeview widget
    treeview->root->AddNode("Object 1");
    treeview->root->AddNode("Object 2");
    treeview->root->AddNode("Object 3");

    // Create a viewport window
    auto viewport = CreateWindow("", SidePanelWidth, Indent, sz.x - SidePanelWidth - Indent, sz.y - Indent * 2, window, WINDOW_CHILD);

    // Adjust the size of the viewport when the application's window is resized (this will call back to our ResizeViewport() function)
    ListenEvent(EVENT_WINDOWSIZE, window, ResizeViewport, viewport);

    // Create a framebuffer
    auto framebuffer = CreateFramebuffer(viewport);

    // Create a world
    auto world = CreateWorld();

    // Create a camera
    auto camera = CreateCamera(world);
    camera->SetClearColor(0.125);
    camera->SetPosition(0, 0, -4);

    // Create a light
    auto light = CreateBoxLight(world);
    light->SetRotation(35, 45, 0);
    light->SetRange(-10, 10);

    // Create a model
    auto model = CreateSphere(world);
    model->SetColor(0, 0, 1);

    // This variable will be used for viewport refreshing
    bool dirty = false;

    // Main loop
    while (true)
    {
        // Wait for event
        const Event ev = WaitEvent();

        // Evaluate event
        switch (ev.id)
        {
        case EVENT_WINDOWMOVE:
            Print("Window move");
            if (not dirty)
            {
                dirty = true;
                EmitEvent(EVENT_VIEWPORTRENDER, viewport);
                Print("viewport refresh");
            }
            break;
        case EVENT_WINDOWSIZE:
            Print("Window size");
            if (not dirty)
            {
                dirty = true;
                EmitEvent(EVENT_VIEWPORTRENDER, viewport);
                Print("viewport refresh");
            }
            break;
        //Close window when escape key is pressed
        case EVENT_KEYDOWN:
            if (ev.source == window and ev.data == KEY_ESCAPE) return 0;
            break;
        case EVENT_WINDOWCLOSE:
            if (ev.source == window) return 0;
            break;
        case EVENT_WINDOWPAINT:
            //if (ev.source == viewport)
            {
                Print("Window paint");
                if (not dirty)
                {
                    // This prevents excessive paint events from building up, especially during window resizing.
                    // This event is added to the end of the event queue, so if a lot of paint events build up,
                    // it will only cause a single render to be performed.
                    dirty = true;
                    EmitEvent(EVENT_VIEWPORTRENDER, viewport);
                    Print("viewport refresh");
                }
            }
            break;
        case EVENT_VIEWPORTRENDER:
            world->Render(framebuffer);
            dirty = false;
            Print("Viewport render");
            break;
        }
    }
    return 0;
}

This should solve the recent issues you've posted about because:
- If you move the window offscreen and back on, that counts as a move, which triggers a render, ensuring the viewport doesn't stay invalidated from being offscreen.
- When you resize the window, the render view gets invalidated as expected during the resize itself, but it will redraw once you complete the resize.
- When you click in the empty space between the two windows, the parent window gets a paint message in such a way that the child is not currently invalidated but should be, which is why the viewport disappears. That behavior is fixed by not checking the message source in the code for the paint message, which should also fix other invalidation issues stemming from the parent.

On one hand, maybe the engine could get some event processing changes to specifically address some of these issues, but in my experience, trying to track down and understand obscure Windows event behaviors is usually not worth it. I just think it's far easier to model your code like reepblue did and avoid most of those issues in the first place by always updating/rendering. The other alternative is to render selectively to an image, and then use simpler but more comprehensive Win32 message processing to ensure only the minimal amount of redrawing happens outside of a rendering context, where you can manage the HDC of a single HWND and not deal with the different behaviors of messages across a parent/child window setup. It doesn't sound like that's what you're after here, but if you wanted to minimize 3D rendering to limit graphics resource usage, I'd consider something like that.
-
CreateActor causing heap corruption when used on a Camera
Drew_Benton replied to Drew_Benton's topic in Bug Reports
You're right. Upon digging even more and deleting various cached files and removing extra source code, the real problem was that my 'ComponentSystem.h' was generated once, but didn't get regenerated again by the pre-processor:

// This file is generated by the pre-processor. Do not modify it.
// Generated on: Wed Jan 18 00:48:19 2023

However, my 'ComponentSystem.cpp' file did:

// This file is generated by the pre-processor. Do not modify it.
// Generated on: Wed Jan 18 10:31:21 2023

I realized this was the actual problem when I was getting compile errors for code that wasn't even in the project anymore. Upon deleting 'ComponentSystem.h' and getting it re-generated, the error now goes away.
-
CreateActor causing heap corruption when used on a Camera
Drew_Benton replied to Drew_Benton's topic in Bug Reports
I found the exact, replicable problem as I was zipping it up to attach for you. At one point, I wanted to switch between a few of the different examples, so I renamed "main.cpp" to "main2.cpp" and excluded it from my Solution. I then added a new "main.cpp", but I just now noticed Visual Studio put it in the root folder by default, instead of the "Source" folder. That was what was causing the problem, because as soon as I move "main.cpp" back into the "Source" folder, the issue goes away. Moving it back one level above "Source" triggers the same error again. Mystery solved! I'm still getting used to the new required project layout, so I'll have to remember to keep track of where new files get put!
-
CreateActor causing heap corruption when used on a Camera
Drew_Benton replied to Drew_Benton's topic in Bug Reports
Thank you for looking into this so quickly. I updated and tried again, but still had the same issue, so I tried what SpiderPig suggested: a fresh project seems to fix the issue. I know just how finicky C++ can get when code generation is involved (I noticed components need to be in the components folder for the pre-processor), so as I continue to experiment, I'll try this first from now on if I run into the problem again. Thanks!
-
UltraEngine Version: 1.0.1

Adding a CameraControls component to the Camera now seems to cause a heap corruption error on exit. The code I'm using is simply:

auto actor = CreateActor(camera);
actor->AddComponent<CameraControls>();

For example:

#include "UltraEngine.h"
#include "ComponentSystem.h"

using namespace UltraEngine;

int main(int argc, const char* argv[])
{
    //Get the displays
    auto displays = GetDisplays();

    //Create a window
    auto window = CreateWindow("Ultra Engine", 0, 0, 1280, 720, displays[0], WINDOW_CENTER | WINDOW_TITLEBAR);

    //Create a framebuffer
    auto framebuffer = CreateFramebuffer(window);

    //Create a world
    auto world = CreateWorld();

    //Create a camera
    auto camera = CreateCamera(world);
    camera->SetClearColor(0.125);
    camera->SetFov(70);
    camera->Move(0, 2, -8);

    auto actor = CreateActor(camera);
    actor->AddComponent<CameraControls>();

    //Create light
    auto light = CreateBoxLight(world);
    light->SetRotation(45, 35, 0);
    light->SetRange(-10, 10);

    //Load FreeImage plugin
    auto plugin = LoadPlugin("Plugins/FITextureLoader");

    //Model by PixelMannen
    //https://opengameart.org/content/fox-and-shiba
    auto model = LoadModel(world, "https://github.com/UltraEngine/Documentation/raw/master/Assets/Models/Characters/Fox.glb");
    model->SetScale(0.05);
    model->Animate(1);
    model->SetRotation(0, -90, 0);

    auto neck = model->skeleton->FindBone("b_Neck_04");
    Vec3 rotation;

    //Main loop
    while (window->Closed() == false and window->KeyDown(KEY_ESCAPE) == false)
    {
        world->Update();
        rotation.y = Cos(float(Millisecs()) / 10.0f) * 65.0f;
        neck->SetRotation(rotation);
        world->Render(framebuffer);
    }
    return 0;
}

This example, which was working in the YT video, now also exhibits this same issue: https://www.ultraengine.com/learn/Terrain_SetMaterial?lang=cpp

Message:

Stack Trace:

ucrtbased.dll!00007ffd7e4fce3d() Unknown
ucrtbased.dll!00007ffd7e500275() Unknown
> Ultra1_d.exe!operator delete(void * block) Line 38 C++
  Ultra1_d.exe!operator delete(void * block, unsigned __int64 __formal) Line 32 C++
  Ultra1_d.exe!std::_Ref_count_obj2<Actor>::`scalar deleting destructor'(unsigned int) C++
  Ultra1_d.exe!std::_Ref_count_obj2<Actor>::_Delete_this() Line 2053 C++
  Ultra1_d.exe!std::_Ref_count_base::_Decwref() Line 1119 C++
  Ultra1_d.exe!std::_Ptr_base<UltraEngine::ActorBase>::_Decwref() Line 1399 C++
  Ultra1_d.exe!std::weak_ptr<UltraEngine::ActorBase>::~weak_ptr<UltraEngine::ActorBase>() Line 2996 C++
  Ultra1_d.exe!UltraEngine::Entity::~Entity(void) Unknown
  Ultra1_d.exe!UltraEngine::Camera::~Camera(void) Unknown
  Ultra1_d.exe!UltraEngine::Camera::`vector deleting destructor'(unsigned int) Unknown
  Ultra1_d.exe!std::_Destroy_in_place<class UltraEngine::Camera>(class UltraEngine::Camera &) Unknown
  Ultra1_d.exe!std::_Ref_count_obj2<class UltraEngine::Camera>::_Destroy(void) Unknown
  Ultra1_d.exe!std::_Ref_count_base::_Decref() Line 1111 C++
  Ultra1_d.exe!std::_Ptr_base<UltraEngine::Camera>::_Decref() Line 1337 C++
  Ultra1_d.exe!std::shared_ptr<UltraEngine::Camera>::~shared_ptr<UltraEngine::Camera>() Line 1620 C++
  Ultra1_d.exe!main(int argc, const char * * argv) Line 57 C++
  Ultra1_d.exe!invoke_main() Line 79 C++
  Ultra1_d.exe!__scrt_common_main_seh() Line 288 C++
  Ultra1_d.exe!__scrt_common_main() Line 331 C++
  Ultra1_d.exe!mainCRTStartup(void * __formal) Line 17 C++
  kernel32.dll!00007ffeb3b87614() Unknown
  ntdll.dll!00007ffeb52226a1() Unknown

Thanks!
-
Ultra Engine SDK Now Available
Drew_Benton commented on Admin's blog entry in Ultra Software Company Blog
Congrats on this next impressive milestone, Josh! It's really respectable just how far you've come since the days of the BlitzMax Leadwerks version, and just how much time you've dedicated to this project. Checking my email, I see I first bought Leadwerks on Jan 3, 2009, and that seems like a lifetime ago now. I grabbed a yearly sub to UltraEngine to continue supporting the dream. Not sure if this year will finally be the year I seriously pursue a hobbyist gamedev project or not, but I enjoyed watching the "Introduction to Ultra Engine API" video on YT, and I still love the pragmatic way your API works, which was what got me into the original version to begin with. Anyways, keep up the great work. I look forward to an exciting year of further updates and awesome features!
-
The general rule of thumb is to never copy user/3rd-party header files or libs into the base compiler paths. Instead, you want to place it (the complete enet package) in a stable path that won't be changing, for example "C:\dev". Once you have it in a stable path, you can:

1. Click Tools->Options.
2. Choose Projects and Settings->VC++ Directories.
3. Choose "Library files" and add the directory that contains the library files to the end of the list.
4. Choose "Include files" and add the directory that contains the base paths to the header files to the end of the list.

Now you can #include <enet.h> and it will search the registered directories first and find it. Likewise, when the linker goes to search its paths for the library you added under Additional Dependencies, it'll go there first. See "What is the difference between #include <filename> and #include "filename"?" as well.

This method is for global libraries though - things you set once and want to reuse for many projects, maintaining only one version of the library. If you want to set up only the current project, what you do instead is:

1. Click Project->Properties.
2. Choose Configuration Properties->C/C++.
3. Click the rightmost "..." button inside the "Additional Include Directories" field.
4. Add the directory that contains the base path for the header files to the end of the list.
5. Choose Configuration Properties->Linker.
6. Click the rightmost "..." button inside the "Additional Library Directories" field.
7. Add the directory that contains the library files to the end of the list.
8. Repeat for each "Configuration" (Debug/Release, for example).

Now only the current project is set up to look for the library from a common path.

IMPORTANT NOTE: How the API is laid out determines which header folder you add. For example, it is advised to use the 'include' folder as the base header path (#include <enet/enet.h>) rather than using the 'enet' folder as the base path (#include <enet.h>). The reason you usually take the first approach is to avoid header file name conflicts.

The global approach is good for existing libraries that are common dependencies among many projects. The local approach is good for when you have different versions of the same library that you need to use in different projects. For example, there are many versions of ZLIB. If you need to use specific versions across different projects, then you have to set the paths for each library individually. If you were only using one version of ZLIB for all projects, then you could just set up the global paths instead.

One last important note: maintaining dependencies can be very tricky if you are not careful. For example, let's say you use the global approach. You migrate computers in the future and forget to save the exact version you were using. Or, you send the code to someone who does not have the same dependency in the path you did. All sorts of coding headaches can arise as a result. I myself would rather keep all dependencies in a workspace folder for the projects I am using them with, so when I back up the project or send it to someone, everything they need is right there. Also, if you go and change anything in the global path, you will need to recompile all projects that depended on it and retest them to make sure they do not break. I've seen a lot of projects get ruined by simple things like this, and I've made the mistake one too many times! Setting up a personal source code repository can help with the matter as well.
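As a quick sanity check that the paths are wired up, a minimal program like this should build and run (assuming ENet's standard API; on Windows you'd also link ws2_32.lib and winmm.lib along with the enet library):

#include <enet/enet.h>
#include <stdio.h>

int main()
{
    // enet_initialize() returns 0 on success; it must be called once
    // before using any other ENet function.
    if (enet_initialize() != 0)
    {
        fprintf(stderr, "ENet failed to initialize\n");
        return 1;
    }
    printf("ENet initialized; include and library paths are set up correctly.\n");
    enet_deinitialize();
    return 0;
}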
-
The only 3rd party tool out there that can do it is Ultimate Unwrap 3D. UU3D is a must-have tool, even outside of Leadwerks. Assuming you have it, you'd just download the "Leadwerks Engine (GMF)" plugin from their 3rd Party Plugins page. Then inside UU3D, you'd import your FBX (File->Open) and save it as a GMF file (File->Save As). Leadwerks requires DDS textures, so you'd have to make sure you account for that. There might be other limitations in regards to animations or bones or what have you, so you'd want to check out the resources on content creation on the site as well.
-
Just for the sake of writing, here's my computer story. Also, I'm not QQ'ing about anything; it's just that the timing of it (a weekend) really sucks, since I had some stuff I was planning on doing. Anyways though - Saturday, May 15th, I ordered a 2TB backup drive and a SATA/IDE-to-USB adapter. I figured I'd go ahead and start backing up all my stuff during the week, because one of my hard drives (a 750GB Seagate Barracuda) had been reporting errors (spin retry count) for over a year. During that time, I just stopped using the drive and left all the data intact, copying all but a couple of large media folders to my other 750GB HD (I had bought 3 a while ago). From Monday to Friday, I spent all day every day working on backing up my data. I have about 5 hard drives lying around with stuff on them from over the years, so I had to consolidate it all onto one drive first. From there, I had to duplicate that across the 2TB drive and then to my third 750GB drive. The biggest problems I had when moving all that data were lockups from Teracopy and antivirus interruptions from AVG. All in all, it took all week to back up everything, but I finally got it done and all my extra hard drives formatted. Ideally, I'd random-fill wipe them all, but I don't have the hardware for that, since I only have one adapter and it takes a very long time to do drives that large.

So, after all the software backups were completed, it was time to remove hard drives from my system. I had left my case open on the floor with a few fans blowing across it, plus the extra hard drives that were attached internally but lying outside the case due to space constraints. I've worked with computers a while, so I know how careful you have to be with them. After I took out all the hard drives, I had to boot back up a couple of times to ensure I had labeled them right, marking the bad one, the one that's going to be my new storage, and the previous storage that I'm saving with the 2TB. Along the way, Windows notified me my system configuration had changed too much and I'd need to activate again. Bummer. It's a genuine Windows copy, but I got it from MSDNAA while I was in college, and sometimes those can be quirky on reactivation.

Anyways, I finally had everything working fine, so I needed to put in my final HD and then close up the case. Somewhere along the way, I must have bumped a cord or something and didn't notice, because as soon as I started it up, the PC made a really scary beep. Since it was Friday morning, like 3 AM, I cut the power so it wouldn't wake up everyone else in the house. I had to wait about 3-4 more hours before I could get back to working on it. After the time passed, I booted up the PC again to try and figure out what happened. It just powered on for a couple of seconds, then powered off. It did this over and over until I cut the power. I figured I had a short somewhere, since I've seen computers do similar things before. I took it all apart and cleaned everything out. While it was totally disassembled on my floor, I tried booting it up again. Success! Carefully, I put it all back together into the case and started it up again. Success! By now I was feeling pretty good, since I wouldn't have to replace anything. However, when I got into Windows, the sound wasn't working. I have one of those Asus Xonar cards that requires a power cord, so I jiggled that in the case and that seemed to fix it. I still had only a few wires hanging out of the case, since I was taking very small steps in putting it back together.
I didn't want anything to happen again. As I put in the last two cables and slid the case closed, it happened again. The PC shut off. !@%!#^@#^!#$ It started doing the same thing with the endless reboots. Annoyed, I took it all apart again to try and figure out what happened. This time though, it never made it out of the endless reboot cycle. I'm not sure what happened, but I'm thinking the power cord on my sound card and the 1x PCI-E slot it fits in are to blame. Maybe it got loose or out of place a bit on the slightest movement and, as a result, caused a short in the mobo. I tried booting up using only the CPU and PSU, but it was still stuck in the endless cycle. I tried removing the battery as well and was going to clear the CMOS, but at that point I figured it was a lost cause. I read up online about the model I have (X38-DS4) and the "endless reboot" problem it seems to get. I figure that was triggered somehow, and it wasn't an electrical short that brought down my PC.

I haven't done any PC maintenance in years, because I know computers get "settled," and as soon as you try to do something with them, they have a tendency to break. That's why I don't even mess with computers or upgrade them as much as I used to; it's just too iffy.

Anyways, now I'm looking at what my options are. I've been waiting since 2009 to upgrade my PC, but I've not yet found an upgrade that would give a good price/performance ratio. I mean, I have a Xeon X3350, which is almost literally the same as a Q9450 (the Xeon was a lot cheaper when I bought it, but it still cost a lot). So it's a powerful little CPU, but it doesn't have HT. That means if I were to upgrade, I'd only get noticeable performance increases if I went with an i7-920 or above. Anything else is simply not worth it. Therein lies the problem. Why spend another $1k+ for a new i7-920 computer when I'm not really going to be able to make the most use of it? The same is true of upgrading the graphics card: what am I going to use a 1GB 5870 or GTX 480 for? So, I've put off upgrading or parts buying for almost 2 years now because I'm "fine" with what I have. My 8800 GT is like 4-5 generations old now, but it still works fine. If I am going to upgrade, it needs to be a significant upgrade that justifies buying new stuff to last another number of years. For a power user like me, I look at it like this: each $1k spent means the computer should last at least 1 year. So if I put down ~$2k for a PC, it should last at least 2 years for me and then still be usable by others with less demanding needs. I pass all my old computers down through the family, so they all end up paying for themselves.

Enter the 980X: a 6-core CPU with HT - now there's an upgrade that would be significant enough to make. However, $1,000 for the CPU alone? No way. I just can't justify spending that much money on something that is going to be replaced in about a year. It'll still have value for many years to come, but it'd never pay for itself if I got it. So, scratch that. Instead, I'll just wait for the consumer-level 6-core w/ HT CPUs, which should be coming out in Q3 (the i7-970, supposedly). Now, I'm not that much of a waiter, because if you just wait for the next best thing, you'll never get to enjoy anything as it all passes you by. However, if I can get by just fine with what I have, then I don't mind waiting to upgrade to something that really seems like it'd be worth the investment.

Where does that leave me in regards to my current PC?

1. Pay ~$100 for a replacement mobo to get a working system back up. Assuming everything else still works fine, I can get back to how things were and continue to wait to upgrade. However, I've been itching to do some upgrading for a while now, which brings me to 2.
2. Upgrade the CPU, RAM, mobo, and video card (1-2 generations old, not the latest) and reuse the HDDs, case, and PSU. This is more cost efficient in the long run, since I save on rebuying those parts, and this computer is not the type to be handed down anyways due to how big it is.
3. Wait it out for the next big upgrade. I'll make use of my desktop-replacement laptop (which is what I'm on now) to get the most out of a poor investment. I was thinking at the time it'd be a good idea to get it because of what I thought I needed, but it turned out to be a bad decision. It's not a bad product; it's just not what I needed: an HP Pavilion HDX9494NR.
4. Downgrade. Make use of as much as I can to put together a lesser computer. The next paragraph explains this a bit more.

I still have a C2D E8400 CPU lying around unused that I could make use of as well. Come to think of it, I have enough parts to build two smaller PCs, actually. I'd just need mobos, cases, and another PSU, really.

There are some other factors to consider as well. Right now, my desktop PC warms my room by at least 2-3 degrees Fahrenheit. On top of the fact that it's about to be summer here in Texas, that is going to be bothersome. It's also rather noisy and large and uses a lot of power. It's so large and heavy, in fact, that I've been wanting to scale down significantly for a while now. I'm thinking about buying a more specialized case and changing out a few things to make it more efficient and less "polluting" to my room. That's why I'm in no rush to buy a mobo to get it back working; I want to think of a longer-term strategy while the opportunity presents itself. Monetary issues are not a constraint right now, and I do have time to think it out, since it's the weekend. Of course, I'm not asking anyone for advice on what to do; I'm just thinking aloud for my own edification. I like writing.

That's basically where I am right now: thinking aloud to help wake up and start considering the possibilities ahead. I guess at least I have my backups done, so I'm not preoccupied with the thought of having lost anything. Also, since I'm not employed right now, I don't have the need to act ASAP and can take some time to think things through. Maybe it was a good thing after all with the timing. I'll probably come back to this post after I've done some more research and made a decision. For now though, I need to start working on a project that I was originally intending to spend all of last week on. Dang, I don't like getting behind.
-
I guess it's been a while since I added a blog entry. Since I don't have anything specific to write about, I've stuck with a generic, boring title.

I spent all week backing up the past 5 years of my computer life to an external hard drive (WD Elements) as well as another internal one. One of my hard drives has been having some problems lately, and I've ignored it for over a year. I decided to stop chancing it and simply put in the time to save all my stuff that I've worked on and accumulated over the past 5 years. It was really time consuming, but now I can work in peace without the worry of losing my stuff. To help make copying easier, I used Teracopy. It's an excellent little utility to help manage larger file copies. I did run into some problems with it, since I was moving hundreds of GBs and 100k's of files at a time, but it was manageable. Most of the problems, I think, weren't with the program alone as much as with Windows and the hard drives themselves. Who knows though; at least the task is done now!

Last week, I decided to give some new technology a try. I got a Zotac MAG to give a whirl. My ideal use for it is to run as a low-power, low-noise, low-heat server for some of my future programs. So far, I'm loving it! I first put on XP, and it all went really nicely. However, I didn't need XP for the server stuff, so I put on Server 2003 instead. I'm very impressed with it. It "feels" fast, and while I know a lot of that has to do with not having that much stuff installed on it, it still seems able to handle a lot of things I thought it'd struggle with. It does have support for 3D hardware accelerated graphics via the Nvidia Ion chipset, which makes it far more useful to me than any of the Intel integrated stuff that can't really do much. In fact, I can run Leadwerks just fine on it; although the framerate is around the 20s in the editor most of the time, it's still very responsive and usable. I am very satisfied with the purchase and will be looking to see how I can fit such machines into my deployment strategies if I ever get something to "ship". The only thing I am not happy about is the cost of laptop memory. It requires laptop-style DDR2-800, and that would set me back another $100 for 2x 2GB sticks. I figure it's not worth paying 30% more of the price for the resulting performance increase, which won't be that much currently. Eventually though, I'd add more if I had the need. I even thought about throwing in an SSD, but that too won't give a decent ROI, since it's just a lightweight server that won't even be using the HD that much.

In other exciting news, NetDog 2.X.X is here! I didn't get too far into 1.7.1, but it looks like the new version is going to help make development a lot more fun and exciting. I'll probably start looking into it soon to get down the changes and reevaluate some of the current projects I wanted to use it in, to see how much more they could benefit from it as opposed to trying to do it all myself. I really want to get a working prototype done sometime in the next few months, but I don't think I can focus on that long enough to finish. I still need to go back and finish my CEGUI tool as well. I'm not in any rush though, because I still don't want to lock myself into one version of Leadwerks with 2.32 around the corner. At some point, I'm just going to clamp down on versions and get something done, but I have the luxury of not having to right now, so I won't. I think that's my biggest obstacle to productivity: the luxury of time. Oh well...

I have another project I am working on right now, but I guess I'll save that for another day. I'm waiting for my last folder to back up as I write this. Then I can remove all the hard drives attached to my system and finally get back to programming, although I might just call it a day soon. The whole backing up and verifying of files has been really draining, not to mention boring. But it must be done, so that's that.
-
Thanks for the replies, guys. I should just mention, I'm not the author of ENet or Netwerks! I kinda forgot we have the source code to Netwerks as part of the SDK, and the ENet module comes with BlitzMax (I'm a license holder). I guess I could make some changes to them both and send them to Josh later for some basic improvements in the library. I might end up doing that, since it shouldn't be that hard. I'd still suggest people look into something higher level first if they are trying to make a game and need more features. However, for very simple programs and demos, it should be fine.
-
The recent thread about Netwerks (a wrapper for ENet) got me thinking some more about the library. I decided to fire up the latest source distribution and get hacking away at it. In this blog post, I'll share some of the modifications I have made to the library to make it a bit more usable as a core networking library to build on top of.

Let me start off by saying that ENet is a pretty well designed low level library. I've always liked the design, which you can read about here. The documentation is rather sparse and the code has some areas where more comments would help, but the source code is provided and it comes under a really friendly MIT-style license. There is room for a few improvements with the library though.

1. Memory failure management - The library has some mixed logic when it comes to memory allocation failures. There is code in certain portions that attempts to handle NULL pointers, but the main API function for allocating memory simply calls abort if an allocation fails. Naturally, in any program you don't want a library simply calling abort under the hood, leaving you and your users wondering what happened. The first set of changes I've made is to make the library consistently handle NULL memory allocations. This means that at any time, if a memory allocation fails, the library will fall through execution and error out of the top level API calls without bringing down your entire application. Of course, I still have some more testing to do to make sure I didn't miss anything, but so far so good. For this set of modifications, I had to make a few API functions return error codes rather than nothing. This was done so that at the first sign of any error, execution can be stopped by falling through, rather than continuing operations and possibly corrupting vital data. I have also added a Panic function that is called when a memory allocation failure is detected. I wanted to separate the responsibility of handling a critical error, such as a memory allocation failure, from the function that allocated the memory. At least now, if programmers want their application to abort on a memory error, they can do so knowing that is what actually happened, rather than being left in the dark.

2. Packet size limitation - In my earlier reply, I mentioned how ENet supports any sized packet through the use of fragments, but this can be exploited by clients to crash your server by requesting too much memory too fast, which inevitably leads to abort being called and taking down the server. Such bugs have been found in commercial MMO servers, and as a result, malicious users have been able to negatively impact business. If you have the source code for ENet, all you have to do is find the line:

totalLength = ENET_NET_TO_HOST_32 (command -> sendFragment.totalLength);

Then, you can check against some predefined constant. If the packet is larger, I just return the result of enet_peer_disconnect (one of the functions I've modified to have a return value now; under normal operations, it returns 0). See the sketch at the end of this post. A change such as this is pretty vital to ensure clients cannot crash your server with huge packet allocation requests.

3. Callback function tracking - For this modification, I've defined a few sets of macros to compile in the __FUNCTION__, __LINE__, and __FILE__ preprocessor macros to report the malloc and free callback activity. This was done so I can track where allocation/deallocation requests are made throughout the library.
I won't keep this modification in, since it's served its purpose. It did help in showing where memory allocations were occurring at high frequencies. I'll take a look at changing the mechanics of memory allocation and deallocation in those areas to make the library more efficient and memory friendly (by using a boost pool, for example). Other parts of the library only allocate memory once, so those areas are fine as-is. I like having debug messages outputted, since tracing through the Visual Studio debugger has a noticeable impact on performance when breakpoints trigger.

Well, those are the 3 main things I worked on tonight with ENet. The first two are vital to making the library more usable in a commercial environment. The third was just to further my understanding of the library itself. I'm currently adding in some random memory failure tests to ensure that at any time, if a memory allocation fails, the logic I've added handles it properly in all the areas. I need to review all the code manually as well, but I'll have to do that on a fresh set of eyes. Tonight I'll run a simple client sending variable sized messages to the server to check for any bugs. I think I want to spend a little time getting a better idea of how NetDog works through object replication by experimenting with an ENet base and building up my own system. I guess I could also check out RakNet's code, but I don't care for the code design.

[Edit 1] After doing some testing, I had a few crashes whenever the server or client window was suspended and then resumed. I tracked down the bug to some logic I added to free packets on an error, but they were already being freed by the caller function. I checked all the code I added, double checked the logic of the calling function, and removed my offending code where needed. Now, no more crashes! During that testing, I also decided to remove all the gotos in favor of returning a function call that wrapped the logic. While doing that, I realized I need to work in a better system for error propagation through the use of a global panic flag. Adding that logic was not hard at all, since I just wrote a small function with a static variable that's called before the panic function is invoked. Looking through the code some more, I need to add more parameter checking to handle NULL pointers. I'll get started on that now.

[Edit 2] I've been cleaning up the library and condensing it into one file recently. I had to reformat the spacing by hand, which was not fun, but at least it's in a format I'm happy with. There are bound to be parts I've missed, so I'll have to make many passes through it in the upcoming days on fresh sets of eyes to find those inconsistencies. Likewise, I have to do the same to check the error handling mechanics for all functions. I think I'll write a new public API for the library, and then wrap calls to internal code with SEH exception handling to ensure that if a fatal exception is generated inside the library, information about it can be logged to help track down the problem.
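For reference, the fragment-size guard from item 2 looks roughly like this in context (a sketch, not a drop-in patch: ENET_MAX_TOTAL_PACKET_SIZE is a constant you'd define yourself, and enet_peer_disconnect only returns a value in my modified build; in stock ENet it returns void):

/* Inside ENet's handler for incoming fragment commands in protocol.c,
   where 'command' and 'peer' are already in scope. Reject any fragmented
   packet whose announced total size exceeds the cap, before any memory
   for reassembly is allocated. */
enet_uint32 totalLength = ENET_NET_TO_HOST_32 (command -> sendFragment.totalLength);

if (totalLength > ENET_MAX_TOTAL_PACKET_SIZE) /* hypothetical cap, e.g. a few MB */
    return enet_peer_disconnect (peer, 0);    /* modified to return an error code */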
-
The article "1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond" is one of the main references for RTS network programming and gives a good idea of the complexity involved. In addition, "What Every Programmer Needs to Know About Game Networking" provides more information on the topic. The networking model you would use for an RTS is a "deterministic peer-to-peer lockstep networking model".

Coming up with a deterministic simulation, regardless of game genre, is no easy task. As soon as you use floating point types in your calculations, or have, say, a physics system that uses floating point, determinism goes out the window. For such matters, you'd want to read up on floating point determinism, which should give you a good idea of the things to keep in mind when implementing your logic. You can count on existing physics middleware not being deterministic by default, so if you need physics to be a core part of your RTS, you have to relegate the fancy physics API to being graphical eye candy and implement your own system on top of it that you can guarantee to be deterministic.

Gamedev.net has some good threads with comments about RTS network programming as well. Do a few searches there to get some additional ideas. From a thread there: I think the problem would be better solved with TCP, since you need reliable, in-order packets that are not going to be sent as fast as the input would go in, say, an FPS game. The benefits of UDP are best seen when you can discard earlier packets in favor of more recent packets, but such a model is not applicable in an RTS, since everything has to be deterministic if you don't want the game to fall apart. You could make it work with UDP though; it just depends on the final design of the game and how much data you are sending, which also affects how the TCP model would work. There's no easy answer here as compared with other game genres, but I'd just start with whichever is easier for you to use and try to make it work.

With that said, I'd suggest using RakNet over Netwerks if you are going with UDP for now. If you are trying to make a game and have never worked with TCP before, then I'd suggest sticking to an existing library rather than trying to write your own, because of how much can go wrong. Existing libraries like RakNet are tried and tested, and your current goal is to write a game, not a networking library. There are low level TCP libraries around, such as boost::asio or POCO, but I'd advise against starting with these for the same reason you'd avoid ENet/Netwerks: you want something higher level so you can focus on your game rather than the networking library. If you do have the time to spend learning networking, or it interests you, then by all means, try whatever you like. I ended up doing that and spent a while learning low level TCP and UDP concepts, because I wanted to develop my own library rather than not understand what was going on under the hood of existing libraries. However, you will lose a lot of time and productivity, so I'd only recommend it if you aren't working on a main project.

This is explained a bit in the earlier articles, but you'd have a central server for lobby, chat, and matchmaking, and then a client/server model where the 'server' is the peer host of the game.
A P2P model can work, but if you notice in the 1500 Archers article, it was programmed by Ensemble Studios, who I'd safely say had more resources and people than any of us have access to. I think if you just stick to the client/server model among the peers in the game, you'll have less to deal with when it comes to managing connections.

How exactly do networked games work? They work the same way a non-networked game works, with the exception that a networked game has to process "messages" from external sources that occurred in the past (lag). A non-networked game can be thought of as a specialization of a networked game where there is 0 network latency, since all events are processed immediately. You can easily simulate this as well. All you have to do is write a simple non-networked demo where you buffer key input by some delay (to simulate the ping to a server) and then, after said time has elapsed, process those key inputs (see the sketch at the end of this post). You will notice that the higher the delay is, the more inaccurate the simulation becomes. This is where client side prediction comes into play, as well as interpolation/extrapolation (see Entity Position Interpolation Code and dead reckoning, for example).

How does one side keep track of the player on the other side, or the ammo a player has, or the damage a player's bullet does? Simply through the use of messages. In a client/authoritative server architecture, the server is the referee, validating events before passing them on to other clients. So a client connects to the server, and the server creates a player object. This event is sent to other connected clients with the basic information of position, model, etc. If a client sends the event to shoot their BFG, it's the server's job to check whether the player has that weapon first, and then whether it has enough ammo to fire. If it does, then the server sends out the event. If it doesn't, the client gets a message that they are out of ammo and needs to correct their game state with the server state. Object replication and state management are the key concepts here for such interactions.

I know that packets are "data", but what defines a "message"? The programmer - you - defines network messages to mean certain things. Then, the client and server process those messages and update their simulation of the game world based on them, taking into account the time the event occurred. The easiest way to think about it is like the Win32 event model. When you get a message in your window proc, you know how to process it based on the specifications Microsoft has given you. So when you get a WM_KEYDOWN message, you know exactly how to use the wParam and lParam fields. The identifier WM_KEYDOWN is the opcode for the message, and then the data is stored in two 32-bit types. For your own networking messages, you might choose a byte (256 unique), word (65k unique), or dword (4bln unique) opcode, and then based on that opcode, you define the structure of the message. This is what defines your networking protocol. It is the same type of protocol that you are already using when you use TCP or UDP; in either of those links, you'll see the format of the underlying packets that are sent across that protocol.

So, let's say you are using UDP and your packet protocol is as follows:

Header
    Opcode [2 bytes]
Body
    Data [0..1022 bytes]

Let's say you assign opcode 0 to an operation where the client sends their name, and since this is your first time designing packets, you implement it like you would in a regular program.
The higher level packet structure for opcode 0 would look like:

word opcode
int nameLength
char * name

To process the packet, your logic would look something like:

void ParseOpcode0()
{
    nameLength = ReadInt();
    std::string tmpName;
    tmpName.resize(nameLength);
    ReadArray(&tmpName[0], nameLength);
}

Looks harmless enough, right? The golden rule of network programming is to never trust client data. Many professional commercial companies forget this rule all the time and pay for it when hackers exploit their systems using flaws in their code. The "correct" way to parse that packet would be as follows:

void ParseOpcode0()
{
    nameLength = ReadInt();
    if (nameLength < 0 || nameLength > 16) // let's say 16 max characters for a name
    {
        // log error and disconnect client
    }
    std::string tmpName;
    tmpName.resize(nameLength);
    ReadArray(&tmpName[0], nameLength);
    for (int x = 0; x < nameLength; ++x)
    {
        // validate character tmpName[x] to ensure it's alphanumeric, '_', or any other rules that apply
    }
    // Now validate the contents of the string, i.e. filter curse words, "GM", "admin", etc.,
    // make sure the first 3 characters are letters, w/e
}

For pretty much any message you define, you need to validate the contents during parsing and be sure to handle all possible problems. This is always easier said than done, but it's only a small part of the challenge of network programming.

You also need to take endianness into account if you are going to be working across different platforms and sending data back and forth. Otherwise, a value like 0x0000000F on a little-endian machine turns out to be 0x0F000000 on a big-endian machine, which is way off and can wreak havoc on your system (most people run into this problem when interfacing with Java servers). Another issue to consider is signed/unsigned data types and how overflow can affect your logic if you do not properly check values. For example, in the code above, since nameLength was an integer, a negative value is possible, so you need to check < 0 and > max. Let's say you had an unsigned value and only checked > max, which would work fine in this case, but if the type ever were to change back to a signed int, then you'd have a bug in the logic, and managing issues like that is a real pain once you enter production. As you program, you have to keep such things in mind and really pay attention to the logic operations you perform on different data types. That was just an example; ideally, your nameLength would just be an unsigned character to save space and further help prevent overflow issues, since 255 is the max size possible.

Taking things one step further, packet formats can vary based on flags set in the data. Let's say you have a multi-format packet like this:

word opcode
int id
bool hasName
if hasName
    int nameLength
    char * name
endif
bool hasPosition
if hasPosition
    float x
    float y
    float z
endif

Your processing logic would simply follow the structure and parse out the packet as it was sent. So, without error checking and validation:

id = ReadInt();
hasName = ReadByte();
if (hasName != 0)
{
    nameLength = ReadInt();
    ReadArray(&tmpName[0], nameLength);
}
hasPosition = ReadByte();
if (hasPosition != 0)
{
    x = ReadFloat();
    y = ReadFloat();
    z = ReadFloat();
}

So, packets can get quite complex. With input driven games like an RTS or FPS, you won't have such complicated packets, but you'd see similar ones in games that deal with persistent worlds.

Object replication - The ability to create, destroy, serialize, and transmit game objects.
RakNet offers the ReplicaManager3 interface for this (I've never used it myself though). NetDog's interface for this can be found here: Objects.

State management - Maintaining a synchronized state of all objects in your game, as needed, across the network. For example, this is the interface NetDog provides: Object Driven Model Tutorial.

Patching - In the traditional sense. If you want the means to update client files through the game itself (think content downloads, maps, media, etc.) rather than having to do it externally, this is important.

Some of these things might seem easy at first glance, but they definitely aren't! Depending on your game type, you will have different needs and requirements to different degrees. In an RTS, you are working with commands being sent over the network, so you don't have much object replication going on. Instead, you have to worry about state management to ensure all simulations stay synchronized.

Hope that helps some. There is a lot of theory to game network programming, and it can get quite complex. There are also a lot of different perspectives and opinions about how things can (or should) be done, so just take this post as one perspective on the matter. There may be some inaccuracies here and there in my post, so make sure to do some more research on anything I've mentioned for other explanations. Good luck!
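As a footnote to the delayed-input experiment mentioned above, here's a rough sketch of the idea (my own illustration; GetTimeMs, ReadKeys, and ApplyInput are stand-ins for whatever your engine provides):

#include <cstdint>
#include <queue>

// Stand-ins for engine functions (hypothetical).
uint64_t GetTimeMs();
uint32_t ReadKeys();            // bitmask of keys currently down
void ApplyInput(uint32_t keys); // advance the simulation with this input

struct DelayedInput
{
    uint64_t applyAt; // timestamp when this input may be processed
    uint32_t keys;
};

// Each frame, capture input now but only apply input that is at least
// 'delayMs' old - the same effect a round trip to a lockstep host has.
// Raise delayMs and watch the simulation get less and less responsive.
void PumpInput(std::queue<DelayedInput>& pending, uint64_t delayMs)
{
    pending.push({ GetTimeMs() + delayMs, ReadKeys() });

    while (!pending.empty() && pending.front().applyAt <= GetTimeMs())
    {
        ApplyInput(pending.front().keys);
        pending.pop();
    }
}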