NightQuest
  1. There's also HelpScribble, which is made by the same developer as RegexBuddy.
  2. Forgive me if I'm wrong, but I think your problem may be the const in:

         const char* pixelBuf = new char[tex->GetMipmapSize(0)];
         tex->GetPixels(pixelBuf);

     When I did this, I had to cast it:

         GLubyte* tPixels = new GLubyte[tex->GetMipmapSize(0)];
         tex->GetPixels(reinterpret_cast<const char*>(tPixels));

     I have no idea why the parameter is declared const char* when the buffer is obviously modified; if anything, it should be char* const.
  3. It looks like a bug, honestly. This works, though - just use it the same way:

         class FS : public FileSystem
         {
             FS() : FileSystem() {}
         public:
             static int GetFileSize(const std::string& path)
             {
                 std::string file = RealPath(path);
                 struct stat filestat;
                 stat(file.c_str(), &filestat); // stat the resolved path, not the original argument
                 return static_cast<int>(filestat.st_size);
             }
         };
  4. Same in the new ones - it's only the last parameter that matters, but it's Window::FullScreen instead of 32. The rest of the code is just fluff. You may want to get the system width/height for that window, actually. I don't believe LE has a method to determine the current monitor width/height (only supported ones), so you'd have to rely on native calls. On Windows (I don't know how on *nix):

         unsigned int width = GetSystemMetrics(SM_CXSCREEN);
         unsigned int height = GetSystemMetrics(SM_CYSCREEN);
  5. In C++:

         unsigned int windowStyle = Window::Titlebar | Window::Center;
         if( stoi(System::GetProperty("fullscreen")) )
             windowStyle = Window::FullScreen;

         unsigned int width = stoi(System::GetProperty("width", "1920"));
         unsigned int height = stoi(System::GetProperty("height", "1080"));
         Window* window = Leadwerks::Window::Create("Test", 0, 0, width, height, windowStyle);
  6. One way it could be done: compare the file with the previously versioned file (this would require keeping a .old copy, or storing the file hash somewhere). If identical, update automatically. If not identical, offer to update; if the user selects No, show them a list of changes from the previous version (a diff) so they can implement them manually.
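     The comparison step described above could be sketched in plain C++ like this. filesIdentical is a made-up helper name and no particular updater API is assumed; a real updater would more likely compare stored hashes than whole files:

     ```cpp
     #include <fstream>
     #include <iterator>
     #include <string>
     #include <vector>

     // Byte-for-byte comparison of two files; returns true only when both
     // open successfully and their contents match exactly.
     bool filesIdentical(const std::string& a, const std::string& b)
     {
         std::ifstream fa(a, std::ios::binary), fb(b, std::ios::binary);
         if( !fa || !fb )
             return false;

         std::vector<char> da((std::istreambuf_iterator<char>(fa)), std::istreambuf_iterator<char>());
         std::vector<char> db((std::istreambuf_iterator<char>(fb)), std::istreambuf_iterator<char>());
         return da == db;
     }
     ```

     If the files are identical the updater can overwrite silently; otherwise it would prompt and, on "No", hand both buffers to a diff routine.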
  7. Threw this together, seems to work rather well. Source.zip
  8. Thanks to this, something like this should work. EDIT: I installed XUbuntu 14.04 on dual-boot to test this; below is the working version:

         bool isActiveWindow(Leadwerks::Window* window)
         {
             if( window == nullptr )
                 return false;

         #if defined(__linux__) && !defined(__ANDROID__)
             ::Window root = 0L;
             ::Window active = 0L;
             Atom _NET_ACTIVE_WINDOW = XInternAtom(window->display, "_NET_ACTIVE_WINDOW", False);
             Atom actual_type;
             int actual_format;
             unsigned long count, bytesAfter;
             unsigned char* properties = NULL;

             root = DefaultRootWindow(window->display);
             if( XGetWindowProperty(window->display, root, _NET_ACTIVE_WINDOW, 0L, sizeof(::Window), False,
                                    XA_WINDOW, &actual_type, &actual_format, &count, &bytesAfter,
                                    &properties) == Success )
             {
                 active = *reinterpret_cast<::Window*>(properties);
                 XFree(properties);
             }

             return (window->window == active);
         #else
             return window->Active();
         #endif
         }
  9. Those are awesome, but they do not give you the current display mode - maybe an int System::GetCurrentGraphicsMode()?
  10. We have Window::SetLayout, which is awesome for resizing... but what if we want a borderless window? Right now, I'm using this platform-specific code for that:

          if( window->KeyHit(Key::F11) )
          {
              int screenWidth = GetSystemMetrics(SM_CXSCREEN);
              int screenHeight = GetSystemMetrics(SM_CYSCREEN);

              if( fullscreen )
              {
                  long style = reinterpret_cast<long>(window->GetUserData());
                  SetWindowLongPtr(window->hwnd, GWL_STYLE, style);

                  int width = stoi(System::GetProperty("width", "1280"));
                  int height = stoi(System::GetProperty("height", "720"));
                  window->SetLayout((screenWidth - width) / 2, (screenHeight - height) / 2, width, height);
              }
              else
              {
                  long style = GetWindowLongPtr(window->hwnd, GWL_STYLE);
                  window->SetUserData(reinterpret_cast<void*>(style));
                  SetWindowLongPtr(window->hwnd, GWL_STYLE, style & ~(WS_BORDER | WS_OVERLAPPEDWINDOW));
                  window->SetLayout(0, 0, screenWidth, screenHeight);
              }

              fullscreen = !fullscreen;
          }

      This would allow us to simply do:

          if( window->KeyHit(Key::F11) )
          {
              fullscreen = !fullscreen;

              int width = fullscreen ? GetSystemMetrics(SM_CXSCREEN) : stoi(System::GetProperty("width", "1280"));
              int height = fullscreen ? GetSystemMetrics(SM_CYSCREEN) : stoi(System::GetProperty("height", "720"));

              int style = Window::Center;
              if( fullscreen )
                  style &= ~Window::Titlebar;
              else
                  style |= Window::Titlebar;

              window->SetStyle(style);
              window->SetLayout(0, 0, width, height);
          }

      And maybe we could get a Vec2 System::GetCurrentMonitorResolution() or something to retrieve the current monitor's display resolution? While I don't mind the Win32 API, I'd like to be able to do these things easily on both platforms. GTK+ has gtk_window_set_decorated, and X11 has DefaultScreenOfDisplay.
  11. Thank you for the transparency! I love seeing updates like this. I look forward to seeing these things become a reality.
  12. I was able to get it to take the top left at the correct scale by passing the final size to the buffer, but not the texture. However, it looks like World::Render() always renders at 0,0. This caused me to look at Camera::SetViewport(), but I cannot seem to use it correctly. Has anyone used this before?

          bool ScreenshotHighRes(Camera* camera, const std::string& filename, GLint scale)
          {
              if( scale <= 1 )
                  return Context::GetCurrent()->Screenshot(filename);

              bool ret = false;
              GLint tileWidth = Context::GetCurrent()->GetWidth(), tileHeight = Context::GetCurrent()->GetHeight();
              GLint finalWidth = tileWidth * scale, finalHeight = tileHeight * scale;

              Buffer* oBuffer = Buffer::GetCurrent();
              Buffer* buffer = Buffer::Create(finalWidth, finalHeight);
              Texture* tex = Texture::Create(tileWidth, tileHeight);
              buffer->SetColorTexture(tex);
              Buffer::SetCurrent(buffer);

              GLuint texSize = tex->GetMipmapSize(0);
              GLubyte* tPixels = new GLubyte[texSize];
              memset(tPixels, 0, texSize);
              GLubyte* pixels = new GLubyte[(tileWidth * 3) * tileHeight];
              memset(pixels, 0, (tileWidth * 3) * tileHeight);
              GLuint rowLength = tileWidth * 3;
              GLubyte* line = new GLubyte[rowLength];
              memset(line, 0, rowLength);

              GLuint count = 0;
              for( GLint y = 0; y < scale; ++y )
              {
                  for( GLint x = 0; x < scale; ++x )
                  {
                      camera->SetViewport((x - 1)*tileWidth, (y - 1)*tileHeight, tileWidth, tileHeight);
                      World::GetCurrent()->Render();
                      tex->GetPixels(reinterpret_cast<const char*>(tPixels));

                      // BGRA -> RGB
                      for( GLuint p = 0, t = 0; t < texSize; t += 4, p += 3 )
                      {
                          pixels[p] = tPixels[t + 2];
                          pixels[p + 1] = tPixels[t + 1];
                          pixels[p + 2] = tPixels[t];
                      }

                      // Flip image vertically
                      for( GLint row = 0; row < tileHeight / 2; row++ )
                      {
                          memcpy(line, pixels + (row * rowLength), rowLength);
                          memcpy(pixels + (row * rowLength), pixels + ((tileHeight - row - 1) * rowLength), rowLength);
                          memcpy(pixels + ((tileHeight - row - 1) * rowLength), line, rowLength);
                      }

                      // Write TGA
                      TGAHeader tgah = { 0 };
                      tgah.ImageType = 2;
                      tgah.ImageWidth = tileWidth;
                      tgah.ImageHeight = tileHeight;
                      tgah.PixelDepth = 24;

                      Stream* file = FileSystem::WriteFile(filename + "_" + to_string(x) + "_" + to_string(y) + ".tga");
                      if( file )
                      {
                          file->Write(&tgah, sizeof(TGAHeader));
                          file->Write(pixels, (tileWidth * 3) * tileHeight);
                          file->Release();
                          count++;
                      }
                  }
              }

              if( count == scale * scale ) // success only if every tile was written
                  ret = true;

              delete[] tPixels;
              delete[] pixels;
              delete[] line;
              Buffer::SetCurrent(oBuffer);
              tex->Release();
              buffer->Release();
              return ret;
          }
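      The pixel shuffling inside the tile loop above can be isolated and checked on its own. A minimal sketch in plain C++ (no Leadwerks types; bgraToRgb and flipVertical are made-up helper names mirroring the two inner loops):

      ```cpp
      #include <cassert>
      #include <cstdint>
      #include <cstring>
      #include <vector>

      // Convert a BGRA buffer (4 bytes per pixel) to tightly packed RGB (3 bytes per pixel).
      std::vector<uint8_t> bgraToRgb(const std::vector<uint8_t>& bgra)
      {
          std::vector<uint8_t> rgb(bgra.size() / 4 * 3);
          for( size_t t = 0, p = 0; t < bgra.size(); t += 4, p += 3 )
          {
              rgb[p]     = bgra[t + 2]; // R
              rgb[p + 1] = bgra[t + 1]; // G
              rgb[p + 2] = bgra[t];     // B (alpha at t + 3 is dropped)
          }
          return rgb;
      }

      // Flip an RGB image vertically by swapping whole rows, top with bottom.
      void flipVertical(std::vector<uint8_t>& rgb, size_t width, size_t height)
      {
          const size_t rowLength = width * 3;
          std::vector<uint8_t> line(rowLength);
          for( size_t row = 0; row < height / 2; ++row )
          {
              uint8_t* top = rgb.data() + row * rowLength;
              uint8_t* bottom = rgb.data() + (height - row - 1) * rowLength;
              memcpy(line.data(), top, rowLength);
              memcpy(top, bottom, rowLength);
              memcpy(bottom, line.data(), rowLength);
          }
      }
      ```

      The flip is needed because OpenGL reads pixels bottom-up while TGA (without the origin bit set) stores rows top-down.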
  13. That's actually exactly what I'm trying to do. This is a pretty foreign thing to me, so I'm sure I'm on the wrong track... but from what I can tell, I need to adjust the viewport while preserving the camera's location and angle, so as not to mess with parallax. Before this, I had tried to use gluPickMatrix, but that didn't produce any effect at all; I assumed this was because it's deprecated, so I gave this a shot... which also seemingly did nothing. :/ Sorry for typos - wrote this on my phone.
  14. From what I can tell, this should work (and is far faster - it happens within a second), but it looks like World::Render() might be interfering with the matrix being set. Is there a way to force it to use mine?

          bool ScreenshotHighRes(const std::string& filename, GLint scale)
          {
              if( scale <= 1 )
                  return Context::GetCurrent()->Screenshot(filename);

              bool ret = false;
              GLint tileWidth = Context::GetCurrent()->GetWidth(), tileHeight = Context::GetCurrent()->GetHeight();
              GLint finalWidth = tileWidth * scale, finalHeight = tileHeight * scale;

              Buffer* oBuffer = Buffer::GetCurrent();
              Buffer* buffer = Buffer::Create(tileWidth, tileHeight);
              Texture* tex = Texture::Create(tileWidth, tileHeight);
              buffer->SetColorTexture(tex);
              Buffer::SetCurrent(buffer);

              GLuint texSize = tex->GetMipmapSize(0);
              GLubyte* tPixels = new GLubyte[texSize];
              memset(tPixels, 0, texSize);
              GLubyte* pixels = new GLubyte[(tileWidth * 3) * tileHeight];
              memset(pixels, 0, (tileWidth * 3) * tileHeight);
              GLuint rowLength = tileWidth * 3;
              GLubyte* line = new GLubyte[rowLength];
              memset(line, 0, rowLength);

              GLfloat pm[16];
              glGetFloatv(GL_PROJECTION_MATRIX, &pm[0]);
              glMatrixMode(GL_PROJECTION);
              glPushMatrix();

              GLuint count = 0;
              for( GLint y = 0; y < scale; ++y )
              {
                  for( GLint x = 0; x < scale; ++x )
                  {
                      buffer->Clear();

                      // Zoom in to the tile. The offsets are in NDC units, so they use the
                      // grid dimension (scale), not the tile's pixel size.
                      glLoadIdentity();
                      glTranslatef(scale - 2.0f*x - 1, scale - 2.0f*y - 1, 0.0f);
                      glScalef(scale, scale, 1);
                      glMultMatrixf(&pm[0]);

                      World::GetCurrent()->Render();
                      tex->GetPixels(reinterpret_cast<const char*>(tPixels));

                      // BGRA -> RGB
                      for( GLuint p = 0, t = 0; t < texSize; t += 4, p += 3 )
                      {
                          pixels[p] = tPixels[t + 2];
                          pixels[p + 1] = tPixels[t + 1];
                          pixels[p + 2] = tPixels[t];
                      }

                      // Flip image vertically
                      for( GLint row = 0; row < tileHeight / 2; row++ )
                      {
                          memcpy(line, pixels + (row * rowLength), rowLength);
                          memcpy(pixels + (row * rowLength), pixels + ((tileHeight - row - 1) * rowLength), rowLength);
                          memcpy(pixels + ((tileHeight - row - 1) * rowLength), line, rowLength);
                      }

                      // Write TGA
                      TGAHeader tgah = { 0 };
                      tgah.ImageType = 2;
                      tgah.ImageWidth = tileWidth;
                      tgah.ImageHeight = tileHeight;
                      tgah.PixelDepth = 24;

                      Stream* file = FileSystem::WriteFile(filename + "_" + to_string(x) + "_" + to_string(y) + ".tga");
                      if( file )
                      {
                          file->Write(&tgah, sizeof(TGAHeader));
                          file->Write(pixels, (tileWidth * 3) * tileHeight);
                          file->Release();
                          count++;
                      }
                  }
              }

              if( count == scale * scale ) // success only if every tile was written
                  ret = true;

              glPopMatrix();
              delete[] tPixels;
              delete[] pixels;
              delete[] line;
              Buffer::SetCurrent(oBuffer);
              buffer->GetColorTexture()->Release();
              buffer->Release();
              return ret;
          }
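      The zoom matrix in the snippet above can be sanity-checked numerically. For an n x n tile grid, the common tiled-rendering transform post-multiplies the projection with scale(n, n) followed by translate(n - 1 - 2*tx, n - 1 - 2*ty), which maps each tile's NDC sub-rectangle onto the full [-1, 1] range. tileTransform below is a hypothetical helper expressing just that 1-D mapping, not any Leadwerks or OpenGL API:

      ```cpp
      #include <cassert>

      // Apply the per-tile zoom to a single clip-space coordinate c for an
      // n x n grid, tile index t in [0, n):  c' = n*c + (n - 1 - 2*t).
      double tileTransform(double c, int n, int t)
      {
          return n * c + (n - 1 - 2.0 * t);
      }
      ```

      Tile t covers the NDC interval [-1 + 2t/n, -1 + 2(t+1)/n]; plugging either endpoint into tileTransform yields -1 and +1 respectively, which is what makes each tile fill the whole render target.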
  15. It is, but my aim was to low-end proof it - by stitching it together like that, someone with a low-end machine would still be able to get an 8K screenshot if they wanted to. That's essentially what I was going for: taking the view and chopping it up into, say, 3x3 screenshots, then using a similar method as above to move them all into a single image. I may have to just settle for this, as it works pretty well - it's just limiting on some machines.
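      The stitching step described above could look like this in plain C++. stitchTiles is a made-up helper operating on in-memory RGB tiles; a real implementation would read the per-tile TGA files back in first:

      ```cpp
      #include <algorithm>
      #include <cassert>
      #include <cstdint>
      #include <vector>

      // Stitch an n x n grid of equally sized RGB tiles into one large RGB image.
      // tiles[ty * n + tx] holds the tile for grid cell (tx, ty), row 0 at the top.
      std::vector<uint8_t> stitchTiles(const std::vector<std::vector<uint8_t>>& tiles,
                                       size_t n, size_t tileW, size_t tileH)
      {
          const size_t tileRow = tileW * 3;    // bytes per row inside one tile
          const size_t finalRow = tileRow * n; // bytes per row in the stitched image
          std::vector<uint8_t> out(finalRow * tileH * n);

          for( size_t ty = 0; ty < n; ++ty )
              for( size_t tx = 0; tx < n; ++tx )
                  for( size_t row = 0; row < tileH; ++row )
                  {
                      // Copy one tile row into its slot in the big image.
                      const uint8_t* src = tiles[ty * n + tx].data() + row * tileRow;
                      uint8_t* dst = out.data() + (ty * tileH + row) * finalRow + tx * tileRow;
                      std::copy(src, src + tileRow, dst);
                  }
          return out;
      }
      ```

      Memory is the design constraint here: only the final image and one tile row are touched at a time, so a low-end machine never needs the full-resolution framebuffer on the GPU, which is the whole point of the tiling approach.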