I'm making a game with Box2D using a top-left coordinate system. I multiply positions by an M_TO_PX_RATIO of 10.0f to convert from meters to pixels.
I noticed that when I set gravity to 9.8, the simulation runs too slowly, as if in slow motion. When I set gravity to 9.8 * M_TO_PX_RATIO, the simulation runs at the correct speed, like normal gravity. However, the high gravity causes jittering.
Am I doing this properly? When using a top-left, pixel-based coordinate system, is there anything else I must do to account for the meters-to-pixels conversion?
Box2D is a meter-based physics engine, so it's best not to force it to "snap" to pixels. The easiest way around this is to let Box2D calculate all of the physics in meters in the background, and simply render your sprites to the screen at the closest pixel coordinate (I like to floor my x and y values to prevent sprite jitter).
Here's an extremely simple pseudo-code version of what I'm trying to explain:
// Draw function
mySprite.render(Floor(calculatedXLocation), Floor(calculatedYLocation));
This gives the illusion of the sprite being locked to the pixel grid, while letting your physics run however you want, without being hampered by an odd time-step or gravity value.