I'm making a (non-isometric) side scrolling 2D game and I want each fragment that I draw to cast a small drop shadow when it is near another object. What sort of algorithms are used in fragment shaders to cast shadows in 2D games?
It really depends on how your game graphics are set up and how you want this effect to work. In 2D, "shadows" aren't uniquely defined, so you need to decide what sort of behaviour you want from them.
If you're looking for something like drop shadows in Photoshop, then you first need the relative depths of objects to be defined somehow, either individually or in bins (e.g. foreground, middle, background).
To actually cast a drop shadow, a simple method is to render all onscreen shadow-casting objects to a texture with a simple shader that outputs opaque black for solid pixels and transparent (zero alpha) for empty pixels; then apply this "shadow buffer" with multiply blending to the shadow-receiving objects. You can offset the shadow buffer so the drop shadow falls, say, down and to the left.
If you want objects to both cast and receive shadows (i.e. object-by-object shadows instead of layer-by-layer), you could try a similar approach where you iterate over all objects from front to back, rendering each object with the current shadow buffer applied and then adding the object to the shadow buffer. This would involve a lot of texture binding/unbinding, though (assuming you have hundreds of objects), so I'm not sure it's a good solution.
You need to provide much better information to get a useful answer; the question is vague. (What does "near" mean in 2D? That doesn't fit with how shadows work at all...)
The lazy, easy solution is to not use shaders at all: draw everything twice, a first pass with a flat colour offset down and to the left or right (this is the drop shadow), then render everything normally over the top. You can combine this with a blur (using a shader if you must) between the two passes for a bit of polish, and unless you are on a mobile platform or pushing lots of triangles, the expense should be fine.
If your scene has depth (you really need to clarify this; it is easily the most important point), then you might want to use depth testing so shadows are not cast onto "nothing". If that is not quite enough, you could vary brightness and blur depending on the depth of the existing scene fragments versus the fragments of the object currently being drawn. Varying shadows based on depth is fairly complicated: you typically need to resolve and sample a render target to do it, as well as write a vertex and fragment program, in which case you might as well just use shadow mapping (which is well researched and documented; Google it if you decide it is appropriate).
I have a 2D tile map, where each tile is marked as blocking or non-blocking. Each tile is 16x16 pixels and sub-tile movement is possible. I'm attempting to generate a navigation mesh from this data, at run-time, using C++. Is there any well known library that can be used to generate a navigation mesh, given a 2D tile map? Alternatively, is there any algorithm that can be used to intelligently generate a navigation mesh from the tile data? As far as I can tell, all of the popular solutions are aimed at 3D maps and don't perform well when given the simplified case of a 2D map.
...or ideas. Finally, I was wondering if it was possible to use the source code of an existing 3D engine such as Bullet and transform it to be 2D-based, or use the source code of a 2D rigid-body physics... I've decided to code my own 2D soft-body physics engine in C++, since apparently none exist, and I'm starting with only a general idea/understanding of how physics works and could be simulated: by giving points, and connections between points, properties such as elasticity, density, mass, shape retention, friction, stickiness, etc. What I want is a starting point: resources and helpful...
I know that if you want to display a sprite on screen (in 2D) you can use glOrtho and essentially make 1 GL unit equal to 1 pixel, so when I plot out the vertices for, say, a 128x128 image (on a quad), I can define the vertices as (-64, 64), (-64, -64), etc., and then when I map my texture coords to that quad, the image is displayed at a 1:1 ratio. However, let's say I don't want to use glOrtho and want a perspective view instead, so I can combine 2D sprites with 3D models and whatnot. I'm at a loss on how to convert/set up the coordinates for the planes/quads I want to draw images to, in a way
I'm looking into building a cross-platform, open-source 2D RPG-style game engine for ChaiScript. I want to be able to do all of the graphics with SVG, and I need joystick input. I also need the libraries I use to be open source and compatible with the BSD license. I'm familiar with Allegro, ClanLib, and SDL. As far as I can tell, none of these libraries has built-in or obvious integration for SVG... support in a 2D C++ library while minimizing dependencies as much as possible (preferably avoiding Qt altogether)?
I'm trying to make a simple 2D animation file format. It'll be very rudimentary: only an XML file containing some parameters (such as frame duration) and metadata, plus some images, each representing a frame. I'd like to have the whole animation (frames and XML document) packed into a single file. How do you suggest I do that? What libraries would allow easy access to the files inside the animation file itself? The language I'm using is C++ and the platform is Windows, but I'd rather not use a platform-dependent library, if possible.
I'm using a vertex array to draw 2D geometry, but I can't achieve smoothness. This is the code I'm using:

    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, shared_colors);
    glVertexPointer(3, GL_FLOAT, 0, shared_vertex);
    glDrawArrays(GL_LINES, 0, shared_counter);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisable(GL_BLEND);

Some advice?
After searching for a long time, I'm surprised this question was not asked yet. In a 2D, tiled-map game, how do you handle the map? I'd be glad to have your point of view in any language, though I'm most interested in C++ implementations. A 2D array, a 2D vector, a class handling a linked list with ad hoc computing to handle coordinates, a boost::matrix...? What solution do you use, and why?
I'm wondering if there is a simple way to convert 3D coordinates to 2D coordinates and, if possible, to convert in the reverse direction. I'm using OpenGL (GLUT) in my C++ project. I am also using SFML for the 2D information (sprites, text, etc.). I found out that I can use gluProject(), but I have no idea how to use it. I'm asking for a simple example of using gluProject(), or another method, to convert 3D coordinates (such as the player's position) to 2D coordinates. Once I have the basic process, I'm confident I can figure out the rest.
..., could be added to levels. Whether I do this hinges on whether I can get the FOV routine I'm using working with the layers. Note that I don't care that it doesn't make sense to some of you that I'm trying to have a 3D field-of-view routine for a 2D roguelike. The roguelike does have a 3D space, and I want to determine what the player can see, even if what the player can see can't logically be displayed; let me worry about how to handle that problem. I would be grateful for some advice on how to approach this. Perhaps this is utterly crazy, and I should just make a 2D roguelike... Update
I am a beginner-to-intermediate C++/Java programmer and my long-term goal is to be a game programmer. I have decided to start with 2D and work my way towards 3D. I would like to begin with SDL, but I am wondering if that is maybe not such a great idea. Given that I am working towards 3D, would it be advisable to use SDL, or to jump straight into OpenGL without the Z axis?