GPGPU vs. PhysX for physics simulation

  • GPGPU vs. PhysX for physics simulation (Notabene)

    First, a theoretical question. Which is better (faster): developing your own GPGPU techniques for physics simulation (cloth, fluids, collisions...) or using PhysX?
    (By "develop" I mean implementing existing algorithms, such as Navier-Stokes solvers.)

    I don't care about which will take more time to develop; what matters is which will be faster for the end user. As I understand it, PhysX is accelerated through PPU units on the GPU. Does that mean physics simulation can run in parallel with rasterization? Are PPUs different units from the unified shader units used for vertex/geometry/pixel/GPGPU shading?

    And a small practical question: is PhysX capable of sophisticated simulation on par with, say, Autodesk Maya's fluid solver?

    Are there any C++ GPU-accelerated physics frameworks to try? (I am interested in both PhysX and GPGPU; commercial engines are fine too.)

    Edit: The simulation does not have to be real-time; I'm thinking only about acceleration.
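To make the "implement it yourself" option concrete, here is a minimal sketch of the kind of data-parallel step a hand-rolled GPGPU simulation is built from: an explicit Euler integration of free-falling particles. The names (`integrate_particle`, `step`) and the CPU loop standing in for a GPU dispatch are my own illustration, not from any particular engine; on a real GPU the per-particle function body would become a compute-shader or CUDA kernel invoked once per particle.

```cpp
#include <cstddef>
#include <vector>

struct float3 { float x, y, z; };

// One "thread" of work: integrate a single particle. This body is what
// would be ported to a compute shader / CUDA kernel, one invocation per i.
void integrate_particle(std::size_t i,
                        std::vector<float3>& pos,
                        std::vector<float3>& vel,
                        float3 gravity, float dt)
{
    vel[i].x += gravity.x * dt;
    vel[i].y += gravity.y * dt;
    vel[i].z += gravity.z * dt;
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

// CPU stand-in for the GPU dispatch. Every iteration is independent of the
// others, which is exactly the property that lets the step run in parallel
// across the GPU's unified shader units.
void step(std::vector<float3>& pos, std::vector<float3>& vel, float dt)
{
    const float3 g{0.0f, -9.81f, 0.0f};
    for (std::size_t i = 0; i < pos.size(); ++i)
        integrate_particle(i, pos, vel, g, dt);
}
```

The point of the sketch is the structure, not the physics: if your per-element update only reads shared inputs and writes its own slot, it maps directly onto the GPU, whether you go through CUDA, OpenCL, or compute shaders.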

    If you have a "special case" where you know exactly what you want, then in theory you should be able to at least match, if not beat for your scenario, one of the generic physics engines that run on the GPU. Will you write something that beats the off-the-shelf solution in a short amount of time? No. They've had a lot of smart people doing nothing but that for a long time (plus the GPU manufacturers helping them optimise).

    The physics simulation will not run in parallel with rasterisation on the same graphics card: it uses the same hardware as vertex/pixel processing. If you have two cards, then I think that's the ideal scenario for doing both physics and rendering on the GPU (unless your visuals are so basic that they leave a lot of spare time on a fast GPU).

    I've not used any of the GPU physics systems myself, but I'd expect PhysX to be designed for real-time simulation work, whereas Maya's fluid solver doesn't have to run in real time and can therefore run a much more sophisticated simulation. That said, since you've dropped the real-time requirement, the only thing stopping PhysX from matching it would be how long you let the simulation run.
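On the fluids point specifically: grid-based Navier-Stokes solvers of the kind used in both games and offline tools typically spend most of their time in iterative linear solves (diffusion, pressure projection), and the reason they port well to the GPU is that a Jacobi iteration reads only the previous buffer and writes a new one. Below is a hedged, self-contained sketch of one such solve for 1D diffusion; the function names and the 1D simplification are mine, but the ping-pong structure is the same one a GPU solver uses with two textures.

```cpp
#include <cstddef>
#include <vector>

// One Jacobi sweep for the implicit 1D diffusion system
//   (1 + 2a) * x[i] - a * (x[i-1] + x[i+1]) = b[i]
// Every cell reads only the previous buffer (x_old), so all cells can be
// updated in parallel. This is why GPU fluid solvers prefer Jacobi over
// Gauss-Seidel, which has a sequential dependency between cells.
void jacobi_sweep(const std::vector<float>& x_old,
                  std::vector<float>& x_new,
                  const std::vector<float>& b,
                  float a)
{
    const std::size_t n = x_old.size();
    for (std::size_t i = 1; i + 1 < n; ++i)   // interior cells only
        x_new[i] = (b[i] + a * (x_old[i - 1] + x_old[i + 1]))
                   / (1.0f + 2.0f * a);
}

// Run a fixed number of sweeps, swapping buffers each time — the CPU
// analogue of ping-ponging between two GPU textures.
std::vector<float> diffuse(std::vector<float> x, float a, int iterations)
{
    std::vector<float> tmp(x.size(), 0.0f);
    const std::vector<float> b = x;           // right-hand side stays fixed
    for (int k = 0; k < iterations; ++k) {
        jacobi_sweep(x, tmp, b, a);
        x.swap(tmp);
    }
    return x;
}
```

Whether PhysX can reach Maya-level fidelity then largely comes down to grid resolution and iteration counts, which is a time budget question, not a capability one, once real-time is off the table.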

c++ physics physics-engine gpgpu