Frustum Culling with VBOs

  • Frustum Culling with VBOs Krankzinnig

    I have terrain being rendered in my project using VBOs in OpenGL. I would like to apply some frustum culling, but I have no idea how to access each polygon as it's drawn to check whether it is in view. I think this is where octrees come into play, but I have no idea how this is really done. Does anyone know a good tutorial on this specific topic, or have some pseudocode for me?

    Thanks in advance!

  • More information on how you are storing the terrain data would be helpful here. Are you storing it in chunks? Are you constantly rebuilding the VBOs? Is all of the terrain in one VBO, with pieces of it rendered at a time?

    Trying to cull per polygon is probably going to introduce more overhead than it saves. The larger a chunk of terrain you can test at once, the quicker the culling will be, with the obvious trade-off of accuracy. In the end, you will need to do some profiling to figure out what the optimal point of that trade-off is, so creating a system that allows you to dynamically change how your terrain is broken up will be very helpful.
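    The chunk-level test described above can be sketched roughly as follows. This is a minimal illustration, not code from the answer; the `Vec3`/`Plane`/`AABB` types and the `aabbInFrustum` name are hypothetical, and the six frustum planes are assumed to be inward-facing.

    ```cpp
    #include <array>

    // Hypothetical minimal types for illustration.
    struct Vec3 { float x, y, z; };

    // A plane in the form ax + by + cz + d = 0, with (a,b,c) pointing inward.
    struct Plane { float a, b, c, d; };

    // Axis-aligned bounding box for one terrain chunk.
    struct AABB { Vec3 min, max; };

    // Signed distance from the plane to a point.
    static float distance(const Plane& p, const Vec3& v) {
        return p.a * v.x + p.b * v.y + p.c * v.z + p.d;
    }

    // Conservative AABB-vs-frustum test: for each plane, test the box corner
    // farthest along the plane normal (the "positive vertex"). If even that
    // corner is behind the plane, the whole box is outside the frustum.
    bool aabbInFrustum(const std::array<Plane, 6>& frustum, const AABB& box) {
        for (const Plane& p : frustum) {
            Vec3 positive = {
                p.a >= 0 ? box.max.x : box.min.x,
                p.b >= 0 ? box.max.y : box.min.y,
                p.c >= 0 ? box.max.z : box.min.z,
            };
            if (distance(p, positive) < 0.0f)
                return false; // entirely outside this plane: cull the chunk
        }
        return true; // intersects or is inside: worth drawing
    }
    ```

    Per frame you would walk the chunk list and issue a draw call (e.g. for that chunk's VBO range) only when the test passes; profiling then tells you the chunk size at which the culling cost pays for itself.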

  • A GPU already culls polygons on a per-polygon basis, but this happens at rasterization/triangle-setup time, which comes after vertex shading (the vertex/geometry/hull/domain shader stages). This means the GPU can still shade the vertices of polygons that are later culled; when that happens, the GPU has done a lot of work for nothing, wasting GPU cycles.

    For this reason, we try not to waste GPU cycles by culling invisible geometry on the CPU side (or on the GPU side, using DX10+ features such as DrawIndirect and/or predicated rendering) in batches of polygons, simply by not issuing the draw call for those batches.

    Typically that batch of polygons can be represented by a sphere or box that surrounds all those polygons. This bounding sphere/box is then tested against the frustum and if it touches the frustum, the draw call is invoked for those polygons. If not, the draw call is skipped. An octree is simply another way of determining which batches of geometry are visible, except it organizes that data in a hierarchical fashion. Octrees are not necessarily suitable for all geometry.
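    A minimal sketch of the bounding-sphere test described above, under stated assumptions: the `Plane`/`Sphere` types and the `sphereInFrustum` name are illustrative, and the six planes are assumed normalized and inward-facing (they can be extracted from the combined view-projection matrix).

    ```cpp
    #include <array>

    struct Vec3 { float x, y, z; };

    // Normalized, inward-facing plane: ax + by + cz + d = 0.
    struct Plane { float a, b, c, d; };

    // Bounding sphere around one batch of polygons.
    struct Sphere { Vec3 center; float radius; };

    // A batch's bounding sphere touches the frustum unless it lies completely
    // behind at least one of the six planes.
    bool sphereInFrustum(const std::array<Plane, 6>& frustum, const Sphere& s) {
        for (const Plane& p : frustum) {
            float dist = p.a * s.center.x + p.b * s.center.y
                       + p.c * s.center.z + p.d;
            if (dist < -s.radius)
                return false; // fully outside: skip the draw call for this batch
        }
        return true; // visible or intersecting: invoke the draw call
    }
    ```

    Because the planes are normalized, `dist` is a world-space distance, which is what makes the comparison against the sphere radius valid.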

    Several fairly good references that explain the sphere/frustum technique are available online.

c++ opengl optimization
