How can I make the switch from immediate mode to VBOs?

  • ultifinitus asks:

    I've been using OpenGL for a short time now, and I'd like some clarification on VBOs. As I understand it, a VBO is an object stored in VRAM. In immediate mode, to apply a texture we simply bind it and specify the texture coordinates before the vertex coordinates; however, it doesn't seem to work that way with VBOs.

    Question 1: Can I specify texture data in a VBO? What portion of the data?

    Normally when I make a game in 2D, I store all of my objects in instances of different classes, and then when my drawing routine comes up, I just draw the objects that are on screen. With a 3D environment it would be a bit more difficult to draw only the objects that are on screen, but with some math I'm sure it's doable.

    Question 2: Once we figure out which objects need to be rendered, would it be acceptably fast to send those corresponding objects an "apply" request, and have them apply themselves? Or is there a better way?

    Question 3: If I have a completely dynamic map, but with objects that won't change very often, are there any guidelines as to the performance differences between GL_STATIC_DRAW and GL_DYNAMIC_DRAW?

    Final Question: Should I even consider using vertex lists? Or are VBOs just a better option?

  • Answer 1:

    Of course you can; without texturing capabilities, vertex buffers would be next to useless. A VBO is just a chunk of binary data in the format you specify (position channel, normal channel, texcoord channel, etc.) that gets interpreted by a shader. When drawing an object, you are telling the GPU to fetch that batch of data and apply the operations specified in the shader (vertex transform, light calculation, texture fetches, etc.).

    So, say for example that you have a VBO with a quad stored in it. The VBO stores the position and texture coordinate info of the four vertices. Now the only thing you have to do is bind the VBO, bind an IBO to specify the two triangles that compose your quad, tell your shader to use a texture of your choice, and draw the whole thing. And now that the VBO and IBO are active, you only need to change the shader's texture parameter and call the draw function again to reuse your quad for a different object.
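
    As a minimal sketch of that setup (legacy fixed-function style, assuming an extension loader such as GLEW supplies the buffer-object entry points, and assuming a texture ID you created elsewhere), it could look roughly like this:

        #include <GL/glew.h>

        // One interleaved vertex: position followed by texture coordinate.
        struct QuadVertex {
            float x, y, z;   // position
            float u, v;      // texture coordinate
        };

        static const QuadVertex quadVerts[4] = {
            { -1.0f, -1.0f, 0.0f,  0.0f, 0.0f },
            {  1.0f, -1.0f, 0.0f,  1.0f, 0.0f },
            {  1.0f,  1.0f, 0.0f,  1.0f, 1.0f },
            { -1.0f,  1.0f, 0.0f,  0.0f, 1.0f },
        };
        static const GLushort quadIndices[6] = { 0, 1, 2,  2, 3, 0 };

        GLuint quadVbo = 0, quadIbo = 0;

        void createQuadBuffers()
        {
            // Upload vertex data into a VBO and index data into an IBO, once.
            glGenBuffers(1, &quadVbo);
            glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(quadVerts), quadVerts, GL_STATIC_DRAW);

            glGenBuffers(1, &quadIbo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, quadIbo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(quadIndices), quadIndices, GL_STATIC_DRAW);
        }

        void drawTexturedQuad(GLuint textureId)   // textureId is assumed to exist already
        {
            glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, quadIbo);

            // Describe how the interleaved data inside the VBO is laid out.
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(3, GL_FLOAT, sizeof(QuadVertex), (const void*)0);
            glTexCoordPointer(2, GL_FLOAT, sizeof(QuadVertex), (const void*)(3 * sizeof(float)));

            glBindTexture(GL_TEXTURE_2D, textureId);
            glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (const void*)0);

            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }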

    Answer 2:

    As described in Answer 1, once your VBO is created and stored in VRAM, you only have to update your shader variables and call glDrawElements again to redraw the object.
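
    For instance, reusing the quad buffers from the sketch above (state and names from that sketch assumed), drawing two differently textured copies is just two draw calls with a texture change in between:

        // Buffers, client state, and attribute pointers are assumed to be set up
        // as in drawTexturedQuad() above; only per-object state changes here.
        void drawTwoTexturedQuads(GLuint brickTexture, GLuint grassTexture)
        {
            glBindTexture(GL_TEXTURE_2D, brickTexture);
            glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (const void*)0);

            glPushMatrix();
            glTranslatef(2.5f, 0.0f, 0.0f);          // place the second copy elsewhere
            glBindTexture(GL_TEXTURE_2D, grassTexture);
            glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, (const void*)0);
            glPopMatrix();
        }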

    Answer 3:

    Well, you can apply GL_DYNAMIC_DRAW to the entire map, if that's what you want. The difference between the two options is that the driver uses the hint to decide where to store the buffer, keeping dynamic buffers in memory that is fast to rewrite, so if you use it for objects whose geometry won't be changed repeatedly, you are wasting that. The OpenGL reference for glBufferData explains which usage hint is best for your buffer.
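
    A rough sketch of how the two hints are typically used (buffer names here are illustrative): static geometry is uploaded once with GL_STATIC_DRAW, while a buffer you intend to rewrite is created with GL_DYNAMIC_DRAW and updated in place with glBufferSubData.

        #include <GL/glew.h>
        #include <vector>

        GLuint staticVbo = 0, dynamicVbo = 0;

        void createBuffers(const std::vector<float>& staticVerts,
                           const std::vector<float>& dynamicVerts)
        {
            glGenBuffers(1, &staticVbo);
            glBindBuffer(GL_ARRAY_BUFFER, staticVbo);
            glBufferData(GL_ARRAY_BUFFER, staticVerts.size() * sizeof(float),
                         staticVerts.data(), GL_STATIC_DRAW);    // uploaded once, drawn many times

            glGenBuffers(1, &dynamicVbo);
            glBindBuffer(GL_ARRAY_BUFFER, dynamicVbo);
            glBufferData(GL_ARRAY_BUFFER, dynamicVerts.size() * sizeof(float),
                         dynamicVerts.data(), GL_DYNAMIC_DRAW);  // intended to be rewritten often
        }

        void updateDynamicBuffer(const std::vector<float>& newVerts)
        {
            glBindBuffer(GL_ARRAY_BUFFER, dynamicVbo);
            // Overwrite the existing storage without reallocating the buffer.
            glBufferSubData(GL_ARRAY_BUFFER, 0,
                            newVerts.size() * sizeof(float), newVerts.data());
        }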

    Final answer:

    Plain vertex arrays (client-side arrays drawn with glDrawArrays) are just a more primitive version of VBOs: the data is still divided into channels, but it is re-uploaded to the GPU on every draw call. Just stick with VBOs if you want the best performance.
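
    To make the contrast concrete, here is a small sketch of both paths: with a plain client-side vertex array the pointer refers to system memory and the data is copied to the GPU on every draw call, whereas with a VBO bound the same pointer call is just an offset into data that already lives in VRAM.

        void drawClientSideArray(const float* verts, int vertexCount)
        {
            glBindBuffer(GL_ARRAY_BUFFER, 0);         // no VBO bound: plain vertex array
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, verts);   // data re-sent to the GPU each call
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
            glDisableClientState(GL_VERTEX_ARRAY);
        }

        void drawFromVbo(GLuint vbo, int vertexCount)
        {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);       // data already resides in VRAM
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, (const void*)0);
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
            glDisableClientState(GL_VERTEX_ARRAY);
        }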

  • Even though I'm completely satisfied with r2d2rigo's answer, I would like to explain how I visualize VBOs. :D

    First, even though VBO means Vertex Buffer Object, that doesn't mean it holds only vertex coordinates.

        A vertex has attributes such as position coordinates, texture
        coordinates, normals, and color. The whole thing makes a well-defined vertex.

    So you need to understand a VBO as something that can store the whole set of vertex attributes. Basically, a VBO is a "buffer" that you can fill with any of the vertex info, as in the sketch below.
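
    A minimal sketch of that idea (struct layout and names are illustrative, fixed-function style): one struct holds every attribute of a vertex, and a single VBO stores an array of such structs.

        #include <GL/glew.h>
        #include <cstddef>   // offsetof

        // One "well defined vertex": every attribute packed together.
        struct Vertex {
            float         position[3];
            float         texCoord[2];
            float         normal[3];
            unsigned char color[4];    // RGBA
        };

        void uploadMesh(const Vertex* vertices, int count, GLuint* outVbo)
        {
            glGenBuffers(1, outVbo);
            glBindBuffer(GL_ARRAY_BUFFER, *outVbo);
            glBufferData(GL_ARRAY_BUFFER, count * sizeof(Vertex), vertices, GL_STATIC_DRAW);
        }

        void setVertexLayout()
        {
            // Point each attribute at its offset inside the interleaved struct.
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const void*)offsetof(Vertex, position));
            glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (const void*)offsetof(Vertex, texCoord));
            glNormalPointer(GL_FLOAT, sizeof(Vertex), (const void*)offsetof(Vertex, normal));
            glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), (const void*)offsetof(Vertex, color));
        }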

    Answer 1: Yes (see the description above).

    Answer 2: VBOs are usually "faster" than plain vertex lists, as there is no need to push the vertex info every frame (if the data is static). Even if the underlying architecture is a unified memory architecture, there are still benefits, as the data is not overwritten unnecessarily.

    Answer 3: GL_STATIC_DRAW/GL_DYNAMIC_DRAW are just hints that tell the driver where to keep the buffers; the driver decides where to place them based on the flags you give, so keep this in mind for better throughput. But on a unified memory architecture (where the GPU shares main RAM), I think it will not make any significant difference.

Tags
opengl c++ vbo