Trying to implement Render to Texture

dcousens
  • I'm having trouble implementing render to texture with OpenGL 3.

    My issue is that after rendering to the frame buffer, the rendered object appears deformed, which suggests a bad transformation is happening somewhere. That doesn't make sense, though, as the object renders fine when I don't use my frame buffer (see the bottom of this post).

    The current result looks like this:

    Current result http://k.minus.com/jZVgUuLYRtapv.jpg

    And the expected result is this (or something similar; the image has just been mocked up in GIMP): Expected http://k.minus.com/jA5rLM8lmXQYL.jpg

    This suggests that I'm doing something wrong in my frame buffer set-up code, or elsewhere, but I can't see what.


    The FBO is set up through the following function:

    unsigned int fbo_id;
    unsigned int depth_buffer;
    int m_FBOWidth, m_FBOHeight;
    unsigned int m_TextureID;
    
    void initFBO() {
        m_FBOWidth = screen_width;
        m_FBOHeight = screen_height;
    
        glGenRenderbuffers(1, &depth_buffer);
        glBindRenderbuffer(GL_RENDERBUFFER, depth_buffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, m_FBOWidth, m_FBOHeight);
    
        glGenTextures(1, &m_TextureID);
        glBindTexture(GL_TEXTURE_2D, m_TextureID);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, m_FBOWidth, m_FBOHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
        glBindTexture(GL_TEXTURE_2D, 0);
    
        glGenFramebuffers(1, &fbo_id);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo_id);
    
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_buffer);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_TextureID, 0);
    
        assert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }
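
    As a side note, the bare assert doesn't say much when it fires; a more talkative completeness check would look something like this (just a sketch of the usual status switch, not code from my project; needs <cstdio>):

    // Sketch: report why the FBO is incomplete instead of just asserting.
    void checkFBO() {
        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        switch (status) {
            case GL_FRAMEBUFFER_COMPLETE:
                break;
            case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
                printf("FBO: incomplete attachment\n"); break;
            case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
                printf("FBO: no attachments\n"); break;
            case GL_FRAMEBUFFER_UNSUPPORTED:
                printf("FBO: unsupported attachment combination\n"); break;
            default:
                printf("FBO: error 0x%x\n", status); break;
        }
    }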
    

    Here is my box-drawing code, which just takes a transformation matrix and calls the appropriate functions. Currently P holds a projection matrix and the view matrix V is an identity matrix.

    void drawBox(const Matrix4& M) {
        const Matrix4 MVP = M * V * P;
    
        if (boundshader) {
            glUniformMatrix4fv((*boundshader)("MVP"), 1, GL_FALSE, &MVP[0]);
        }
    
        glBindVertexArray(vaoID);
        glDrawElements(GL_TRIANGLES, sizeof(cube.polygon)/sizeof(cube.polygon[0]), GL_UNSIGNED_INT, 0);
    }
    
    void drawStaticBox() {
        Matrix4 M(1);
        translate(M, Vector3(0,0,-50));
    
        drawBox(M);
    }
    
    void drawRotatingBox() {
        Matrix4 M(1);
        rotate(M, rotation(Vector3(1, 0, 0), rotation_x));
        rotate(M, rotation(Vector3(0, 1, 0), rotation_y));
        rotate(M, rotation(Vector3(0, 0, 1), rotation_z));
        translate(M, Vector3(0,0,-50));
    
        drawBox(M);
    }
    

    And the display function called by GLUT.

    void OnRender() {
        /////////////////////////////////////////
        // Render to FBO
    
        glClearColor(0, 0, 0.2f,0);
    
        glBindFramebuffer(GL_FRAMEBUFFER, fbo_id);
        glViewport(0, 0, m_FBOWidth, m_FBOHeight);
        glClear(GL_COLOR_BUFFER_BIT| GL_DEPTH_BUFFER_BIT);
        GL_CHECK_ERRORS
    
        colorshader.Use();
        boundshader = &colorshader;
    
        drawRotatingBox();
    
        colorshader.UnUse();
    
        /////////////////////////////////////////
        // Render to Window
    
        glClearColor(0, 0, 0, 0);
    
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glViewport(0, 0, screen_width, screen_height);
        glClear(GL_COLOR_BUFFER_BIT| GL_DEPTH_BUFFER_BIT);
        GL_CHECK_ERRORS
    
        texshader.Use();
        boundshader = &texshader;
    
        glBindTexture(GL_TEXTURE_2D, m_TextureID);
        drawStaticBox();
    
        texshader.UnUse();
    
        // Swap le buffers
        glutSwapBuffers();
    }
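
    One thing not shown above is how the texshader's sampler uniform gets pointed at the texture; conceptually it's the usual texture-unit-0 hookup, roughly like the sketch below (the ("textureMap") lookup is assumed to mirror the ("MVP") lookup used in drawBox):

    // Sketch of the sampler hookup the fragment shader expects.
    // texshader("textureMap") mirrors (*boundshader)("MVP") above;
    // the actual shader wrapper API may differ.
    glActiveTexture(GL_TEXTURE0);                  // select unit 0
    glBindTexture(GL_TEXTURE_2D, m_TextureID);     // the FBO colour texture
    glUniform1i(texshader("textureMap"), 0);       // sampler2D -> unit 0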
    

    And... the obligatory texture shader code

    vertex

    #version 330
    
    in vec2 vUV;
    in vec3 vVertex;
    smooth out vec2 vTexCoord;
    
    uniform mat4 MVP;
    void main()
    {
       vTexCoord = vUV;
       gl_Position = MVP*vec4(vVertex,1);
    }
    

    fragment

    #version 330
    smooth in vec2 vTexCoord;
    out vec4 vFragColor;
    
    uniform sampler2D textureMap;
    
    void main(void)
    {
       vFragColor = texture(textureMap, vTexCoord);
    }
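
    The VAO/attribute setup for vVertex and vUV isn't shown above; for these shaders the wiring would be along the lines of the sketch below (the program handle, vboID and the interleaved layout are placeholders, not my actual code):

    // Illustrative attribute wiring for the vVertex / vUV shader inputs.
    // Assumes interleaved x,y,z,u,v float data in a hypothetical vboID.
    GLint posLoc = glGetAttribLocation(program, "vVertex");
    GLint uvLoc  = glGetAttribLocation(program, "vUV");
    glBindVertexArray(vaoID);
    glBindBuffer(GL_ARRAY_BUFFER, vboID);
    glEnableVertexAttribArray(posLoc);
    glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(uvLoc);
    glVertexAttribPointer(uvLoc, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)(3 * sizeof(float)));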
    

    For reference, the following is what is rendered when the FBO logic is not used (i.e. what should be ending up in the FBO): What is rendered to the FBO http://k.minus.com/jiP7kTOSLLvHk.jpg


    ... Help?

    Any ideas on what I may be doing wrong? Further source available on request.

  • Offhand, it looks like your UV coordinates are messed up. Try rendering the textured cube with just a static test image (see the checkerboard sketch below)?
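
    Something like this quick checkerboard works as a test image (a sketch only; bind the returned id in place of m_TextureID when drawing the static box):

    // Sketch: build a 64x64 black/white checkerboard to test the UVs with.
    GLuint makeCheckerTexture() {
        unsigned char pixels[64 * 64 * 4];
        for (int y = 0; y < 64; ++y)
            for (int x = 0; x < 64; ++x) {
                unsigned char c = ((x / 8 + y / 8) % 2) ? 255 : 0;
                unsigned char* p = &pixels[(y * 64 + x) * 4];
                p[0] = p[1] = p[2] = c;
                p[3] = 255;
            }
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glBindTexture(GL_TEXTURE_2D, 0);
        return tex;
    }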

  • It looks as if part of the cube is getting near-clipped in the FBO version, which leads me to suspect something wrong with the projection matrix and/or viewport. Double-check those, and maybe try explicitly setting glDepthRange(0, 1)?
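
    That is, something along these lines just before the FBO pass (a sketch only; the commented perspective() call stands in for however P is actually built):

    // Sketch: make the depth range explicit and size the projection to the FBO.
    glDepthRange(0.0, 1.0);
    glViewport(0, 0, m_FBOWidth, m_FBOHeight);
    // P = perspective(fov, (float)m_FBOWidth / m_FBOHeight, nearZ, farZ);  // placeholder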

Tags
c++ opengl c deferred-rendering