Formula for replicating glTexGen in OpenGL ES 2.0 GLSL

  • Formula for replicating glTexGen in OpenGL ES 2.0 GLSL visualjc

    I also posted this on the main Stack Exchange site, but this seems like a better place, so forgive me for the double post if it shows up twice.

    I have been trying for several hours to implement a GLSL replacement for glTexGen with GL_OBJECT_LINEAR for OpenGL ES 2.0. In desktop OpenGL's GLSL there is gl_TextureMatrix, which makes this easier, but that's not available in OpenGL ES 2.0 / OpenGL ES Shading Language 1.0.

    Several sites have mentioned that this should be "easy" to do in a GLSL vertex shader, but I just cannot get it to work.

    My hunch is that I'm not setting the planes up correctly, or I'm missing something in my understanding.

    I've pored over the web, but most sites are talking about projected textures; I'm just looking to create UVs based on a planar projection. The models are being built in Maya, have 50k polygons, and the modeler is using planar mapping, but Maya will not export the UVs. So I'm trying to figure this out.

    I've looked at the glTexGen manpage information:

    g = p1*xo + p2*yo + p3*zo + p4*wo

    What is g? Is g the value of s in the texture2D call?

    I've looked at the site:

    Mathematics of glTexGen

    Another site explains the same function:

    coord = P1*X + P2*Y + P3*Z + P4*W

    I don't get how coord (a UV vec2, in my mind) can be equal to a dot product (a scalar value). Same problem I had before with "g".

    What do I set the plane to be? In my OpenGL 3.0 C++ code, I set it to [0, 0, 1, 0] (basically unit z) and glTexGen works great.

    I'm still missing something.

    My vertex shader looks basically like this (WVPMatrix is the world-view-projection matrix; POSITION is the model vertex position):

    uniform mat4 WVPMatrix;
    attribute vec3 POSITION;
    varying vec4 kOutBaseTCoord;
    void main()
    {
        gl_Position = WVPMatrix * vec4(POSITION, 1.0);
        vec4 sPlane = vec4(1.0, 0.0, 0.0, 0.0);
        vec4 tPlane = vec4(0.0, 1.0, 0.0, 0.0);
        vec4 rPlane = vec4(0.0, 0.0, 0.0, 0.0);
        vec4 qPlane = vec4(0.0, 0.0, 0.0, 0.0);
        kOutBaseTCoord.s = dot(vec4(POSITION, 1.0), sPlane);
        kOutBaseTCoord.t = dot(vec4(POSITION, 1.0), tPlane);
        //kOutBaseTCoord.p = dot(vec4(POSITION, 1.0), rPlane); // third texcoord component is .p, not .r
        //kOutBaseTCoord.q = dot(vec4(POSITION, 1.0), qPlane);
    }

    The frag shader

    precision mediump float;
    uniform sampler2D BaseSampler;
    varying mediump vec4 kOutBaseTCoord;
    void main()
    {
        //gl_FragColor = vec4(kOutBaseTCoord.st, 0.0, 1.0); // debug: visualize the texcoords
        gl_FragColor = texture2D(BaseSampler, kOutBaseTCoord.st);
    }

    I've also tried texture2DProj in the fragment shader.

    Here are some of the other links I've looked up

    TexGen not working with GLSL, with fixed pipeline is ok

  • The scalar product is a special case of the more general matrix multiplication. In the case of

    coord = P1*X + P2*Y + P3*Z + P4*W

    imagine that P1 to P4 are actually vectors the same size as your texture coordinates. You can also think of P1 to P4 as the columns of your object-coordinates-to-texture-coordinates matrix. In your case, where you want 2D coordinates, it'd be a 2x4 matrix. The rows of this matrix are what you would ordinarily submit as the planes when using glTexGen. The first 3 elements in each row are the direction of the respective texture axis. The length of this direction is inversely proportional to the stretch of the texture along that axis (i.e. the longer the direction, the greater the texture coordinates, so the texture appears smaller and repeats more often if you use GL_REPEAT). You need to pick those directions orthogonal to your plane's normal! The last component in each row is used to move the texture around on that plane.

    I'm not entirely sure that this explanation is what you're looking for, but I hope it helps.

opengl-es glsl