vector rotations for branches of a 3d tree

  • vector rotations for branches of a 3d tree user3643

    I'm attempting to create a 3d tree procedurally. I'm hoping that someone can check my vector rotation maths, as I'm a bit confused.

    I'm using an l-system (a recursive algorithm for generating branches).

    The trunk of the tree is the root node. Its orientation is aligned with the Y axis.

    In the next iteration of the tree (e.g. the first branches), I might create a branch that is rotated by, say, +10 degrees about the X axis and a similar amount about the Z axis, relative to the trunk.

    I know that I should keep a rotation matrix at each branch, so that it can be applied to child branches, along with any modifications to the child branch.

    My questions then:

    For the trunk, is the rotation matrix just the identity matrix multiplied by the initial orientation vector?

    For the first branch (and subsequent branches), do I "inherit" the rotation matrix of the parent branch and then apply this branch's own X and Z rotations to it?


    using glm::normalize;
    using glm::radians;
    using glm::rotate;
    using glm::rotateX;   // from <glm/gtx/rotate_vector.hpp>
    using glm::vec3;
    using glm::vec4;
    using glm::mat4;

    vec4 vYAxis    = vec4(0.0f, 1.0f, 0.0f, 0.0f);
    // initial orientation: the Y axis tilted 10 degrees about X
    vec4 vInitial  = normalize( rotateX( vYAxis, radians(10.0f) ) );
    // trunk rotation matrix: just the identity (no rotation yet);
    // note mat4 *= vec4 doesn't compile -- the orientation vector is
    // obtained by applying the matrix, not by multiplying into it
    mat4 mRotation = mat4(1.0f);
    // first branch = parent rotation matrix * this branch's rotations
    mRotation = rotate( mRotation, radians(10.0f), vec3(1.0f, 0.0f, 0.0f) ); // x rotation
    mRotation = rotate( mRotation, radians(10.0f), vec3(0.0f, 0.0f, 1.0f) ); // z rotation
    // the branch's direction is the accumulated matrix applied to the Y axis
    vec4 vBranchDir = mRotation * vYAxis;

    Are my maths and approach correct, or am I completely wrong?

    Finally, I'm using the glm library with OpenGL / C++ for this. Is the order of x rotation and z rotation important?

  • From what I can see your approach is good, and your maths seems alright as well. I would compile the code and check the result to make sure, though.

c++ opengl vector rotation glm