Ray Intersecting Plane Formula in C++/DirectX

  • joelretdev

    I'm developing a picking system that uses rays to intersect volumes, and I'm having trouble with ray intersection against a plane. I was able to figure out spheres fairly easily, but planes are giving me trouble. I've read through various sources and keep getting hung up on some of the variables used in their explanations.

    Here is a snippet of my code:

      bool Picking()
      {
          D3DXVECTOR3 vec;
          D3DXVECTOR3 vRayDir;
          D3DXVECTOR3 vRayOrig;
          D3DXVECTOR3 vROO, vROD;   // ray origin and direction in object space
          D3DXMATRIX m;
          D3DXMATRIX mInverse;
          D3DXMATRIX worldMat;

          // Obtain projection matrix
          D3DXMATRIX pMatProj = CDirectXRenderer::GetInstance()->Director()->Proj();
          // Obtain mouse position
          D3DXVECTOR3 pos = CGUIManager::GetInstance()->GUIObjectList.front().pos;

          // Get window width & height
          float w = CDirectXRenderer::GetInstance()->GetWidth();
          float h = CDirectXRenderer::GetInstance()->GetHeight();

          // Transform the mouse position from screen space into view space
          vec.x =  (((2.0f * pos.x) / w) - 1.0f) / pMatProj._11;
          vec.y = -(((2.0f * pos.y) / h) - 1.0f) / pMatProj._22;
          vec.z = 1.0f;

          // Create a view inverse matrix
          D3DXMatrixInverse(&m, NULL, &CDirectXRenderer::GetInstance()->Director()->View());

          // Determine the ray's direction (rotate vec by the inverse view matrix)
          vRayDir.x = vec.x * m._11 + vec.y * m._21 + vec.z * m._31;
          vRayDir.y = vec.x * m._12 + vec.y * m._22 + vec.z * m._32;
          vRayDir.z = vec.x * m._13 + vec.y * m._23 + vec.z * m._33;

          // Determine the ray's origin (the camera position in world space)
          vRayOrig.x = m._41;
          vRayOrig.y = m._42;
          vRayOrig.z = m._43;

          // Using the identity world matrix for now; normally this would be
          // the picked object's world transform
          D3DXMatrixIdentity(&worldMat);
          //worldMat = aliveActors[0]->GetTrans();
          D3DXMatrixInverse(&mInverse, NULL, &worldMat);

          // Transform the ray into the object's local space
          D3DXVec3TransformCoord(&vROO, &vRayOrig, &mInverse);
          D3DXVec3TransformNormal(&vROD, &vRayDir, &mInverse);
          D3DXVec3Normalize(&vROD, &vROD);
    

    With this code I'm able to detect a ray intersection with a sphere, but I have questions about determining an intersection with a plane. First off, should I be using my vRayOrig & vRayDir variables for the plane intersection tests, or should I be using the new vectors that were created for use in object space?

    When looking at a site like this for example: http://www.tar.hu/gamealgorithms/ch22lev1sec2.html

    I'm curious what D is in the equation Ax + By + Cz + D = 0 and how it factors into determining a plane intersection.

    Any help will be appreciated, thanks.

  • For your first question, about vRayOrig & vRayDir: use whichever pair matches the space your plane is defined in -- the plane and the ray must be defined in the same space, or the result of the intersection test will be meaningless. So either define or transform the plane into the same space as the ray, or transform the ray into the plane's space.
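    For example, if your plane lives in world space but you want to test it against the object-space ray (vROO / vROD), you can move the plane into object space first. Here's a rough sketch of that, assuming the plane is stored as a hypothetical D3DXPLANE called planeWorld (not part of the code above); note that D3DXPlaneTransform expects the inverse-transpose of the transform you want to apply, so to apply the inverse world matrix we pass the transpose of the world matrix:

        #include <d3dx9.h>

        // Hypothetical helper: bring a world-space plane into an object's
        // local space so it can be tested against the object-space ray
        // (vROO / vROD from the snippet above).
        D3DXPLANE PlaneWorldToObject(const D3DXPLANE& planeWorld,
                                     const D3DXMATRIX& worldMat)
        {
            // D3DXPlaneTransform wants the inverse-transpose of the transform
            // being applied to the plane.  We want to apply inverse(worldMat),
            // whose inverse-transpose is simply transpose(worldMat).
            D3DXMATRIX worldTranspose;
            D3DXMatrixTranspose(&worldTranspose, &worldMat);

            D3DXPLANE planeObject;
            D3DXPlaneTransform(&planeObject, &planeWorld, &worldTranspose);

            // Re-normalize so (a, b, c) is unit length and d is a true
            // signed distance again.
            D3DXPlaneNormalize(&planeObject, &planeObject);
            return planeObject;
        }

    Alternatively, skip the object-space transform entirely and test the world-space ray (vRayOrig / vRayDir) against the world-space plane -- either way, just keep both in the same space.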

    For the second question, that's the implicit equation of a plane. D itself is just a constant for the plane; when the plane is normalized (i.e. the normal (A, B, C) has unit length), |D| can be interpreted as the distance from the plane to the origin, and its sign tells you which side of the plane the origin is on -- but you need to be careful with that interpretation.

    Look at it like this. A plane can be defined by a normal vector N and a point P on the plane. Every vector lying in the plane is perpendicular to the normal, which is to say the vector from P to any point T on the plane, (T - P), is perpendicular to N. The dot product of two perpendicular vectors is zero. Thus:

                              N dot (T - P) = 0
    Nx(Tx - Px) + Ny(Ty - Py) + Nz(Tz - Pz) = 0    (dot product)
    NxTx - NxPx + NyTy - NyPy + NzTz - NzPz = 0    (distribute)
    NxTx + NyTy + NzTz - NxPx - NyPy - NzPz = 0    (collect and group)
    

    In case it's not clear, "Nx" is the X component of N, et cetera. Now that we've got that expansion, remember that (A,B,C) are often the names of the plane's normal's components, so we can rename:

    ATx + BTy + CTz - APx - BPy - CPz = 0    (rename)
    

    That should look a little more familiar. If we call the last three terms D, we end up with ATx + BTy + CTz + D = 0, or in general Ax + By + Cz + D = 0 -- so D is in fact (-APx - BPy - CPz), i.e. -(N dot P), where P is the point that defines the plane.
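    In code, D is just the negated dot product of the normal with any point on the plane. A tiny sketch using the same D3DX types (the values are placeholders, not from the question):

        #include <d3dx9.h>

        // Build D for the implicit plane Ax + By + Cz + D = 0 from a unit
        // normal N = (A, B, C) and any point P known to lie on the plane:
        // D = -(N dot P).
        float PlaneD(const D3DXVECTOR3& N, const D3DXVECTOR3& P)
        {
            return -D3DXVec3Dot(&N, &P);
        }

        // Example: a horizontal plane through y = 5 has N = (0, 1, 0) and
        // P = (0, 5, 0), so D = -5 and the equation is y - 5 = 0.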

    This is useful for ray intersection because (Ax + By + Cz + D) evaluates to zero for any point (x,y,z) on the plane -- and the point where the ray crosses the plane is, by definition, on the plane. That's the computation the page you linked to is demonstrating. For any point (x,y,z) that isn't on the plane, (Ax + By + Cz + D) evaluates to a nonzero value.
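    To turn that into an actual picking test, substitute the ray X = orig + t * dir into the plane equation and solve for t: N dot orig + t * (N dot dir) + D = 0, so t = -(N dot orig + D) / (N dot dir). A rough sketch with the D3DX types from the question (planeN and planeD are assumed inputs describing the plane, not variables from the original code):

        #include <cmath>
        #include <d3dx9.h>

        // Intersect the ray (orig + t * dir) with the plane N.X + D = 0.
        // Returns false if the ray is parallel to the plane or the plane is
        // behind the ray origin; otherwise writes t and the hit point.
        bool RayPlaneIntersect(const D3DXVECTOR3& orig, const D3DXVECTOR3& dir,
                               const D3DXVECTOR3& planeN, float planeD,
                               float& tOut, D3DXVECTOR3& hitOut)
        {
            float denom = D3DXVec3Dot(&planeN, &dir);
            if (fabsf(denom) < 1e-6f)        // ray runs parallel to the plane
                return false;

            float t = -(D3DXVec3Dot(&planeN, &orig) + planeD) / denom;
            if (t < 0.0f)                    // intersection lies behind the ray
                return false;

            tOut   = t;
            hitOut = orig + dir * t;         // the point where the ray meets the plane
            return true;
        }

    Whether you feed this vRayOrig / vRayDir or vROO / vROD only matters in that planeN and planeD have to be expressed in the same space as the ray you pass in.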

Tags
c++ 3d mathematics linear-algebra
