Why are my scene's depth values not being written to my DepthStencilView?

dotminic

    I'm rendering to a depth map in order to use it as a shader resource view, but when I sample the depth map in my shader, the red component has a value of 1 while all other channels have a value of 0.

    The Texture2D I use to create the DepthStencilView is bound with the D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE flags, the DepthStencilView has the DXGI_FORMAT_D32_FLOAT format, and the ShaderResourceView uses DXGI_FORMAT_R32_FLOAT with a D3D11_SRV_DIMENSION_TEXTURE2D view dimension.

    I'm setting the depth map as the render target, drawing my scene, and once that is done I set the back buffer render target and depth stencil on the output merger and use the depth map's shader resource view as a texture in my shader (roughly the ordering sketched after the creation code below), but the depth value in the red channel is constantly 1. I'm not getting any runtime errors from D3D, and no compile-time warnings or anything.

    I'm not sure what I'm missing here at all. I have the impression the depth value is always being set to 1.

    I have not set any depth/stencil states, and AFAICT depth writing is enabled by default. The geometry is being rendered correctly, so I'm pretty sure depth writing is enabled.

    The device is created with the appropriate debug flags:

    #if defined(DEBUG) || defined(_DEBUG)
        // D3D11_RLDO_DETAIL belongs to ID3D11Debug::ReportLiveDeviceObjects, not to
        // device creation, so only the debug flag goes here.
        deviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
    #endif
    

    This is how I create my depth map; I've omitted error checking for the sake of brevity:

    // The underlying texture is typeless so the same resource can be viewed as a
    // depth buffer (D32_FLOAT) and as a shader resource (R32_FLOAT).
    D3D11_TEXTURE2D_DESC td;
    
    td.Width     = width;
    td.Height    = height;
    td.MipLevels = 1;
    td.ArraySize = 1;
    td.Format    = DXGI_FORMAT_R32_TYPELESS;
    td.SampleDesc.Count   = 1;
    td.SampleDesc.Quality = 0;
    td.Usage          = D3D11_USAGE_DEFAULT;
    td.BindFlags      = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
    td.CPUAccessFlags = 0;
    td.MiscFlags      = 0;
    _device->CreateTexture2D(&td, 0, &this->_depthMap);
    
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvd;
    ZeroMemory(&dsvd, sizeof(dsvd));
    dsvd.Format = DXGI_FORMAT_D32_FLOAT;
    dsvd.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
    dsvd.Texture2D.MipSlice = 0;
    _device->CreateDepthStencilView(this->_depthMap, &dsvd, &this->_dmapDSV);
    
    D3D11_SHADER_RESOURCE_VIEW_DESC srvd;
    ZeroMemory(&srvd, sizeof(srvd));
    srvd.Format = DXGI_FORMAT_R32_FLOAT;
    srvd.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    srvd.Texture2D.MipLevels = td.MipLevels;
    srvd.Texture2D.MostDetailedMip = 0;
    _device->CreateShaderResourceView(this->_depthMap, &srvd, &this->_dmapSRV);
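
    For reference, this is roughly the per-frame ordering I described above; _context, _backBufferRTV and _backBufferDSV are placeholders for my actual members, and I've left out the draw calls:

    // Pass 1: render the scene's depth into the depth map.
    _context->ClearDepthStencilView(this->_dmapDSV, D3D11_CLEAR_DEPTH, 1.0f, 0);
    ID3D11RenderTargetView *nullRTV = nullptr;
    _context->OMSetRenderTargets(1, &nullRTV, this->_dmapDSV);
    // ... draw the scene ...

    // Pass 2: switch back to the back buffer; this also unbinds the depth map
    // DSV, which must not still be bound while its SRV is used below.
    _context->OMSetRenderTargets(1, &_backBufferRTV, _backBufferDSV);
    _context->PSSetShaderResources(0, 1, &this->_dmapSRV);
    // ... draw the pass that samples the depth map ...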
    

  • Depth-writing being enabled is definitely the first thing to check. If I had $5 for every time I thought something was definitely enabled by default...

    So try explicitly creating and binding a depth-stencil state with depth writes enabled, just to be absolutely certain. Also double-check your depth range and ensure the viewport isn't set to something like MinDepth = 1 and MaxDepth = 1 (which would give the same result even with depth writing enabled).
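
    A minimal sketch of what I mean, assuming device and context are your ID3D11Device and immediate context, and width/height are whatever your depth map uses:

    // Explicit depth-stencil state with depth testing and depth writes enabled.
    D3D11_DEPTH_STENCIL_DESC dsd = {};
    dsd.DepthEnable    = TRUE;
    dsd.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    dsd.DepthFunc      = D3D11_COMPARISON_LESS;
    dsd.StencilEnable  = FALSE;

    ID3D11DepthStencilState *dss = nullptr;
    device->CreateDepthStencilState(&dsd, &dss);
    context->OMSetDepthStencilState(dss, 0);

    // And make sure the viewport's depth range actually spans [0, 1].
    D3D11_VIEWPORT vp = {};
    vp.Width    = static_cast<float>(width);
    vp.Height   = static_cast<float>(height);
    vp.MinDepth = 0.0f;
    vp.MaxDepth = 1.0f;
    context->RSSetViewports(1, &vp);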

Tags
c++ directx11 depth-buffer