downsampling algorithm

  • downsampling algorithm (asked by dotminic)

    What are the steps to perform downsampling on a texture? I've got as far as rendering the scene to a render target, but I'm not sure how to then render that to a smaller texture in order to blur it. I can't seem to find a good explanation or tutorial about this technique.

  • Render a large quad (with your texture) to a smaller render target, performing whatever blur/downsampling algorithm is appropriate.

    I.e. for each target pixel, sample however many texels from the source you want to combine.

    The simplest (and thus fastest, but ugliest) is the box filter, which usually takes 4 samples from the large texture and puts their average into a single texel/pixel in the target (a minimal HLSL sketch of this pass follows below). Repeat this step until the destination is a single texel (1x1) to get all mip-map levels for a texture.

    There are two-pass techniques that are more efficient if you don't need the intermediate textures and just want the final result; see this Gamasutra article.
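
    To make the box-filter pass concrete, here is a minimal HLSL sketch, assuming a Direct3D 10/11-style pixel shader drawn over a full-screen quad into a half-size render target. The resource names (SourceTex, LinearSampler, TexelSize) are placeholders, not anything from the original answer.

        // 4-tap box-filter downsample: average the 2x2 block of source texels
        // that maps onto each destination pixel.
        Texture2D    SourceTex     : register(t0);
        SamplerState LinearSampler : register(s0);

        cbuffer DownsampleParams : register(b0)
        {
            float2 TexelSize; // 1.0 / dimensions of the SOURCE texture
        };

        float4 PSDownsample(float4 pos : SV_Position,
                            float2 uv  : TEXCOORD0) : SV_Target
        {
            // uv is the centre of the destination pixel, which lands on the
            // corner shared by four source texels; offsetting by half a source
            // texel hits each of their centres exactly.
            float4 c = 0;
            c += SourceTex.Sample(LinearSampler, uv + TexelSize * float2(-0.5, -0.5));
            c += SourceTex.Sample(LinearSampler, uv + TexelSize * float2( 0.5, -0.5));
            c += SourceTex.Sample(LinearSampler, uv + TexelSize * float2(-0.5,  0.5));
            c += SourceTex.Sample(LinearSampler, uv + TexelSize * float2( 0.5,  0.5));
            return c * 0.25;
        }

    On the C++ side you would bind the half-size texture as the render target, bind the full-size texture as a shader resource, and draw a full-screen quad with this shader; repeating the pass with progressively smaller targets builds the chain described above. (With bilinear filtering enabled, a single sample placed at the shared corner averages the same four texels, so the shader can be collapsed to one tap.)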

Tags
c++ directx hlsl