Why can't I get a bool packed and aligned into a D3D constant buffer?

  • KlashnikovKid

    Alright, I'm having a hard time getting a bool packed and aligned into an HLSL constant buffer, and I'm not sure why.

    Here is the buffer in HLSL:

    cbuffer MaterialBuffer : register(b1) {
        float3 materialDiffuseAlbedo;
        float  materialSpecularExponent;
        float3 materialSpecularAlbedo;
        bool isTextured;
    };
    

    And here it is in C++:

    struct GeometryBufferPass_MaterialBuffer {
        XMFLOAT3 diffuse;
        float specularExponent;
        XMFLOAT3 specular;
        bool isTextured;
    };
    

    I've tried moving the bool and padding the struct in all kinds of ways with no luck. What is the correct way to do this?

    Alright, I did some reading and noticed that an HLSL bool is essentially a 32-bit integer, so I just used an int in the C++ struct to solve my problem.

    struct GeometryBufferPass_MaterialBuffer {
        XMFLOAT3 diffuse;
        float specularExponent;
        XMFLOAT3 specular;
        int isTextured;
    };
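
    Incidentally, a couple of static_asserts are a cheap way to catch this kind of layout mismatch at compile time. This is only a sketch, assuming DirectXMath's XMFLOAT3 and the struct above:

    #include <cstddef>       // offsetof
    #include <DirectXMath.h> // XMFLOAT3
    using DirectX::XMFLOAT3;

    struct GeometryBufferPass_MaterialBuffer {
        XMFLOAT3 diffuse;        // bytes 0-11
        float specularExponent;  // bytes 12-15, fills out the first 16-byte register
        XMFLOAT3 specular;       // bytes 16-27
        int isTextured;          // bytes 28-31, matches the 4-byte HLSL bool
    };

    // The HLSL cbuffer occupies two 16-byte registers, so the CPU-side struct should too.
    static_assert(sizeof(GeometryBufferPass_MaterialBuffer) % 16 == 0,
                  "constant buffer data must be a multiple of 16 bytes");
    static_assert(offsetof(GeometryBufferPass_MaterialBuffer, isTextured) == 28,
                  "isTextured must sit where the HLSL bool does");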
    

    For efficiency, constant buffers are mapped so that values do not straddle GPU registers. Each register is four floats (16 bytes) in size, so constant buffer structures must be a multiple of 16 bytes on the GPU. Your C++ structure should be padded accordingly if you want to use it as a convenience for mapping data (note that this doesn't always scale well).

    Your issue, then, is that an HLSL bool is four bytes, but a C++ bool is one byte on the CPU side (in your particular implementation). This causes your C++ structure not to align properly: the significant bit of a boolean value (the 0 or 1 that matters) is stored in the least-significant byte of the value, and since the sizes don't agree, the location of that byte in memory will differ between the CPU and GPU versions of the structure.

    Manually inserting the appropriate padding and ensuring proper 16-byte alignment, or just using an appropriately sized type such as an integer, should fix the issue (see the sketch below). This thread may also be of use to you, as it contains a more in-depth discussion of roughly the same problem.
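
    In case it helps, here is a rough sketch of what the "pad it yourself" approach can look like for a buffer that doesn't naturally end on a register boundary. The buffer and its members are made up for illustration; only the padding idea carries over:

    #include <DirectXMath.h> // XMFLOAT3

    struct SomeOtherBuffer {                // hypothetical cbuffer mirror
        DirectX::XMFLOAT3 lightDirection;   // bytes 0-11
        float intensity;                    // bytes 12-15, completes register 0
        int useShadows;                     // bytes 16-19, a 4-byte stand-in for the HLSL bool
        float _padding[3];                  // bytes 20-31, fills register 1 so the size is a multiple of 16
    };
    static_assert(sizeof(SomeOtherBuffer) % 16 == 0,
                  "constant buffer data must be a multiple of 16 bytes");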

Tags
directx hlsl data-structure