Creating Font Textures in Direct3d without D3DX

  • NtscCobalt

    Update: The completed solution using Nathan Reed's answer is posted in my answer below

    A few open source programs I've seen that render installed fonts do something like this...

    Create a texture for the font
    Draw font to texture using native calls
    Store the coordinates for each character
    Create a vertex buffer with a quad for each character
    Render the vertex buffer using the font texture
    

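    The five steps above can be sketched roughly like this. The type and function names (`GlyphCoords`, `Vertex`, `buildQuads`) are illustrative, not the asker's actual code; a real version would copy the result into a Direct3D vertex buffer:

```cpp
#include <vector>

// Normalized texture coordinates for one character in the font atlas
// (hypothetical name; this is what step 3 of the list stores).
struct GlyphCoords { float left, top, right, bottom; };

// One screen-space vertex with a texture coordinate (step 4).
struct Vertex { float x, y, u, v; };

// Build two triangles (6 vertices) per character of `text`, advancing a
// pen position horizontally. `coords` is indexed by ASCII code minus 32,
// matching the loop in the question. Fixed glyph size is a simplification.
std::vector<Vertex> buildQuads(const char* text,
                               const GlyphCoords* coords,
                               float glyphW, float glyphH)
{
    std::vector<Vertex> verts;
    float penX = 0.0f;
    for (const char* p = text; *p; ++p)
    {
        const GlyphCoords& g = coords[*p - 32];
        float x0 = penX, x1 = penX + glyphW;
        float y0 = 0.0f, y1 = glyphH;
        // Two triangles covering the quad.
        verts.push_back({x0, y0, g.left,  g.top});
        verts.push_back({x1, y0, g.right, g.top});
        verts.push_back({x0, y1, g.left,  g.bottom});
        verts.push_back({x1, y0, g.right, g.top});
        verts.push_back({x1, y1, g.right, g.bottom});
        verts.push_back({x0, y1, g.left,  g.bottom});
        penX += glyphW;
    }
    return verts;
}
```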
    But I'm really having trouble getting this to work consistently. I measure Win32 font sizes with GetTextExtentPoint32A(), but it does NOT account for italics and seems to cut off some of the stranger fonts.

    Here is an example output from my code. Notice the cut off edge of the Q and a few other characters. A much more exaggerated version happens when rendering italics.

    http://i30.photobucket.com/albums/c308/thentsc/font_test.jpg

    The code looks something like this...

    uint32 x = 0, y = 0;
    char str[2] = "\0";
    for (uint8 c = 0; c < 127 - 32; c++)
    {
        str[0] = c + 32;
        SIZE size;
        GetTextExtentPoint32A(hDC, str, 1, &size);
        if (x + size.cx + 1 > m_TextureSize)
        {
            x = 0;
            // 1 pixel vertical margin
            y += size.cy + 1;
        }
    
        ExtTextOutA(hDC, x, y, ETO_OPAQUE | ETO_CLIPPED, NULL, str, 1, NULL);
        m_fTexCoords[c].Left   = x / ((float) m_TextureSize);
        m_fTexCoords[c].Top    = y / ((float) m_TextureSize);
        m_fTexCoords[c].Right  = (x + size.cx) / ((float) m_TextureSize);
        m_fTexCoords[c].Bottom = (y + size.cy) / ((float) m_TextureSize);
    
        // 1 pixel horizontal margin
        x += size.cx + 1;
    }
    
    

    The image above is actually just a render of the whole texture, stretched from (-1,-1,0) to (1,1,0) without using the stored texture coordinates, to show the clipping caused by GetTextExtentPoint32A.

    Again, this works fine for some fonts but looks terrible for others, and it doesn't support italics.

    It is designed to work in a system like this...

    cFont* g_Font;
    cTextObject* g_TextObject;
    
    Setup()
    {
        g_Font = SetupFont("Arial", BOLD | ITALIC, 24);
        g_TextObject = CreateTextObject("The quick brown fox...", g_Font, X, Y);
    }
    
    OnRender()
    {
         g_TextObject->Render();
    }
    

    So, in general, my question is: what is the correct approach to this problem?

    Should I just give each text object its own texture and, on creation, use GDI to render all of its text to it (which would sidestep the italics and font-face problems)?

    But wouldn't texture memory usage then get out of hand?
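    To put rough, illustrative numbers on the memory concern (the sizes below are assumptions, not measurements): one shared 256×256 8-bit alpha atlas is a fixed 64 KiB, while even a small per-object strip grows linearly with the number of strings on screen.

```cpp
#include <cstddef>

// Bytes for a w x h texture at `bpp` bytes per pixel.
constexpr std::size_t textureBytes(std::size_t w, std::size_t h, std::size_t bpp)
{
    return w * h * bpp;
}

// One shared 256x256 8-bit alpha atlas: 64 KiB total, regardless of
// how many text objects exist.
constexpr std::size_t atlasCost = textureBytes(256, 256, 1);

// Per-object textures: a hypothetical 256x64 strip per string adds up
// linearly with the number of text objects.
constexpr std::size_t perObjectCost(std::size_t numTextObjects)
{
    return numTextObjects * textureBytes(256, 64, 1);
}
```

    With 100 text objects the per-object scheme already costs about 1.6 MB against the atlas's 64 KiB, which is why the atlas approach is usually preferred.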

  • I think your problem is that GetTextExtentPoint retrieves the "advance width", or how far the cursor would be advanced horizontally by inserting a particular bit of text. But some fonts (especially italics) contain overhangs - where the character extends a little bit further than the advance width.

    You can get more detailed information by using GetCharABCWidths. This retrieves the three widths called A, B, and C; I believe the exact width of the drawn character is the B width, which should be what you want. A and C are extra space to the left and right of the character respectively, and are negative in the case of overhangs. (The advance width is A + B + C.)

    If you're interested in precise typography you may also want to look at GetKerningPairs, which will tell you how to adjust the spacing when specific letters are adjacent. For instance A and V are normally slid a bit closer together when appearing adjacent: compare "AV" vs "A‌ V". See the difference? (In the second one I put in a Unicode zero-width space character that breaks the kerning without adding any space of its own.)
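    The A/B/C arithmetic described above can be modeled without windows.h to see why the advance width clips italics. This struct mirrors the layout of the Win32 ABC struct; the helper names are mine:

```cpp
// Mirrors the Win32 ABC struct filled in by GetCharABCWidths.
struct ABC { int abcA; unsigned int abcB; int abcC; };

// Advance width: how far the pen moves after the character
// (this is what GetTextExtentPoint32 effectively measures).
int advanceWidth(const ABC& w) { return w.abcA + (int)w.abcB + w.abcC; }

// Inked width of the glyph itself: just the B width.
unsigned int drawnWidth(const ABC& w) { return w.abcB; }

// A negative C width means the ink overhangs the advance width on the
// right, which is exactly what gets clipped when an atlas cell is
// sized by the advance width alone.
bool hasRightOverhang(const ABC& w) { return w.abcC < 0; }
```

    For a hypothetical italic glyph with A = 1, B = 14, C = -3, the advance width is only 12 pixels while the ink is 14 pixels wide, so a cell sized by the advance width trims 2 pixels off the right edge.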

  • Alright, so after about two hours of coding and testing, here are my findings and my final solution. It works for every TrueType font I tried, at any weight and any slant (italics), and it also works with CLEARTYPE_QUALITY or ANTIALIASED_QUALITY without AA bleeding between characters (as far as I can physically see).

    This does not fix any problems with underline and strikeout not being contiguous between characters.

    The major ah-ha moment was when I realized that ExtTextOutA actually paints the glyph with the negative left margin (abcA) included. This means a character can come out with its left side trimmed if you draw it at x = 0; you have to account for that offset when rendering the characters.

    Without further ado here is the solution:

    uint32 x = 0, y = 0;
    char str[2] = "\0";
    for (uint8 c = 0; c < MAX_FONT_CHARS; c++)
    {
        ABC* CharW = &abcWidths[c]; // abcWidths was filled earlier with GetCharABCWidthsA
    
        // GetTextExtentPoint32 used for character height
        str[0] = c + 32;
        SIZE Size;
        if (!GetTextExtentPoint32A(hDC, str, 1, &Size))
        {
            Warning("GetTextExtentPoint32A failed for character '%s'", str);
        }
    
        // The left starting point is the negated left margin; it is used as an offset for ExtTextOutA and for calculating Right
        uint32 Left = (CharW->abcA < 0 ? -CharW->abcA : 0);
    
        // The right is the Left, plus the character width, plus the left margin, plus the right overhang if it is positive
        uint32 Right = Left + CharW->abcB + CharW->abcA + (CharW->abcC > 0 ? CharW->abcC : 0);
    
        // Wrap around if this font would bleed off the edge
        if (x + Right > m_TextureSize)
        {
            x = 0;
            y += Size.cy;
        }
    
        // Draw a border where we expect the object to be drawn
        Rectangle(hDC, x, y, x + Right, y + Size.cy); 
    
        // Print the character
        ExtTextOutA(hDC, x + Left, y, ETO_CLIPPED | ETO_OPAQUE, NULL, str, 1, NULL);
        // Store the texture coordinates
        m_fTexCoords[c].Left = x / ((float) m_TextureSize);
        m_fTexCoords[c].Top = y / ((float) m_TextureSize);
        m_fTexCoords[c].Right = (x + Right) / ((float) m_TextureSize);
        m_fTexCoords[c].Bottom = (y + Size.cy) / ((float) m_TextureSize);
    
        // Increment by the character width and margin
        x += Right;
    }
    

    The rectangle (x, y) to (x+Right, y+Size.cy) exactly matches the area written by ExtTextOut. I used this function in conjunction with Photoshop and some transparency to double-check.

    You still need to properly store and use the abcA (left margin) and abcC (right margin) if you want proper overhangs and underhangs; here they are used only to find the drawn width.
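    The cell arithmetic from the loop above can be isolated into small helpers to make it easier to verify (the struct mirrors the Win32 ABC struct; the function names are mine, not part of the original code):

```cpp
// Mirrors the Win32 ABC struct filled in by GetCharABCWidths.
struct ABC { int abcA; unsigned int abcB; int abcC; };

// Offset inside the atlas cell at which ExtTextOut must be called so a
// glyph with a negative left margin (abcA < 0) is not clipped.
int cellDrawOffset(const ABC& w)
{
    return w.abcA < 0 ? -w.abcA : 0;
}

// Total cell width: the offset, plus the inked width, plus the left
// margin, plus the right margin when it sticks out past the ink.
// This matches the Right computation in the loop above.
int cellWidth(const ABC& w)
{
    return cellDrawOffset(w) + (int)w.abcB + w.abcA
         + (w.abcC > 0 ? w.abcC : 0);
}

// At render time the pen still advances by A + B + C (ignoring
// kerning), even though the cell stored in the atlas may be wider.
int renderAdvance(const ABC& w)
{
    return w.abcA + (int)w.abcB + w.abcC;
}
```

    For example, a hypothetical italic glyph with A = -2, B = 10, C = -1 gets drawn 2 pixels into its cell, occupies a 10-pixel cell, but only advances the pen by 7 pixels at render time.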

Tags
c++ directx fonts