Megabyte Softworks
C++, OpenGL, Algorithms




Current series: OpenGL 3.3

Download (5.15 MB)
1311 downloads. 4 comments
28.) Fonts Pt. 2 - Upgraded

Hello! After another rather long break, I bring you yet another tutorial. This one is an upgrade of the previous tutorial 09.) Freetype Fonts And Ortho Projection. This time I made it a lot better, mainly performance-wise, but I also added a very important feature: Unicode support. That means you can start printing strings with all those special characters that don't exist in English but are specific to other languages (Slovak has many special characters, and I print some of them out in this tutorial). All this makes the tutorial a little specific and special, but in the end we will have fonts implemented in a better way. So let's go through the changes I made.

Performance-wise Upgrade

As I said before, the new version should run faster. The old version was a little awkward - it created a separate texture for every character. That means that for every character to be printed, we had to rebind a texture, and that is a real performance bottleneck (in general, binding anything in OpenGL is pretty slow). The new version works differently - instead of creating a texture for every single character, we create one big texture with many characters rendered into it. To make it more universal, I created two #defines in freetypefonts.h that set the number of characters in a single texture: CHARS_PER_TEXTURE and CHARS_PER_TEXTUREROOT, which is simply its square root (so the number must be a perfect square; you will see the use of this #define later). With CHARS_PER_TEXTURE set to 256, 256 characters fit in one texture, as you can see in this picture:

This covers the whole ASCII table and more. (Note the square characters - those are characters that don't have a usual graphical representation, for example the ASCII characters with values 2 or 3 (start of text, end of text), which are deprecated now. Don't worry about them - all characters that we're going to use are rendered nicely in our texture.) For this reason, we also need to change the texture coordinates of every rendered character - in the previous version, the texture coordinates were always [0, 1], [0, 0], [1, 1] and [1, 0], because every character was covered by a whole texture. Now it is different - every character has its own texture coordinates that we need to calculate (but it will be really easy). And even though we have only this one texture with all ASCII characters rendered into it, we need to bind the font texture only once and then use it throughout the whole rendering of (English) text! Just for comparison, you can see a texture with the first 1024 Unicode characters:

The whole thing is programmed so that if we needed, say, the first 15000 Unicode characters in our application and had set CHARS_PER_TEXTURE to 1024 and CHARS_PER_TEXTUREROOT to 32 (which is the case in the picture above), the application would compute 15000 / 1024 = 14.64, and this number rounded up (its ceiling) is the number of textures we need to store all 15000 characters - 15 in this case.

Now let's go through the actual changes in the code:


#pragma once

#include <ft2build.h>
#include FT_FREETYPE_H

#include "texture.h"
#include "shaders.h"
#include "vertexBufferObject.h"

/********************************

Class:      CFreeTypeFont

Purpose:   Wraps FreeType fonts and
         their usage with OpenGL.

********************************/

#define CHARS_PER_TEXTURE 1024
#define CHARS_PER_TEXTUREROOT 32

class CFreeTypeFont
{
public:
   bool LoadFont(string sFile, int iPXSize, int iMaxCharSupport = 128);
   bool LoadSystemFont(string sName, int iPXSize, int iMaxCharSupport = 128);

   int GetTextWidth(string sText, int iPXSize);

   void Print(string sText, int x, int y, int iPXSize = -1);
   void Print(wstring sText, int x, int y, int iPXSize = -1);
   void PrintFormatted(int x, int y, int iPXSize, char* sText, ...);
   void PrintFormatted(int x, int y, int iPXSize, wchar_t* sText, ...);

   void DeleteFont();

   void SetShaderProgram(CShaderProgram* a_shShaderProgram);

   CFreeTypeFont();
private:
   void CreateChar(int iIndex, GLubyte* bData);

   vector<CTexture> tCharTextures;
   vector<int> iAdvX, iAdvY;
   vector<int> iBearingX, iBearingY;
   vector<int> iCharWidth, iCharHeight;
   int iLoadedPixelSize, iNewLine;
   int iOneCharSquareSize;

   bool bLoaded;

   UINT uiVAO;
   CVertexBufferObject vboData;

   FT_Library ftLib;
   FT_Face ftFace;
   CShaderProgram* shShaderProgram;
};

The first important thing to notice is that we no longer have static 256-entry arrays for the character metrics - we now have vectors of ints that scale dynamically, depending on how many characters we want to create. Also, before the class definition there are the two defines CHARS_PER_TEXTURE and CHARS_PER_TEXTUREROOT. Notice the double definition of the Print function - one is for regular strings (1-byte chars) and one for wide strings. This is pretty easy to comprehend, so let's move on to more important stuff.

CreateChar()

void CFreeTypeFont::CreateChar(int iIndex, GLubyte* bData)
{
   FT_Load_Glyph(ftFace, FT_Get_Char_Index(ftFace, iIndex), FT_LOAD_DEFAULT);

   FT_Render_Glyph(ftFace->glyph, FT_RENDER_MODE_NORMAL);
   FT_Bitmap* pBitmap = &ftFace->glyph->bitmap;

   int iW = pBitmap->width, iH = pBitmap->rows;

   // Some characters, when rendered, are somehow bigger than our desired pixel size
   // In that case, I just ignore them - another solution is to set iOneCharSquareSize
   // in the LoadFont function to twice the size (just multiply by 2 and you're safe)
   if(iW > iOneCharSquareSize)
      return;
   if(iH > iOneCharSquareSize)
      return;

   int iRow = (iIndex%CHARS_PER_TEXTURE)/CHARS_PER_TEXTUREROOT;
   int iCol = (iIndex%CHARS_PER_TEXTURE)%CHARS_PER_TEXTUREROOT;
   int iOneTextureByteRowSize = CHARS_PER_TEXTUREROOT*iOneCharSquareSize;

   // Copy glyph data
   FOR(ch, iH)memcpy(bData+iRow*iOneTextureByteRowSize*iOneCharSquareSize + iCol*iOneCharSquareSize + ch*iOneTextureByteRowSize, pBitmap->buffer + (iH-ch-1)*iW, iW);

   // Calculate glyph data
   iAdvX[iIndex] = ftFace->glyph->advance.x>>6;
   iBearingX[iIndex] = ftFace->glyph->metrics.horiBearingX>>6;
   iCharWidth[iIndex] = ftFace->glyph->metrics.width>>6;

   iAdvY[iIndex] = (ftFace->glyph->metrics.height - ftFace->glyph->metrics.horiBearingY)>>6;
   iBearingY[iIndex] = ftFace->glyph->metrics.horiBearingY>>6;
   iCharHeight[iIndex] = ftFace->glyph->metrics.height>>6;

   iNewLine = max(iNewLine, int(ftFace->glyph->metrics.height>>6));

   glm::vec2 vQuad[] =
   {
      glm::vec2(0.0f, float(-iAdvY[iIndex]+iOneCharSquareSize)),
      glm::vec2(0.0f, float(-iAdvY[iIndex])),
      glm::vec2(float(iOneCharSquareSize), float(-iAdvY[iIndex]+iOneCharSquareSize)),
      glm::vec2(float(iOneCharSquareSize), float(-iAdvY[iIndex]))
   };
   float fOneStep = 1.0f/(float(CHARS_PER_TEXTUREROOT));
   // Texture coordinates change depending on character index, which determines its position in the texture
   glm::vec2 vTexQuad[] =
   {
      glm::vec2(float(iCol)*fOneStep, float(iRow+1)*fOneStep),
      glm::vec2(float(iCol)*fOneStep, float(iRow)*fOneStep),
      glm::vec2(float(iCol+1)*fOneStep, float(iRow+1)*fOneStep),
      glm::vec2(float(iCol+1)*fOneStep, float(iRow)*fOneStep)
   };

   // Add this char to VBO
   FOR(i, 4)
   {
      vboData.AddData(&vQuad[i], sizeof(glm::vec2));
      vboData.AddData(&vTexQuad[i], sizeof(glm::vec2));
   }
}

First we make a simple check whether the size of the character being loaded isn't somehow bigger than our desired rendered size (in pixels). I really didn't expect this to happen, but without this check the application would simply crash. I don't know exactly which characters these were, but in case you're interested, just put a breakpoint there and you'll find out. After this, three important variables are created - iRow, iCol and iOneTextureByteRowSize. The first two tell us the row and column in our texture where the character should be stored. For example, if you look at the first image with parameters 256 / 16, the letter 'A', whose ASCII code is 65, will end up in row 65/16 = 4 and column 65%16 = 1, thus row 4, column 1. To keep the coordinate systems consistent, I count rows from the bottom of the texture to the top. You can do it the other way around too, but then you must flip the OpenGL texture coordinates. Both ways are fine; it's the programmer's choice. The iOneTextureByteRowSize variable is how many bytes one whole pixel row of the texture takes in memory. This is pretty simple math - it is just the number of characters in a row times the pixel size of one character, which is set in the LoadFont function.

The only difficult part here is copying the glyph data to the right place in the texture. The copying is done row by row - each pixel row of the glyph is copied to the corresponding pixel row in the texture, so we use a for cycle to go through all the glyph rows. But we need to calculate the correct data offset from the start of the texture data. First we move to the correct cell row of the texture by adding iRow*iOneTextureByteRowSize*iOneCharSquareSize. Then we move to the correct column by adding iCol*iOneCharSquareSize. Finally, ch*iOneTextureByteRowSize moves us to the correct pixel row within the cell. The second memcpy parameter, the source of the copied data, is the corresponding row of the glyph, pBitmap->buffer + (iH-ch-1)*iW - we need to flip the rows, because otherwise the characters would appear upside-down. You can do this either by flipping the data here, or by flipping the texture coordinates, whatever suits you best. To be honest, I just tried combinations of data and texture coordinates until it worked properly, and once it did, I didn't care anymore - the main thing is that it is correct.

Next significant changes are in LoadFont() function, let's go through them:

LoadFont()

bool CFreeTypeFont::LoadFont(string sFile, int iPXSize, int iMaxCharSupport)
{
   FT_Error bError = FT_Init_FreeType(&ftLib);
   if(bError)return false;

   bError = FT_New_Face(ftLib, sFile.c_str(), 0, &ftFace);
   if(bError)return false;
   FT_Set_Pixel_Sizes(ftFace, iPXSize, iPXSize);
   iLoadedPixelSize = iPXSize;
   iOneCharSquareSize = next_p2(iLoadedPixelSize);

   // Neat trick - we need to calculate ceil(iMaxCharSupport/CHARS_PER_TEXTURE) and that calculation does it, more in article
   int iNumTextures = (iMaxCharSupport+CHARS_PER_TEXTURE-1)/CHARS_PER_TEXTURE;

   // One texture will store up to CHARS_PER_TEXTURE characters
   GLubyte** bTextureData = new GLubyte*[iNumTextures];

   tCharTextures.resize(iNumTextures);

   FOR(i, iNumTextures)
   {
      int iTextureDataSize = iOneCharSquareSize*iOneCharSquareSize*CHARS_PER_TEXTURE;
      bTextureData[i] = new GLubyte[iTextureDataSize];
      memset(bTextureData[i], 0, iTextureDataSize);
   }

   iAdvX.resize(iMaxCharSupport); iAdvY.resize(iMaxCharSupport);
   iBearingX.resize(iMaxCharSupport); iBearingY.resize(iMaxCharSupport);
   iCharWidth.resize(iMaxCharSupport); iCharHeight.resize(iMaxCharSupport);

   glGenVertexArrays(1, &uiVAO);
   glBindVertexArray(uiVAO);
   vboData.CreateVBO();
   vboData.BindVBO();

   FOR(i, iMaxCharSupport)CreateChar(i, bTextureData[i/CHARS_PER_TEXTURE]);
   bLoaded = true;

   FT_Done_Face(ftFace);
   FT_Done_FreeType(ftLib);

   FOR(i, iNumTextures)
   {
      tCharTextures[i].CreateFromData(bTextureData[i], iOneCharSquareSize*CHARS_PER_TEXTUREROOT, iOneCharSquareSize*CHARS_PER_TEXTUREROOT, 8, GL_DEPTH_COMPONENT, false);
      tCharTextures[i].SetFiltering(TEXTURE_FILTER_MAG_BILINEAR, TEXTURE_FILTER_MIN_BILINEAR);

      tCharTextures[i].SetSamplerParameter(GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      tCharTextures[i].SetSamplerParameter(GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
   }
   
   vboData.UploadDataToGPU(GL_STATIC_DRAW);
   glEnableVertexAttribArray(0);
   glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(glm::vec2)*2, 0);
   glEnableVertexAttribArray(1);
   glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(glm::vec2)*2, (void*)(sizeof(glm::vec2)));

   FOR(i, iNumTextures)
      delete[] bTextureData[i];

   delete[] bTextureData;

   return true;
}

There are several lines of code worth noticing. The first one is the iNumTextures variable, which determines how many textures we need to store the desired number of characters. There is a neat little trick there to calculate the ceiling of iMaxCharSupport / CHARS_PER_TEXTURE. Why the ceiling? Because if you have, say, 1024 characters per texture and all you want is 1024 characters, you need just one texture. But if you wanted 1025 characters, you would need two textures, as ceil(1025 / 1024) = ceil(1.000976) = 2.

Let's take it in general - you want to find the ceiling of A/B while working with integers. You could do it using the remainder, but there is a neater solution - if you add (B-1) to A and then divide by B, you get the ceiling of A/B. Why? When A is divisible by B, adding (B-1) changes nothing - the expression can be written as A/B + (B-1)/B, and integer division makes the second term 0. When A is not divisible by B, the remainder of A divided by B is at least 1 (call it X), and the expression becomes floor(A/B) + (B-1+X)/B. Since X is at least 1 and at most B-1, the value (B-1+X) lies between B and 2B-2, so the second term adds exactly the desired 1, and we get the ceiling of A/B. Neat trick, isn't it?

When this is done, all we need to do is allocate the appropriate number of textures to store the desired number of characters and then simply create each and every one of them. Don't forget to free the allocated memory after the textures are created - this is neither Java nor C#.

Last significant change is in Print() function, so let's have a look at it.

Print()

void CFreeTypeFont::Print(wstring sText, int x, int y, int iPXSize)
{
   if(!bLoaded)return;
   int iLastBoundTexture = -1;

   glBindVertexArray(uiVAO);
   shShaderProgram->SetUniform("gSampler", 0);
   glEnable(GL_BLEND);
   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
   int iCurX = x, iCurY = y;
   if(iPXSize == -1)iPXSize = iLoadedPixelSize;
   float fScale = float(iPXSize)/float(iLoadedPixelSize);
   FOR(i, ESZ(sText))
   {
      if(sText[i] == '\n')
      {
         iCurX = x;
         iCurY -= iNewLine*iPXSize/iLoadedPixelSize;
         continue;
      }
      int iIndex = int(sText[i]);
      int iTextureNeeded = iIndex/CHARS_PER_TEXTURE;
      if(iTextureNeeded < 0 || iTextureNeeded >= ESZ(tCharTextures))
         continue;
      if(iTextureNeeded != iLastBoundTexture)
      {
         iLastBoundTexture = iTextureNeeded;
         tCharTextures[iTextureNeeded].BindTexture();
      }
      iCurX += iBearingX[iIndex]*iPXSize/iLoadedPixelSize;
      if(sText[i] != ' ')
      {
         glm::mat4 mModelView = glm::translate(glm::mat4(1.0f), glm::vec3(float(iCurX), float(iCurY), 0.0f));
         mModelView = glm::scale(mModelView, glm::vec3(fScale));
         shShaderProgram->SetUniform("matrices.modelViewMatrix", mModelView);
         // Draw character
         glDrawArrays(GL_TRIANGLE_STRIP, iIndex*4, 4);
      }

      iCurX += (iAdvX[iIndex]-iBearingX[iIndex])*iPXSize/iLoadedPixelSize;
   }
   glDisable(GL_BLEND);
}

The difference from the previous tutorial is that we no longer re-bind a texture for every character, which was horribly slow. Now all we need is to find which texture holds the character, and only when it differs from the currently bound texture do we rebind - otherwise no rebinding is needed at all. Rebinding happens when you print, for example, regular characters like letters or numbers and then go for something wild like the Euro (€) symbol, which has a much higher code. The variable iLastBoundTexture serves this purpose.

You may have noticed wstring being passed as a parameter. This is the wide-string version of the classical std::string, used to store multi-byte characters. It is a string of wchar_t characters. The size of wchar_t is compiler-dependent - it is typically 2 bytes on Windows, but 4 bytes on Linux and macOS - so be aware of this fact.

Result

The scene didn't change much from the previous tutorial (it's not important anyway), but you can see some special characters being printed out:

Another pretty easy tutorial of mine, which hopefully taught you a better way to print fonts. I really hope you enjoyed it and learned something new. What tutorial is going to be next? Not even I know. But I would like it to be on the webpage by mid-August (2015, as this is the time of writing), although my ETAs are about as reliable as those random guaranteed online money-making ads.



ryuanlu on 14.08.2015 05:42:19
Try signed distance field for text scaling.

http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
yuraSniper on 24.06.2015 18:32:58
What about text scaling? Will the text scale nicely using these texture atlases(to some limited range of course)? What glyph resolution is "best" for good scaling range(for example from 8x8 to 32x32 pixels)?