Hello guys! This is the 24th tutorial from my series. Finally, we're getting to the really interesting and cool stuff. Today it is animation, more specifically keyframe animation using good old MD2 models. So far, our applications consisted of pretty static scenes with an occasional rotation of objects. But today we will take this to a whole new level - we will animate the models themselves! Are you ready to make that step? Are you fully committed to pushing your skills to the next level? Okay, that might have been a little exaggerated, but animations are definitely cool and let us do cool stuff, like games. Let's do this.
Keyframe animation is one of the easiest animation methods and also one of the first methods developed to animate 3D models. The idea is to create a starting and an ending frame and then calculate the intermediate frames at an arbitrary position in-between them using the start and end frame data. The parameter that usually controls the state of the intermediate frame is time - the time required for the model to pass from the starting frame to the ending frame. So if a model has to get from the starting to the ending frame in 2 seconds, then the intermediate model at time 0s is the starting frame, at time 1s it is the frame exactly in the middle between the starting and ending frame, and finally at time 2s it is the ending frame. In the following image, you can see a sample starting and ending frame of a model with 4 vertices:
As you can see in the image above, we can calculate every single frame in between the starting and ending frame just by having the total animation running time (2 seconds in this case) and the current time. Every value in between has been calculated by INTERPOLATION, in this case a simple linear interpolation. There are actually other types of interpolation as well (for example spherical), but this time we will go with the simplest one, i.e. linear interpolation.
Given 2 endpoint values v0 and v1 (in our case two vertices) and a parameter t in the range <0..1>, linear interpolation gives us an arbitrary point on the line segment defined by these two endpoints, controlled by the parameter t, while the following conditions hold:
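v(t) = v0 + t·(v1 - v0), so that v(0) = v0, v(1) = v1, and every t in between yields the corresponding point on the segment between them.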
So basically what is happening here is that we go from one point to another along a straight line, thus the name linear interpolation. How do we apply it to the animation then? We do this for every single vertex in the model! For every single vertex in the starting frame, we also have one in the ending frame, and thus we are able to do linear interpolation between the two frames! This doesn't apply only to vertex positions - this way, we can also animate normals or even texture coordinates!
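Just to illustrate the idea in code, here is a minimal sketch using glm, which these tutorials already rely on (this helper is only illustrative and not part of the final class):

#include <glm/glm.hpp>
#include <vector>

// Interpolate all vertices of a model between two keyframes.
// t = 0.0 returns the start frame, t = 1.0 the end frame.
std::vector<glm::vec3> InterpolateFrames(const std::vector<glm::vec3>& vStart,
                                         const std::vector<glm::vec3>& vEnd, float t)
{
	std::vector<glm::vec3> vResult(vStart.size());
	for (size_t i = 0; i < vStart.size(); ++i)
		vResult[i] = glm::mix(vStart[i], vEnd[i], t); // vStart + t*(vEnd - vStart)
	return vResult;
}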
However, having only two keyframes for an animation would really not look very nice. Imagine an animation where you rotate something. For example, if we could rotate our heads by 180 degrees (we can't, but now imagine we can), the animation from two keyframes would look like this: the face would get sucked into the head itself and then reappear on the other side of the head. So the important lesson here is - when making a keyframe animation, two keyframes are just not enough. But if we approximate the animation by 20 frames or so and then interpolate between consecutive keyframes one by one, the animation will look smoother and better!
And this is exactly what MD2 models are about. Every MD2 model has animations consisting of several keyframes and also defines how long each animation should last. We are going to explore the MD2 model file format right now.
The MD2 file format is a little bit complex and explaining every single structure would be very exhausting, especially when there is already an article explaining it pretty much in detail. It's an article by David Henry - MD2 file format (Quake 2's models) - and lots of structures used in my tutorial are taken from there with full permission from the author, so I shouldn't be sued or anything. So before reading further, try to read through his article first; this tutorial then continues with loading of MD2 files and preparing the appropriate OpenGL structures for it. Below is our MD2 model class:
class CMD2Model
{
public:
void LoadModel(char* sFilename);
void RenderModel(animState_t* animState);
animState_t StartAnimation(animType_t type);
void UpdateAnimation(animState_t* animState, float fTimePassed);
void PauseAnimation();
void StopAnimation();
static anim_t animlist[21];
private:
UINT uiModelVAO;
vector<UINT> uiFramesBuffer;
md2_t header;
vector< vector<glm::vec3> > vVertices; // Vertices extracted for every frame
vector< vector<int> > vNormals; // Normal indices extracted for every frame
vector<int> glCommands; // Rendering OpenGL commands
vector<CVertexBufferObject> vboFrameVertices; // All frames (keyframes) of model
CVertexBufferObject vboTextureCoords; // Texture coords are same for all frames
vector<int> renderModes; // Rendering modes
vector<int> numRenderVertices; // with number of vertices
CTexture tSkin;
UINT uiVAO;
};
Because most of the difficult stuff is in the LoadModel function, we will go through it part by part, because it's waaaaaay too long to explain at once. Here is the first part - reading the frames of the model:
void CMD2Model::LoadModel(char* sFilename)
{
FILE* fp = fopen(sFilename, "rb");
fread(&header, sizeof(md2_t), 1, fp); // Read header where all info about model is stored
char* buffer = new char[header.num_frames * header.framesize]; // Read all frame data to one big buffer
fseek(fp, header.ofs_frames, SEEK_SET);
fread(buffer, sizeof(char), header.num_frames * header.framesize, fp);
vVertices.resize(header.num_frames, vector<glm::vec3>(header.num_xyz)); // Allocate space for vertices
vNormals.resize(header.num_frames, vector<int>(header.num_xyz)); // And normals
// Extract vertices and normals from frame data
FOR(i, header.num_frames)
{
frame_t* frame_ptr = (frame_t*)&buffer[header.framesize * i]; // Convert buffer to frame_t pointer
FOR(j, header.num_xyz)
{
vVertices[i][j].x = frame_ptr->translate[0] + (float(frame_ptr->verts[j].v[0]) * frame_ptr->scale[0]);
vVertices[i][j].y = frame_ptr->translate[1] + (float(frame_ptr->verts[j].v[1]) * frame_ptr->scale[1]);
vVertices[i][j].z = frame_ptr->translate[2] + (float(frame_ptr->verts[j].v[2]) * frame_ptr->scale[2]);
vNormals[i][j] = frame_ptr->verts[j].lightnormalindex;
}
}
//...
}
After we open the MD2 file, we need to read the header. The header has all the info about the model as well as offsets in the file to the different model data (frames, rendering etc...). When we have read the header, we can proceed with reading all the FRAMES of the model. First of all, a frame is just a single keyframe of the MD2 model. Different frames belong to different model animations - this will be explained later, for now we just need to read in all the frames. We do it by offsetting the file reader to the start of the frame data - this is defined by header.ofs_frames. The number of frames in the model is header.num_frames and the size of one frame (in bytes) is header.framesize. With these two integers we know that if we want to read all the frames at once, their total size (in bytes) is header.num_frames * header.framesize. Because of this we allocate a single buffer of chars (because the size of one char is 1 byte) of size header.num_frames * header.framesize, into which we read all the frame data using only one fread command.
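For reference, this is the header structure from David Henry's article where all those counts and offsets come from:

typedef struct
{
	int ident;      // magic number, must be "IDP2"
	int version;    // MD2 version, must be 8
	int skinwidth;  // width of the texture
	int skinheight; // height of the texture
	int framesize;  // size of one frame in bytes
	int num_skins;  // number of textures
	int num_xyz;    // number of vertices per frame
	int num_st;     // number of texture coordinates
	int num_tris;   // number of triangles
	int num_glcmds; // number of OpenGL commands
	int num_frames; // total number of frames
	int ofs_skins;  // offset to skin names
	int ofs_st;     // offset to s-t texture coordinates
	int ofs_tris;   // offset to triangles
	int ofs_frames; // offset to frame data
	int ofs_glcmds; // offset to OpenGL commands
	int ofs_end;    // offset to end of file
} md2_t;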
Next we need to extract the data from this buffer. So first, we allocate sufficient space to store vertices and normals. We store them in a vector< vector<glm::vec3> >, so we basically create a 2D array, which we access as vVertices[frame_index][vertex_index]. The same goes for normals. However, the way the extraction works may be a little bit confusing, because we convert our buffer (a char pointer) to a frame_t pointer. This is nothing difficult, but have a look at the definition of frame_t, the structure where frame data are stored:
typedef struct
{
float scale[3]; // scale values
float translate[3]; // translation vector
char name[16]; // frame name
vertex_t verts[1]; // first vertex of this frame
} frame_t;
You can see one very suspicious thing - the last member verts[1]. It's an array of vertices of size 1. Whaaaaat? How is it possible that later we access it as a normal array with indices greater than 0 and all the vertices are stored there? Why isn't there a pointer or something? Let me explain. A pointer to vertices could point absolutely anywhere in memory. However, by having that one first vertex at the end of the struct, we have ensured that this array truly lies at the end of frame_t in memory and simply continues beyond frame_t itself. So the memory layout of a whole frame looks as follows:
So it is really nothing else than just some ease of access to the vertices themselves. There are other ways to do the very same thing, but I left it this way. To be honest, I'm not exactly sure whether all these structs are the original MD2 structures also used in the Quake 2 source code (which is available online for free BTW, link at the end of the tutorial) or whether these structures were made by David Henry, the author of the linked MD2 article. Anyway, it doesn't matter, because it works and it is an interesting and unusual approach, although it may confuse people at first. But for that very reason, we cannot simply advance frame_ptr (which points to a frame) like we normally advance pointers, because it simply wouldn't work correctly. Increasing the pointer value by 1 (which should intuitively point to the next frame) will actually move frame_ptr by the constant size of 44 bytes, which is the size of the frame_t structure, while one real frame occupies header.framesize bytes. We don't move the pointer manually at all - instead, we recompute it at the start of every FOR loop iteration to point to the correct position.
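In code, the difference looks like this (a small illustrative sketch, not part of the tutorial's source):

// WRONG: frame_ptr++ advances by sizeof(frame_t), i.e. 44 bytes,
// because the compiler has no idea how many vertices really follow
frame_ptr++;

// CORRECT: one frame really occupies header.framesize bytes in the buffer,
// which is exactly what the FOR loop above computes for every i
frame_t* next_frame_ptr = (frame_t*)((char*)frame_ptr + header.framesize);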
Now that we know how to extract the vertex data, we will convert it to our structures. Notice this - the vertex_t structure consists of 3 unsigned chars for coordinates and 1 unsigned char for the normal. What does this mean? How can we define a vertex position with unsigned chars? This might seem a little unusual today, but in 1997 the need to save disk space was a lot bigger than it is today (I remember my first HDD having 1.18 GB in 1996, good times). For this reason, the MD2 model file format spends only 1 byte per vertex dimension, so you can store only 256 different values there. Today, we can safely store whole floats (4 bytes), and everything is alright. But these 256 values are also scaled and translated. Each frame also has a translation and a scale vector, by which we multiply and offset all the BYTE values. This way we get from 256 different values to... 256 values, but as floating point numbers, stretched or shrunk a little, and then translated by the translation vector that is written in the file for every frame.
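This is the vertex_t structure (again taken from David Henry's article), along with a made-up numeric example of how one compressed vertex is decompressed:

typedef struct
{
	unsigned char v[3];             // compressed vertex coordinates (0..255 each)
	unsigned char lightnormalindex; // index into the precomputed normal table
} vertex_t;

// Example with scale = (0.1, 0.1, 0.1), translate = (-12.0, -12.0, -2.0)
// and a stored vertex v = (120, 64, 200):
//   x = -12.0 + 120 * 0.1 =  0.0
//   y = -12.0 +  64 * 0.1 = -5.6
//   z =  -2.0 + 200 * 0.1 = 18.0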
Now you should understand how to extract the vertex data. The last thing we need to get is the per-vertex normal. The normal is also stored as 1 unsigned char - so only 1 byte to encode a normal? You ask how this is possible? Easily - there is a global precomputed array of unit normals in the anorms.h file (162 of them in Quake 2), which contains the MD2 model normals. So this 1 byte is only an index into that array, nothing else.
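Roughly, the table and the lookup look like this (only the first entry is shown; the full table comes from Quake 2's anorms.h):

// anorms.h - table of precomputed unit normals
float anorms[162][3] =
{
	{ -0.525731f, 0.000000f, 0.850651f },
	// ... the remaining 161 entries
};

// The 1-byte lightnormalindex simply indexes this table:
// vNormals[i][j] = frame_ptr->verts[j].lightnormalindex;
// and later &anorms[vNormals[k][vi]] is uploaded to the VBO as the real normal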
This closes the vertex and normal reading part, so let's move further in loading - now we will load the info about how to render the model:
void CMD2Model::LoadModel(char* sFilename)
{
//...
// Now let's read OpenGL rendering commands
glCommands.resize(header.num_glcmds);
fseek(fp, header.ofs_glcmds, SEEK_SET);
fread(&glCommands[0], sizeof(int), header.num_glcmds, fp);
int i = 0;
int iTotalVertices = 0;
// And start with creating VBOs for vertices, texture coordinates and normals
vboFrameVertices.resize(header.num_frames);
FOR(i, header.num_frames)vboFrameVertices[i].CreateVBO();
vboTextureCoords.CreateVBO();
while(1) // We loop until we come to a 0 value, which marks the end of the OpenGL commands
{
int action = glCommands[i];
if(action == 0)break;
int renderMode = action < 0 ? GL_TRIANGLE_FAN : GL_TRIANGLE_STRIP; // Extract rendering mode
int numVertices = action < 0 ? -action : action; // And number of vertices
i++;
renderModes.push_back(renderMode); // Remember the values
numRenderVertices.push_back(numVertices);
FOR(j, numVertices)
{
float s = *((float*)(&glCommands[i++])); // Extract texture coordinates
float t = *((float*)(&glCommands[i++]));
t = 1.0f - t; // Flip t, because it is (for some reason) stored from top to bottom
int vi = glCommands[i++];
vboTextureCoords.AddData(&s, 4); // Add texture coords to VBO
vboTextureCoords.AddData(&t, 4);
FOR(k, header.num_frames)
{
vboFrameVertices[k].AddData(&vVertices[k][vi], 12); // Add vertex to VBO
vboFrameVertices[k].AddData(&anorms[vNormals[k][vi]], 12); // Add normal to VBO from normal table
}
}
}
//...
}
Now we can see the reading of the rendering data. How exactly does this work? In the header, we have something called header.num_glcmds. It's basically the data for the OpenGL calls required to render the model properly. These GL commands are just a bunch of integers (exactly header.num_glcmds of them) that we need to read and then decode the information from. It works like this - until we find a 0, which means the end of all rendering, we go through these data and read things in the following order:
float s = *((float*)(&glCommands[i++])); // Extract texture coordinates
float t = *((float*)(&glCommands[i++]));
t = 1.0f - t; // Flip t, because it is (for some reason) stored from top to bottom
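To make the structure of the command list clearer, here is a sketch of its layout (purely illustrative):

// The GL command list is an array of ints, processed packet by packet:
//
//   [ n ]                        n > 0: GL_TRIANGLE_STRIP with n vertices
//                                n < 0: GL_TRIANGLE_FAN with -n vertices
//   [ s ][ t ][ vertexIndex ]    repeated |n| times; s and t are floats
//                                stored bit-for-bit inside the int array
//   ... next packet ...
//   [ 0 ]                        end of the command list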
As I said, we have integers, but we need to convert them to floats. Not by typecasting the values, but by telling the compiler that the memory here is 4 bytes wide (integer size) and we want to use it not as an integer, but as a float (also 4 bytes). So we just change the pointer type from an integer pointer to a float pointer. That's exactly what these 2 dirty lines of code do. Then we also flip the t coordinate, because for some reason it is stored the other way around than OpenGL texture coordinates are. Because texture coordinates are the same for every frame (only vertices and normals change), we create only one VBO for texture coordinates, which is used for all frames. However, for vertices and normals we have a different VBO per frame and we just fill them up with the vertices previously extracted from the frame data and the normals from anorms.h (just a note here - for better performance, it would probably be better to make one big VBO and cram all the data in there, but for learning purposes, this is more intuitive and easier to understand).
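A small side note - strictly speaking, this kind of pointer cast breaks C++ strict aliasing rules, even though it works fine on common compilers. If you want to stay on the safe side, memcpy does the same bit reinterpretation and compilers optimize it to a single move (a minimal sketch, not from the tutorial's source):

#include <cstring>

// Reinterpret the bits of an int as a float without aliasing problems
float IntBitsToFloat(int iValue)
{
	float fResult;
	std::memcpy(&fResult, &iValue, sizeof(fResult));
	return fResult;
}

// Usage inside the command loop would then be:
// float s = IntBitsToFloat(glCommands[i++]);
// float t = 1.0f - IntBitsToFloat(glCommands[i++]);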
The next thing we need to do is create the VAO used for rendering and ANIMATING the model. This VAO will be very similar to the VAOs used for static model rendering - we will have the traditional vertex attributes such as positions, normals and texture coordinates. But we will add two extra attributes - the vertex position in the next keyframe and the normal in the next keyframe. The next vertex position is the vertex attribute with location 3 and the next normal attribute has location 4:
void CMD2Model::LoadModel(char* sFilename)
{
//...
// Now all necessary data are extracted, let's create VAO for rendering MD2 model
glGenVertexArrays(1, &uiVAO);
glBindVertexArray(uiVAO);
FOR(i, header.num_frames)
{
vboFrameVertices[i].BindVBO();
vboFrameVertices[i].UploadDataToGPU(GL_STATIC_DRAW);
}
vboFrameVertices[0].BindVBO(); // Vertex and normals data parameters
// Vertex positions
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 2*sizeof(glm::vec3), 0);
glEnableVertexAttribArray(3); // Vertices for next keyframe, now we can set it to same VBO
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 2*sizeof(glm::vec3), 0);
// Normal vectors
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 2*sizeof(glm::vec3), (void*)(sizeof(glm::vec3)));
glEnableVertexAttribArray(4); // Normals for next keyframe, now we can set it to same VBO
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 2*sizeof(glm::vec3), (void*)(sizeof(glm::vec3)));
// Texture coordinates
vboTextureCoords.BindVBO();
vboTextureCoords.UploadDataToGPU(GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(glm::vec2), 0);
//...
}
Texture coordinates are in a separate VBO and they are common for all frames. At first, we bind the data from frame 0 as the default data; we will change them dynamically during animation and rendering.
The last part of loading an MD2 model deals with loading its texture. Because original MD2 models had paths bound to the Quake 2 data files, I usually found models whose header says that the number of skins (textures) is 0. But this ain't true, because these models usually have correct texture coordinates and some texture file packed along. The following lines of code look for a texture with the same name as the model file in the model's directory, because MD2 models found on the internet usually have the texture named after the model:
void CMD2Model::LoadModel(char* sFilename)
{
//...
// I have read, that if you read the data from header.num_skins and header.ofs_skins,
// these data are Quake2 specific paths. So usually you will find models on internet
// with header.num_skins 0 and texture with the same filename as model filename
// Find texture name (modelname.jpg, modelname.png...)
string sPath = sFilename;
int index = sPath.find_last_of("\\/");
string sDirectory = index != -1 ? sPath.substr(0, index+1) : "";
string sPureFilename = index != -1 ? sPath.substr(index+1) : sFilename;
string sTextureExtensions[] = {"jpg", "jpeg", "png", "bmp", "tga"};
index = sPureFilename.find_last_of(".");
if(index != -1)
{
string sStripped = sPureFilename.substr(0, index+1);
FOR(i, 5)
{
string sTry = sDirectory+sStripped+sTextureExtensions[i];
if(tSkin.LoadTexture2D(sTry, true))
{
tSkin.SetFiltering(TEXTURE_FILTER_MAG_BILINEAR, TEXTURE_FILTER_MIN_BILINEAR_MIPMAP);
break;
}
}
}
fclose(fp);
//...
}
And that's all we need for loading a model! Now let's proceed to actual rendering and animating.
To perform animation, we need to store the animation state somehow. And we have a structure for that - animState_t. When we pass this structure to the MD2 rendering function, the model is rendered with up-to-date positions and normals. It contains everything we need to know about the keyframe animation. Using the time that elapsed between two consecutive frames, we can update this structure, so that in the next frame the model looks a little different, thus creating the effect of an animation. Let's have a look at its properties:
typedef struct
{
int startframe; // first frame
int endframe; // last frame
int fps; // frame per second for this animation
float curr_time; // current time
float old_time; // old time
float interpol; // percent of interpolation
int type; // animation type
int curr_frame; // current frame
int next_frame; // next frame
} animState_t;
These properties deserve a little bit of explanation, so here it is:
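- startframe and endframe - the first and the last frame of the currently played animation, taken from the animation list below
- fps - how many keyframes per second the animation goes through, i.e. how fast it plays
- curr_time and old_time - the current animation time and the time of the last keyframe switch; their difference tells us when to move on to the next keyframe
- interpol - the interpolation factor in range <0...1> between the current and the next keyframe, used by the vertex shader
- type - which animation from the animation list is currently playing
- curr_frame and next_frame - the two keyframes we are currently interpolating between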
Here you can see the animation list for MD2 models - the array includes the starting frame, the ending frame and the FPS for each animation:
anim_t CMD2Model::animlist[ 21 ] =
{
// first, last, fps
{ 0, 39, 9 }, // STAND
{ 40, 45, 10 }, // RUN
{ 46, 53, 10 }, // ATTACK
{ 54, 57, 7 }, // PAIN_A
{ 58, 61, 7 }, // PAIN_B
{ 62, 65, 7 }, // PAIN_C
{ 66, 71, 7 }, // JUMP
{ 72, 83, 7 }, // FLIP
{ 84, 94, 7 }, // SALUTE
{ 95, 111, 10 }, // FALLBACK
{ 112, 122, 7 }, // WAVE
{ 123, 134, 6 }, // POINTING
{ 135, 153, 10 }, // CROUCH_STAND
{ 154, 159, 7 }, // CROUCH_WALK
{ 160, 168, 10 }, // CROUCH_ATTACK
{ 169, 172, 7 }, // CROUCH_PAIN
{ 173, 177, 5 }, // CROUCH_DEATH
{ 178, 183, 7 }, // DEATH_FALLBACK
{ 184, 189, 7 }, // DEATH_FALLFORWARD
{ 190, 197, 7 }, // DEATH_FALLBACKSLOW
{ 198, 198, 5 }, // BOOM
};
string sMD2AnimationNames[MAX_ANIMATIONS] =
{
"Stand",
"Run",
"Attack",
"Pain A",
"Pain B",
"Pain C",
"Jump",
"Flip",
"Salute",
"Fallback",
"Wave",
"Pointing",
"Crouch Stand",
"Crouch Walk",
"Crouch Attack",
"Crouch Pain",
"Crouch Death",
"Death Fallback",
"Death Fall Forward",
"Death Fallback Slow",
"Boom"
};
The first column is the starting frame, the second is the ending frame and the third is the FPS of the animation. Now we can proceed to the two most important functions - UpdateAnimation, which takes a pointer to an animState_t structure and the time passed between frames and updates the data, and RenderModel, which renders the model with the provided animState_t.
Here is the code snippet for you to see the whole function first:
void CMD2Model::UpdateAnimation(animState_t* animState, float fTimePassed)
{
animState->curr_time += fTimePassed;
if(animState->curr_time - animState->old_time > (1.0f / float(animState->fps)))
{
animState->old_time = animState->curr_time;
animState->curr_frame = animState->next_frame;
animState->next_frame++;
if(animState->next_frame > animState->endframe)
animState->next_frame = animState->startframe;
}
animState->interpol = float(animState->fps) * (animState->curr_time - animState->old_time);
}
So what do we do here? First, we update the current time by the provided time between this frame and the last frame. If the difference between the current time and the old time is greater than the time between two consecutive keyframes (calculated from the animation fps), we need to update the data so that they refer to the new frames. The old time becomes the current time and the current frame becomes the next frame. Now we need to calculate the new next frame. We simply increment it by 1, and if it passes animState->endframe, which is the last frame, we reset it to animState->startframe.
The last and most important thing is to calculate the interpolation factor, but this is an easy one-liner. The maximal difference animState->curr_time - animState->old_time can be is 1.0 / animState->fps seconds. We just need to stretch this value to the range <0...1>, so we simply multiply it back by animState->fps and we're done. For example, with fps = 10 one keyframe lasts 0.1 seconds; if 0.04 seconds have passed since the last keyframe switch, the interpolation factor is 10 * 0.04 = 0.4.
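One function we haven't shown yet is StartAnimation, which just fills a fresh animState_t from the animlist table. It isn't listed in this article, so here is a minimal sketch of how it could look (the anim_t member names first_frame, last_frame and fps follow David Henry's article):

animState_t CMD2Model::StartAnimation(animType_t type)
{
	animState_t res;
	res.startframe = animlist[type].first_frame;
	res.endframe = animlist[type].last_frame;
	res.curr_frame = animlist[type].first_frame;
	res.next_frame = animlist[type].first_frame+1;
	res.fps = animlist[type].fps;
	res.type = type;
	res.curr_time = 0.0f;
	res.old_time = 0.0f;
	res.interpol = 0.0f;
	return res;
}

A typical frame of the application then just keeps the state updated and renders (the time measuring helper here is hypothetical - use whatever your framework provides):

// Once, after loading the model:
animState_t animState = model.StartAnimation(RUN); // assuming RUN is one of the animType_t values

// Every frame:
float fDelta = GetFrameDeltaSeconds(); // hypothetical helper - seconds since last frame
model.UpdateAnimation(&animState, fDelta);
model.RenderModel(&animState);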
In this section, you will see how to render the model with a given animation state. This concerns shaders as well, so we will also go through the vertex shader, which does all the calculations between the two frames. First, let's have a look at the rendering function itself:
void CMD2Model::RenderModel(animState_t* animState)
{
glBindVertexArray(uiVAO);
int iTotalOffset = 0;
tSkin.BindTexture();
if(animState == NULL)
{
glEnableVertexAttribArray(0);
vboFrameVertices[0].BindVBO();
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 2*sizeof(glm::vec3), 0);
spMD2Animation.SetUniform("fInterpolation", -1.0f); // Set interpolation to a negative number, so that the vertex shader knows not to interpolate
FOR(i, ESZ(renderModes)) // Just render using previously extracted render modes
{
glDrawArrays(renderModes[i], iTotalOffset, numRenderVertices[i]);
iTotalOffset += numRenderVertices[i];
}
}
else
{
// Change vertices pointers to current frame
glEnableVertexAttribArray(0);
vboFrameVertices[animState->curr_frame].BindVBO();
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 2*sizeof(glm::vec3), 0);
glEnableVertexAttribArray(3);
vboFrameVertices[animState->next_frame].BindVBO();
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 2*sizeof(glm::vec3), 0);
// Change normal pointers to current frame
glEnableVertexAttribArray(2);
vboFrameVertices[animState->curr_frame].BindVBO();
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 2*sizeof(glm::vec3), (void*)(sizeof(glm::vec3)));
glEnableVertexAttribArray(4);
vboFrameVertices[animState->next_frame].BindVBO();
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 2*sizeof(glm::vec3), (void*)(sizeof(glm::vec3)));
spMD2Animation.SetUniform("fInterpolation", animState->interpol);
FOR(i, ESZ(renderModes))
{
glDrawArrays(renderModes[i], iTotalOffset, numRenderVertices[i]);
iTotalOffset += numRenderVertices[i];
}
}
}
The function is branched into two sections - the first one is rather simple. If the user doesn't provide an animState_t structure, but passes a NULL pointer, we just render the model statically. We set the vertex position attribute to the vertices of the first frame. We also set the interpolation factor to -1.0 as a signal for the vertex shader. The vertex shader then sees that we don't want to do any inter-frame calculations, so it just renders stuff as we usually do.
However, the second part is more interesting. We need to set the pointers to the vertex and normal data of both frames - the current frame and the next frame. Everything is pretty easy though, because all these data are stored in our animState_t structure, so we just need to carefully set the vertex attribute pointers. The most important thing is to set the interpolation factor, so that the vertex shader can calculate the inter-frames.
In both cases, the rendering itself is done by iterating over all render modes we have read from the file. So we just do a series of GL_TRIANGLE_STRIP and GL_TRIANGLE_FAN renders, while providing the correct vertex offsets. The last thing we should do is have a look at the vertex shader, which calculates the inter-frames:
#version 330
uniform struct Matrices
{
mat4 projMatrix;
mat4 modelMatrix;
mat4 viewMatrix;
mat4 normalMatrix;
} matrices;
layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;
layout (location = 3) in vec3 inNextPosition;
layout (location = 4) in vec3 inNextNormal;
smooth out vec3 vNormal;
smooth out vec2 vTexCoord;
smooth out vec3 vWorldPos;
smooth out vec4 vEyeSpacePos;
uniform float fInterpolation;
void main()
{
mat4 mMV = matrices.viewMatrix*matrices.modelMatrix;
mat4 mMVP = matrices.projMatrix*matrices.viewMatrix*matrices.modelMatrix;
vTexCoord = inCoord;
vec3 vInterpolatedPosition = inPosition;
if(fInterpolation >= 0.0f)vInterpolatedPosition += (inNextPosition - inPosition)*fInterpolation;
vEyeSpacePos = mMV*vec4(vInterpolatedPosition, 1.0);
gl_Position = mMVP*vec4(vInterpolatedPosition, 1.0);
vec3 vInterpolatedNormal = inNormal;
if(fInterpolation >= 0.0f)vInterpolatedNormal += (inNextNormal - inNormal)*fInterpolation;
vNormal = (matrices.normalMatrix*vec4(vInterpolatedNormal, 1.0)).xyz;
vWorldPos = (matrices.modelMatrix*vec4(vInterpolatedPosition, 1.0)).xyz;
}
Notice the two new variables here - vInterpolatedPosition and vInterpolatedNormal. Both of these are calculated only if the interpolation factor is in the range <0...1>, as it should be (of course we could check whether it's both greater than or equal to 0 and less than or equal to 1, but one check is enough here, because we provide negative values only when we render models without animation; otherwise the range is valid, so we save one comparison). So if the interpolation factor fInterpolation is valid, we simply calculate the difference between this frame and the next frame, add this difference multiplied by the interpolation factor to our current frame data, and we get the inter-frame vertices and normals. The fragment shader used is the same as in other renderings, because this is the only place where something different happens.
This is the fruit of today's effort:
Once again, this tutorial packed pretty much stuff at once, so I will try to make some relaxing, simple tutorial next time. And I'm almost 100% sure now that it's gonna be about Bump Mapping, because it's a really neat technique that can improve the visual appearance of objects significantly and doesn't cost us that much. So stay tuned for next time and I hope you enjoyed this tutorial, because:
I also declared functions like PauseAnimation and StopAnimation, but these two are left for the reader to implement as an exercise... or is it? Yes, it is an exercise, but an unintentional one, because I forgot to implement these two and now I'm too lazy to update this tutorial. But at least you have an opportunity to extend it yourself and test your skills. See you next time!
Download 5.89 MB