Should I use different object types for different kind of 3D meshes?

Pablo Ariel


    I'm building the animation system of my game engine (the skeletal and skinned animation stuff), and I've reached a point where I've added so much functionality to the frame and node structures that it seems excessive for some applications and adds too much complexity when working with simpler objects. Objects like boxes or simple meshes that are meant to be static end up with complex properties, such as the ability to have a node hierarchy and transformation frames, animation type checks, and a lot of unnecessary creation parameters that make it really hard to write functions that use them.

    I was thinking of making, e.g., SimpleMesh and HierarchyMesh objects, which would also require the renderer to handle different types of objects in the same scene. I also thought about making a MeshNode class and then a Mesh object that contains them, but then I have a conflict about where to store some data: some meshes will have per-node shaders while others will have a single shader for the whole mesh, so storing the shader references in the node would be redundant for complex meshes that share the same shader across every node.
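    One way to avoid that redundancy without two mesh classes is an optional per-node override that falls back to a mesh-level default. This is just a sketch; the names (MeshNode, Mesh, GetShaderFor) are illustrative, not from my engine:

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <vector>

    // Sketch: per-node shader override with a mesh-level default.
    struct MeshNode {
        std::uint32_t nNodeID   = 0;
        std::int32_t  nShaderID = -1;        // -1 means "inherit the mesh's shader"
        std::vector<std::uint32_t> children; // indices into Mesh::nodes
    };

    struct Mesh {
        std::int32_t nDefaultShaderID = 0;   // shared by every node unless overridden
        std::vector<MeshNode> nodes;
    };

    // Resolve the shader for a node: the override if present, else the mesh default.
    std::int32_t GetShaderFor(const Mesh& mesh, std::uint32_t nodeIndex) {
        const MeshNode& node = mesh.nodes[nodeIndex];
        return node.nShaderID >= 0 ? node.nShaderID : mesh.nDefaultShaderID;
    }
    ```

    With this, a simple mesh stores one shader ID total, and only nodes that actually differ pay for an extra ID.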

    Another option I considered was writing some helper functions for the simpler cases, which would set default parameters and ask only for the most basic ones, for example

    CreateSingleFrameMesh( pTextureName, pBuffers, nBuffers, Material, &pCreatedMesh ); 

    but I'm not sure whether this would be too time consuming and require too much memory for storing what will be default values in most scene objects (I believe the scene won't be built from skinned hierarchy meshes but mostly from static ones, and in some cases the animation is purely a shader thing, which makes most of the animation properties on the C++ side useless).
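    The helper idea itself costs almost nothing at runtime: it can be a thin wrapper that fills in defaults and forwards to the full-parameter creation path. A minimal sketch, where CreateMeshEx and the parameter struct are assumptions purely for illustration:

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <string>

    // Assumed full parameter set; only a few fields shown.
    struct MeshCreateParams {
        std::string   textureName;
        std::uint32_t nBuffers = 0;
        std::uint32_t nFrames  = 1;     // single-frame default
        std::uint32_t nNodes   = 1;     // no hierarchy by default
        bool          bSkinned = false; // static mesh by default
    };

    struct Mesh { MeshCreateParams params; };

    // Assumed full creation routine (stubbed out here).
    bool CreateMeshEx(const MeshCreateParams& p, Mesh** ppOut) {
        *ppOut = new Mesh{p};
        return true;
    }

    // The helper asks only for what a simple static mesh needs;
    // everything else comes from the defaults above.
    bool CreateSingleFrameMesh(const std::string& textureName,
                               std::uint32_t nBuffers, Mesh** ppOut) {
        MeshCreateParams p;
        p.textureName = textureName;
        p.nBuffers    = nBuffers;
        return CreateMeshEx(p, ppOut);
    }
    ```

    The defaults live in the parameter struct, so the "extra memory" is just the one struct per creation call, not per frame.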

    Also, files are required to store the minimum data needed to rebuild the mesh, so for smaller meshes or meshes without animation I would end up storing a lot of data that I don't really need, even if I index node and frame data when saving and then store the hierarchy with indices into the actual data.

    I don't know if my speculations are sound, as I don't have much experience with 3D animation yet. I want to make a good decision, since any option I choose will require a lot of work to get it rendering, and I don't want to find out in the end that I have to rewrite everything, as a lot of other objects will be working with this data.

    Sorry if this is too subjective a matter, but I need some insight. I'm full of questions about this.

    Thanks in advance.

    EDIT: I'm gonna add some code about it

    Yes, I'm probably mixing things up, but it's complicated because there is a lot of data:

    struct GEOMETRYBUFFER_DATA {
        INT32           nIndexBufferID;
        INT32           nVertexBufferID;
        UINT32          nIndexOffset;
        UINT32          nVertexOffset;
        UINT32          nIndexCount;
        UINT32          nVertexCount;
    };

    struct TEXTURE_DATA {
        UINT32          nTextureID;
        INT32           nImageID;       // references the hardware image for this node (a video device texture)
        IMAGE_USAGE     nImageUsage;    // the way each image is applied in the node (color, bumpmap, etc.)
        Vector2         vImageOffset;   // the offset for each image applied in the node (in pixels? normalized coords?)
        SPRITE_DATA*    pSpriteData;    // set to NULL, or point to sprite-like animation data
    };

    // This structure holds the properties of an animation frame.
    // Note that all of these may vary between frames.
    struct FRAME3D_DATA {
        UINT32                  nFrameID;       // frame identifier
        float                   fTime;          // this frame starts at fTime (or lasts fTime; I'll decide later)
        Vector3                 vScale;         // interpolated, then applied per node before the global transform
        Quaternion              qOrientation;   // interpolated, then applied per node before the global transform
        Vector3                 vPosition;      // interpolated, then applied per node before the global transform
        MATERIAL                Material;       // the base color of the object (set to 100% white to just use the texture color)
        float                   fReflectionAlpha; // alpha value of the reflection (transitioning this can achieve some effects)
        bool                    bHidden;        // set to true to skip rendering this frame
        bool                    bAlpha;         // set to true to enable use of fAlpha (otherwise it's discarded)
        float                   fAlpha;         // the global alpha value for this frame
        LIGHT_TYPE              nApplyLights;   // light types processed for this node; GLIGHTTYPE_NONE renders raw vertex/texture colors
        UINT8                   nBufferCount;   // the number of buffers used by this node
        GEOMETRYBUFFER_DATA**   ppBufferData;   // the geometry buffers
        UINT32                  nLightCount;    // the number of lights
        LIGHT_DATA**            ppLightData;    // pointers to the light data
        UINT32                  nTextureCount;
        TEXTURE_DATA*           pTextureData[GMAX_TEXREF];
    };

    But then any change to the frame data requires that all the other values get duplicated. So now I'm thinking about making smaller structures, each with its own fTime variable, so I can have attribute keyframes, light keyframes, and buffer keyframes, and then make the node structure look like this:

    struct NODE3D_DATA {
        UINT                    nNodeID;
        UINT                    nTransformFrames;
        FRAMETRANSFORM_DATA**   ppTransformFrames;
        UINT                    nAttributeFrames;
        FRAMEATTRIBUTE_DATA**   ppAttributeFrames;
        UINT                    nLightFrames;
        FRAMELIGHT_DATA**       ppLightFrames;
        UINT32                  nGeometryFrames;    // the amount of buffer keyframes
        GEOMETRYFRAME_DATA**    ppGeometryFrames;   // pointers to the buffer keyframe data
    };

    then probably:

    struct GEOMETRYFRAME_DATA {
        float                   fTime;          // or uint or whatever
        UINT                    nBufferCount;
        INDEXEDGEOMETRY_DATA**  ppGeometryBuffers;
    };

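    The payoff of per-channel keyframes is that each channel is sampled independently, so an unchanged channel stores one key instead of duplicating data every frame. A minimal sketch of a transform-channel key and a linear sampler (names are illustrative, and it assumes keys are sorted by time with at least one entry):

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    struct Vector3 { float x = 0, y = 0, z = 0; };

    // One key of the transform channel; vScale/qOrientation would sit alongside.
    struct FRAMETRANSFORM_DATA {
        float   fTime = 0.0f;
        Vector3 vPosition;
    };

    // Linearly interpolate position between the two keys bracketing fTime.
    // Clamps outside the key range; a single key acts as a constant channel.
    Vector3 SamplePosition(const std::vector<FRAMETRANSFORM_DATA>& keys, float fTime) {
        if (keys.size() == 1 || fTime <= keys.front().fTime) return keys.front().vPosition;
        if (fTime >= keys.back().fTime) return keys.back().vPosition;
        for (std::size_t i = 1; i < keys.size(); ++i) {
            if (fTime < keys[i].fTime) {
                float t = (fTime - keys[i - 1].fTime) / (keys[i].fTime - keys[i - 1].fTime);
                const Vector3& a = keys[i - 1].vPosition;
                const Vector3& b = keys[i].vPosition;
                return { a.x + t * (b.x - a.x),
                         a.y + t * (b.y - a.y),
                         a.z + t * (b.z - a.z) };
            }
        }
        return keys.back().vPosition;
    }
    ```

    A static node then carries one transform key, one attribute key, and so on, while an animated one pays only for the channels that actually change.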
    Other approaches I have tried or considered seem even more redundant, and where they aren't, the performance loss is intolerable for my needs. I also forgot to mention that nodes can have a hierarchy, but that's not a problem if the frame data is reduced. The structure instances don't get duplicated unless a value changes; when constructing a node object, it references a pool of structure instances to prevent allocating two equal frames. The lights are also something I'm not sure how to handle, as I need them attached to the node in most cases, even if I allow setting global lights on the scene.
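    The "pool of structure instances" idea above can be sketched as a map from frame values to a single shared instance, so two equal frames never allocate twice. The key scheme here is an assumption (a real pool would hash the full frame contents), and the struct is reduced to two fields for brevity:

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <map>
    #include <tuple>

    struct FRAMEATTRIBUTE_DATA { float fAlpha; bool bHidden; };

    class FramePool {
    public:
        // Returns a shared instance; identical values never allocate twice.
        // std::map never invalidates pointers to mapped values, so the
        // returned pointer stays valid while the pool lives.
        const FRAMEATTRIBUTE_DATA* Acquire(float fAlpha, bool bHidden) {
            auto key = std::make_tuple(fAlpha, bHidden);
            auto it = pool_.find(key);
            if (it == pool_.end())
                it = pool_.emplace(key, FRAMEATTRIBUTE_DATA{fAlpha, bHidden}).first;
            return &it->second;
        }
        std::size_t Size() const { return pool_.size(); }
    private:
        std::map<std::tuple<float, bool>, FRAMEATTRIBUTE_DATA> pool_;
    };
    ```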

    @Nicol: Yes, that's what I'm trying to figure out. You can see the code doesn't rely on any particular hardware or contain any game attributes; it's just keyframed animation data plus some IDs for it and for referencing API objects. I use objects such as CFrame to hold pointers to frame data and to store runtime values. The thing is, I don't want to create layers unless it's really worth it, to prevent excessive distinctions for small values, such as making a frame object that stores only the diffuse color variation, or making it store any value that can change but in such an unstructured way that I wouldn't know how to relate the data to the other frames, or would need a complex system for it. That's why at this level I prefer well-defined data that I can store and work with quickly at runtime.

    It's this complicated because it must support a lot of rendering and animation techniques, must support being accessed by game entities and other objects, and must allow (somewhat) easy optimization for them if needed.

    Anything that helps me make a decision is a good answer to my question. Thanks for the reply.

  • I'd say that your main problem is that you're giving meshes non-mesh properties, thus leading to a fat interface.

    A mesh is a set of per-vertex data plus the information about what those vertex attributes mean. That's all it should be. It may carry some extra information, like where its "center" is and perhaps a bounding box, but that's about it. A mesh should have no understanding of animations. It should not have a world-space position. Meshes should not have shaders. And so forth.

    All of that information should live elsewhere.

    As to the specific division of information, that depends on your needs. I would design it based on layered functionality. Figure out what things need to talk to each other, then build objects that facilitate this communication. For example, a mesh holds vertex data. But that alone isn't enough to render it; you need a material, which contains the shader and associated parameters necessary for rendering. So a renderable object would contain a reference to a material and a mesh.
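    The layering described above might be sketched like this; all of the type names are illustrative, not a prescribed API:

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <vector>

    struct Mesh {                        // vertex data only: no shaders, no transforms
        std::vector<float> vertices;     // interleaved vertex attributes
        std::uint32_t      stride = 0;   // bytes per vertex; describes the layout
    };

    struct Material {                    // everything needed to shade a mesh
        std::uint32_t shaderID  = 0;
        std::uint32_t textureID = 0;
    };

    struct Transform {                   // world-space placement lives here, not in Mesh
        float position[3] = {0, 0, 0};
    };

    struct Renderable {                  // ties the layers together for drawing
        const Mesh*     mesh     = nullptr;
        const Material* material = nullptr;
        Transform       transform;
    };
    ```

    The point of the split is reuse: one Mesh can be shared by many Renderables with different materials and transforms, and animation state becomes yet another layer that references a Renderable rather than living inside the mesh.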

c++ animation graphics-programming 3d-meshes