I have a very basic understanding of audio and was hoping I could get some help to resolve a problem with the audio design in my engine. Let me give you a run-down of how I've currently got it fitting together.
I have an AudioManager that handles setup and teardown of the XAudio2 interface, the mastering voice, and various submix voices that I use to group sounds. I have a ResourceManager that can be used to load audio files into the required buffers, ready for XAudio2 consumption. During my game, I call a method on my AudioManager to play a sound - this obtains the necessary audio voice, links it to the right buffer, and begins playback.
The first problem I am having is that when there is a lot going on, and a lot of sounds are played at once, I get performance issues - framerates drop, and there is audio stuttering. I currently don't limit the number of concurrent sounds that can be played - can you give me any other advice on a better design, or solutions to limit impact on performance?
The second problem I am having is that if multiples of the same sound are played at the same time, it sounds like the volume has been turned up on that particular sound. I assumed this is to do with wave amplification and tried implementing a Volume Limiter effect on my submix voices. This works to a degree, and I can eliminate any distortion, but the increased volume effect is still noticeable. Can anyone suggest decent values to use for the volume limiter effects, or any other solutions to the problem?
It sounds like you know the main answer to the first issue - you need to limit the number of voices. The AudioManager will need some sort of prioritization scheme that drops low priority sounds in favor of the high priority ones.
And that ties into the second question. Are you playing the two instances of the same sound at exactly the same time? Why? They won't be distinguishable except for the increased volume. Or are they slightly offset - like two near-simultaneous gunshots - in which case, what's wrong with two events sounding louder than one event? That's realistic. It seems to me that you either actually do want the increased volume, or you want the AudioManager to filter out the duplicates to avoid overloading the voice hardware with needless work.
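One way to sketch that duplicate filtering: track when each sound last started, and reject a new request for the same sound if it arrives within a short retrigger window. This is only an illustrative design (the class and parameter names here, like `SoundGate` and `minRetriggerMs`, are made up for the example and are not part of XAudio2):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical sketch: drop requests to play the same sound again within a
// short window, so exact duplicates don't stack into one louder sound.
class SoundGate {
public:
    explicit SoundGate(double minRetriggerMs) : window_(minRetriggerMs) {}

    // Returns true if the sound may start now; false if it is a duplicate
    // that arrived inside the retrigger window.
    bool tryPlay(const std::string& soundId, double nowMs) {
        auto it = lastStart_.find(soundId);
        if (it != lastStart_.end() && nowMs - it->second < window_) {
            return false;               // too soon: treat as a duplicate
        }
        lastStart_[soundId] = nowMs;
        return true;
    }

private:
    double window_;                                     // retrigger window, ms
    std::unordered_map<std::string, double> lastStart_; // sound id -> last start
};
```

The AudioManager's play method would consult the gate before allocating a voice; anything the gate rejects simply never touches the audio hardware.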
How many sounds do you have playing? In my experience games will use somewhere between 32 and 256 audio channels. To solve your problem you need to work out your audio budget (the amount of processing / memory you will be giving to the audio system) and then prioritise the sounds being played.
The methods to prioritise can include:
a) Volume - the lowest being the least important.
b) Priority - a value you set yourself based on the importance of the sound and the likelihood that the user will notice the sound dropping out. Background music, for example, should be set to never be dropped. A random cricket sound for ambience should be near the top of the list to drop.
c) Grouping - If you have gunshot sounds for example, once they have hit a hard limit (say 15) you would stop adding new gunshot sounds as they wouldn't add to the overall mix.
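Putting (a)–(c) together, the budget itself can be a small container with priority eviction: admit sounds while there are free voices, and once the budget is full, steal the lowest-priority voice only if the newcomer outranks it. This is a hedged, library-agnostic sketch (the names `VoiceBudget` and `ActiveSound` are illustrative, not XAudio2 types):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative sketch of a fixed voice budget with priority eviction.
struct ActiveSound {
    int id;
    int priority;   // higher = more important (music could be INT_MAX)
};

class VoiceBudget {
public:
    explicit VoiceBudget(std::size_t maxVoices) : maxVoices_(maxVoices) {}

    // Try to admit a sound. If the budget is full, evict the lowest-priority
    // active sound when the newcomer outranks it; otherwise reject it.
    bool request(ActiveSound s) {
        if (active_.size() < maxVoices_) { active_.push_back(s); return true; }
        auto weakest = std::min_element(active_.begin(), active_.end(),
            [](const ActiveSound& a, const ActiveSound& b) {
                return a.priority < b.priority;
            });
        if (weakest->priority >= s.priority) return false; // new sound dropped
        *weakest = s;                                      // steal the voice
        return true;
    }

    std::size_t count() const { return active_.size(); }

private:
    std::size_t maxVoices_;
    std::vector<ActiveSound> active_;
};
```

The grouping rule in (c) fits the same shape: keep a per-group counter and reject a new gunshot once its group is at the hard limit, before the budget is even consulted.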
For your second problem there are two things you can do. One, you need to balance your game audio. This is a fine and somewhat black art that people can spend a lot of time doing in games. The first step is to make sure all your sounds are normalised (http://en.wikipedia.org/wiki/Audio_normalization). The next is to play through the game adjusting volumes on the fly to get the mix right. Building in some simple debug audio functionality (a remote application that can connect and adjust the mix of the application is ideal) can make this a lot easier.
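As a starting point for that first step, peak normalisation is just a uniform gain that puts the loudest sample at a chosen level. A minimal sketch (note that production pipelines often normalise to perceived loudness, e.g. RMS or LUFS, rather than raw peak):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Hedged sketch of offline peak normalisation: scale a buffer so its loudest
// sample sits at `targetPeak` (1.0 = full scale; use a bit under to leave
// headroom for mixing).
void normalizePeak(std::vector<float>& samples, float targetPeak = 1.0f) {
    float peak = 0.0f;
    for (float s : samples) peak = std::max(peak, std::fabs(s));
    if (peak <= 0.0f) return;            // silence: nothing to scale
    const float gain = targetPeak / peak;
    for (float& s : samples) s *= gain;
}
```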
Why are you playing multiple sounds of the same type? If it is necessary, you can try an audio compressor (link in comment). I don't know whether XAudio2 has one built in, but if not they are fairly simple to implement. A compressor reduces the dynamic range of the mix - i.e. if there are a lot of sounds playing and the output is getting very loud, the compressor will bring the entire mix down in an attempt to fit it within the available headroom.
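To make that concrete, here is a deliberately minimal block-based compressor sketch: when a block's peak exceeds the threshold, the part above the threshold is reduced by the ratio and the whole block is scaled down uniformly. A real DSP would smooth the gain with attack/release envelopes to avoid audible pumping; this example skips that for clarity.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Minimal sketch of a block-based compressor (no attack/release smoothing).
// Above `threshold`, the excess is divided by `ratio`; the resulting gain is
// applied uniformly to the block, pulling the whole mix down together.
void compressBlock(std::vector<float>& mix, float threshold, float ratio) {
    float peak = 0.0f;
    for (float s : mix) peak = std::max(peak, std::fabs(s));
    if (peak <= threshold) return;       // under threshold: leave untouched
    const float compressedPeak = threshold + (peak - threshold) / ratio;
    const float gain = compressedPeak / peak;  // uniform gain for the block
    for (float& s : mix) s *= gain;
}
```

For example, a block peaking at 2.0 with a threshold of 1.0 and a 4:1 ratio is scaled so its new peak is 1.25, which is the "bring the entire mix down" behaviour described above.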