Audio was something I hadn't had much experience with. The initial implementation in Molecular was all OpenAL: we used it for the spot effects and for the streaming. The final shipping version used AudioQueues for streaming but kept OpenAL for the spot effects. So why did we make the change, and what did we actually do?
We moved to AudioQueues because the audio streams were too big: for two audio tracks, about 25 minutes of music came to roughly 30 meg. The fix was to compress the audio stream, but OpenAL didn't support using the hardware for decoding audio, which meant rewriting the streaming system on top of AudioQueues. I basically took the code from the sample spread throughout the article and wrapped it up in the same stream interface my original OpenAL streaming system used, so the game code didn't require any changes.
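Wrapping both backends behind one stream interface can be sketched roughly like this. This is a minimal sketch, not the shipped code: the names (MusicStream, musicStreamOpenAQ, update/stop) are hypothetical, and the real OpenAL and AudioQueue backends would each fill in the function pointers with their own implementations.

```c
/* A backend-agnostic music stream, sketched as a struct of function
   pointers. The game code only ever sees MusicStream, so swapping the
   OpenAL backend for an AudioQueue one needs no game-side changes. */
typedef struct MusicStream MusicStream;
struct MusicStream {
    void (*update)(MusicStream *self);  /* called once per frame */
    void (*stop)(MusicStream *self);
    const char *backendName;
};

/* Stand-in AudioQueue backend: the real update() would refill and
   re-enqueue AudioQueue buffers; the real stop() would call
   AudioQueueStop(). */
static void aqUpdate(MusicStream *s) { (void)s; /* refill AQ buffers */ }
static void aqStop(MusicStream *s)   { (void)s; /* AudioQueueStop()  */ }

static MusicStream *musicStreamOpenAQ(void)
{
    static MusicStream s;
    s.update = aqUpdate;
    s.stop = aqStop;
    s.backendName = "AudioQueue";
    return &s;
}
```

The game loop just calls `stream->update(stream)` each frame and never needs to know which backend is underneath.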
The OpenAL system kept N buffers of X bytes preloaded with audio data and queued for playback. Once a certain number had been played, we would refill a few more per frame until the queue was full again, then wait until more had been consumed and start the process over.
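That refill scheme can be sketched like this. The buffer counts are illustrative, not the shipped values, and the decode/requeue step is stubbed out where the real code would call alSourceUnqueueBuffers, decode the next chunk, alBufferData, and alSourceQueueBuffers:

```c
#define NUM_BUFFERS       8  /* N: total stream buffers (made-up value) */
#define REFILL_THRESHOLD  4  /* start refilling once this many played   */
#define REFILLS_PER_FRAME 2  /* spread the decode cost across frames    */

typedef struct {
    int processed;  /* buffers the mixer has finished with */
    int refilling;  /* currently in the catch-up phase     */
} StreamState;

/* Called once per game frame. Returns how many buffers were refilled. */
static int streamUpdate(StreamState *s)
{
    int refilled = 0;
    if (!s->refilling && s->processed >= REFILL_THRESHOLD)
        s->refilling = 1;   /* enough buffers consumed: start catching up */
    while (s->refilling && s->processed > 0 && refilled < REFILLS_PER_FRAME) {
        /* real code: unqueue one finished buffer, decode the next
           chunk of the file into it, queue it back on the source */
        s->processed--;
        refilled++;
    }
    if (s->processed == 0)
        s->refilling = 0;   /* queue is full again: wait */
    return refilled;
}
```

Capping the refills per frame is what keeps the decode cost from spiking a single frame.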
I compressed the audio and initialised the hardware decoder, and the package size went from 34 meg to ~2.7 meg, making the package suitable for download over wireless. Performance went up too, as we no longer had to spend as much time streaming the data in and decoding it in software!! Winner! :)
Tips:
  • When looping your audio streams, make sure you seek back to the start of the file. Do the same when you restart a music stream. It's an easy one to miss.
  • Software compression/decompression isn't really usable if you want a 60 fps experience.
  • Keep an eye on the number of buffers you use and how big they get.
  • ALWAYS check your error states when calling OpenAL functions.
  • BEWARE OpenAL and the simulator. One day my audio just stopped (probably after an update to OS X), but nonetheless the AudioQueues worked just fine.
  • In OpenAL you can only play a single instance of a sound at once. You can share the same sample data; you just need to create a new instance.
  • Use the alBufferDataStatic extension in OpenAL to improve performance by removing memcpys.
  • Beware of the case of filenames. The simulator doesn't care but the hardware does. Modifying the filename in the resource view will fix the problem.
  • The utility "afconvert" (supplied with the SDK, I believe) will compress the audio files.
  • Check your original audio files; most likely you don't need 32-bit, 44kHz, stereo (or more) samples. The "--profile" argument, when used with afconvert, will show you a comparison with the original file.
  • The utility "afinfo" (supplied with the SDK, I believe) will display stats about your samples.
  • Audio seems to default to software mixing. The following code should help set audio playback to use hardware rather than software (I found this on a forum, I believe, so thanks if you provided it). Note, though, that there are a number of factors which may cause software mixing to be used anyway.
//set the session to MediaPlayback to take the audio hardware
UInt32 sessionCategory = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty (kAudioSessionProperty_AudioCategory,
                         sizeof (sessionCategory),
                         &sessionCategory);

//now set the session to SoloAmbient for game sounds & background music
sessionCategory = kAudioSessionCategory_SoloAmbientSound;
AudioSessionSetProperty (kAudioSessionProperty_AudioCategory,
                         sizeof (sessionCategory),
                         &sessionCategory);

  • I used the following command line to compress my music streams.
afconvert -v -f caff -d aac -c 1 -s 0 --profile "IngameMusic copy.wav" IngameMusic.caf

  • You can use the following code to check if there is other audio already playing whose state you want to respect. This code too is no doubt copied from another forum or blog; if it was you, thanks.

UInt32 propertySize, audioIsAlreadyPlaying;
propertySize = sizeof(UInt32);
AudioSessionGetProperty (kAudioSessionProperty_OtherAudioIsPlaying,
                         &propertySize,
                         &audioIsAlreadyPlaying);
if (audioIsAlreadyPlaying != 0)
{
     // there is audio already playing
}
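On the "ALWAYS check your error states" tip: one cheap way to do it is a wrapper macro around every al* call. This is a sketch, not the game's actual helper; the ALenum type and alGetError are stubbed here so the snippet stands alone, but in a real build they come from the OpenAL headers.

```c
#include <stdio.h>

/* Stubs standing in for <OpenAL/al.h>, so the sketch is self-contained.
   In the game these would be the real OpenAL definitions. */
typedef int ALenum;
#define AL_NO_ERROR 0
static ALenum g_stubError = AL_NO_ERROR;
static ALenum alGetError(void)
{
    ALenum e = g_stubError;
    g_stubError = AL_NO_ERROR;  /* reading the error clears it */
    return e;
}

/* Wrap every al* call; on failure, report the error code next to the
   call site. Checking after each call keeps the report close to the
   offending function instead of surfacing frames later. */
#define AL_CHECK(call)                                          \
    do {                                                        \
        call;                                                   \
        ALenum alErr_ = alGetError();                           \
        if (alErr_ != AL_NO_ERROR)                              \
            fprintf(stderr, "OpenAL error 0x%x after %s\n",     \
                    (unsigned)alErr_, #call);                   \
    } while (0)
```

In a debug build you'd wrap calls like `AL_CHECK(alSourcePlay(source));`; in release the macro can compile down to just the call.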