Sunday, December 22, 2013

Objective-C - Audio Queue

Last time I meant to write up my Audio Queue notes, but too much time had passed and I got lazy, so I just deleted the draft. And of course, today I ended up writing the whole thing again. QQ

Today our product needed audio streaming. There are a lot of audio frameworks on Apple devices, and the simplest is probably AVAudioSession. I originally assumed it would, like the capture classes (e.g. AVCaptureVideoDataOutput), have a delegate that hands you sample buffer data, but unluckily AVAudioSession has nothing of the sort. After some googling, it turned out the internet actually recommends the AudioQueue approach. Fine! In that case, time to dig out my old code and give it a proper review!

1. First, add AudioToolbox.framework under Build Phases.

2. Example code for the recorder (the AudioManager .h file)

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioFile.h>



#define NUM_BUFFERS 3

@interface L17AudioManager : NSObject{
    AudioStreamBasicDescription dataFormat;   // basic audio settings (e.g. sample rate)
    AudioQueueBufferRef buffers[NUM_BUFFERS]; // the buffers that hold the audio data
}

// The audio queue itself. Declare it only as a property (auto-synthesized
// as the _queue ivar); redeclaring a separate `queue` ivar would create a
// second, unrelated variable, and the callback would read an uninitialized
// queue through the property.
@property AudioQueueRef queue;

//Audio file
@property SInt64 currentPacket;
@property AudioFileID audioFile;

+ (L17AudioManager *)defaultAudioManager;
- (void)start;
- (void)stop;

@end

The code above is the recorder's .h file. It mainly declares three things:

  1. queue (the audio queue that holds the buffers)
  2. buffers (where the audio data itself is stored)
  3. dataFormat (the basic audio settings, e.g. sample rate)

3. Example code (*.m): the following code consists of four parts:

  1. AudioInputCallback
  2. init dataFormat
  3. Start Record
  4. Stop Record

a. AudioInputCallback

static void AudioInputCallback(void *inUserData,
                               AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer,
                               const AudioTimeStamp *inStartTime,
                               UInt32 inNumberPacketDescriptions,
                               const AudioStreamPacketDescription *inPacketDescs){
    
    L17AudioManager *audioManager = (__bridge L17AudioManager *)inUserData;
    
    // For a CBR format such as linear PCM the queue reports 0 packet
    // descriptions, so derive the packet count from the byte count
    // (mBytesPerPacket is 2 for the 16-bit mono PCM set up in -init)
    if(inNumberPacketDescriptions == 0){
        inNumberPacketDescriptions = inBuffer->mAudioDataByteSize / 2;
    }
    
    //Write the audio data to the file
    OSStatus status = AudioFileWritePackets(audioManager.audioFile, false, inBuffer->mAudioDataByteSize, inPacketDescs, audioManager.currentPacket, &inNumberPacketDescriptions, inBuffer->mAudioData);
    
    if(status == noErr){
        audioManager.currentPacket += inNumberPacketDescriptions;
        // Enqueue the buffer to the AudioQueue again so it can be reused
        AudioQueueEnqueueBuffer(audioManager.queue, inBuffer, 0, NULL);
    }
    NSLog(@"Audio callback");
    
}
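
One detail worth flagging: when recording a constant-bit-rate format like linear PCM, the queue may invoke this callback with inNumberPacketDescriptions set to 0 and inPacketDescs set to NULL. The guard at the top, which derives the packet count from the byte count, follows the recording callback pattern in Apple's Audio Queue Services Programming Guide; without it, AudioFileWritePackets would be asked to write zero packets.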

b. Init dataFormat

- (id)init{
    
    if(self = [super init]){
        dataFormat.mSampleRate = 44100.0f;            // 44.1 kHz
        dataFormat.mFormatID = kAudioFormatLinearPCM; // uncompressed PCM
        dataFormat.mFramesPerPacket = 1;              // linear PCM uses 1 frame per packet
        dataFormat.mChannelsPerFrame = 1;             // mono
        dataFormat.mBytesPerFrame = 2;                // 1 channel x 16 bits
        dataFormat.mBytesPerPacket = 2;               // 1 frame per packet
        dataFormat.mBitsPerChannel = 16;
        dataFormat.mReserved = 0;
        dataFormat.mFormatFlags =
        kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        
    }
    return self;
}
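
As a sanity check on these numbers: one channel at 16 bits per channel is 2 bytes per frame, and since linear PCM packs exactly one frame into each packet, mBytesPerPacket must also be 2. These fields have to stay consistent with one another or the queue will hand you garbage.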

c. Start Record

- (void)start{
    
    NSLog(@"Audio start");
    //Set up the input (recording) queue
    AudioQueueNewInput(&dataFormat,
                       AudioInputCallback,
                       (__bridge void *)(self),
                       CFRunLoopGetCurrent(),
                       kCFRunLoopCommonModes,
                       0,
                       &_queue);
    //Allocate the buffers and enqueue them
    for(int i=0;i<NUM_BUFFERS;i++){
        AudioQueueAllocateBuffer(_queue, (UInt32)(dataFormat.mSampleRate/10.0f)*dataFormat.mBytesPerFrame, &buffers[i]);
        AudioQueueEnqueueBuffer(_queue, buffers[i], 0, NULL);
    }
    //Enable level metering so the input level can be queried later
    UInt32 enabledLevelMeter = true;
    AudioQueueSetProperty(_queue, kAudioQueueProperty_EnableLevelMetering, &enabledLevelMeter, sizeof(UInt32));
    
    //Path of the file the recording will be written to
    NSString *fileName = [NSString stringWithFormat:@"%@/Desktop/record.wav",NSHomeDirectory()];
    NSLog(@"fileName = %@",fileName);
    
    self.currentPacket = 0;
    //Create the output file (this must be a file URL, not a plain string URL)
    AudioFileCreateWithURL((__bridge CFURLRef)([NSURL fileURLWithPath:fileName]),
                           kAudioFileWAVEType,
                           &dataFormat,
                           kAudioFileFlags_EraseFile,
                           &_audioFile);
    //Start recording
    AudioQueueStart(_queue, NULL);
    
}
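
Each buffer here holds roughly 0.1 seconds of audio: mSampleRate / 10 = 4410 frames, times mBytesPerFrame = 2 bytes, gives 8820 bytes per buffer. Note also that AudioFileCreateWithURL needs a proper file URL, which is why the code builds it with fileURLWithPath: rather than URLWithString: on a bare path.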

d. Stop Record

- (void)stop{
    
    NSLog(@"Audio stop");
    
    //Pause first, then flush and stop (see the note below)
    AudioQueuePause(_queue);
    AudioQueueFlush(_queue);
    AudioQueueStop(_queue, NO);
    
    //Free the buffers
    for(int i=0;i<NUM_BUFFERS;i++){
        AudioQueueFreeBuffer(_queue, buffers[i]);
    }

    //Dispose of the AudioQueue
    AudioQueueDispose(_queue, YES);
    
    //Close the audio file
    AudioFileClose(_audioFile);
}

In the stop method above you have to pause before stopping; otherwise AudioQueue raises an error.
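
For reference: AudioQueueStop with inImmediate set to NO stops asynchronously, after the buffers that are already queued have been processed, and AudioQueueFlush asks the queue to process any data still sitting in its buffers first, so the tail of the recording isn't dropped.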

AudioQueue actually has plenty more details, but the big picture for recording works like this:

The audio queue holds a number of buffers, so what is in the queue is simply the audio buffers. After sound is picked up by the device, it first goes into the frontmost buffer. That buffer is then taken out of the queue, and at that moment a callback fires, namely AudioInputCallback; this is where we can process that little chunk of audio and write it to disk. Finally there is an enqueue step that puts the buffer we just took out back into the audio queue. This guarantees the number of buffers in the queue stays the same as originally configured, so we never end up in a situation where no buffer is available.
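
One last piece the header declares but the post never shows is defaultAudioManager. Here is a minimal sketch, assuming the usual dispatch_once singleton pattern (my assumption, not from the original code), followed by how the recorder would be driven:

+ (L17AudioManager *)defaultAudioManager{
    // Hypothetical implementation: a standard dispatch_once singleton
    static L17AudioManager *sharedInstance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedInstance = [[L17AudioManager alloc] init];
    });
    return sharedInstance;
}

With that in place the recorder only needs two calls, e.g. from a pair of buttons:

[[L17AudioManager defaultAudioManager] start];
// ...record for a while...
[[L17AudioManager defaultAudioManager] stop];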

