Android Audio Code Analysis 15 - testPlaybackHeadPositionAfterFlush
In the previous post, the testPlaybackHeadPositionIncrease function played for a while and then read the playback head position.
Today we look at a more involved case: play, then stop, then flush. What will the position be if we read it at that point?
*****************************************Source*************************************************
//Test case 3: getPlaybackHeadPosition() is 0 after flush();
@LargeTest
public void testPlaybackHeadPositionAfterFlush() throws Exception {
    // constants for test
    final String TEST_NAME = "testPlaybackHeadPositionAfterFlush";
    final int TEST_SR = 22050;
    final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;
    final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
    final int TEST_MODE = AudioTrack.MODE_STREAM;
    final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;
    //-------- initialization --------------
    int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
    AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
            minBuffSize, TEST_MODE);
    byte data[] = new byte[minBuffSize/2];
    //-------- test --------------
    assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
    track.write(data, 0, data.length);
    track.write(data, 0, data.length);
    track.play();
    Thread.sleep(100);
    track.stop();
    track.flush();
    log(TEST_NAME, "position ="+ track.getPlaybackHeadPosition());
    assertTrue(TEST_NAME, track.getPlaybackHeadPosition() == 0);
    //-------- tear down --------------
    track.release();
}
**********************************************************************************************
Source path:
frameworks\base\media\tests\mediaframeworktest\src\com\android\mediaframeworktest\functional\MediaAudioTrackTest.java
#######################Walkthrough################################
//Test case 3: getPlaybackHeadPosition() is 0 after flush();
@LargeTest
public void testPlaybackHeadPositionAfterFlush() throws Exception {
    // constants for test
    final String TEST_NAME = "testPlaybackHeadPositionAfterFlush";
    final int TEST_SR = 22050;
    final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;
    final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
    final int TEST_MODE = AudioTrack.MODE_STREAM;
    final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;
    //-------- initialization --------------
    int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
    AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
            minBuffSize, TEST_MODE);
    byte data[] = new byte[minBuffSize/2];
    //-------- test --------------
    assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
    track.write(data, 0, data.length);
    track.write(data, 0, data.length);
    track.play();
    Thread.sleep(100);
    track.stop();
// ++++++++++++++++++++++++++++stop++++++++++++++++++++++++++++++++++++
    /**
     * Stops playing the audio data.
     * @throws IllegalStateException
     */
    public void stop() throws IllegalStateException {
        if (mState != STATE_INITIALIZED) {
            throw(new IllegalStateException("stop() called on uninitialized AudioTrack."));
        }
        // stop playing
        synchronized(mPlayStateLock) {
            native_stop();
// ++++++++++++++++++++++++++++++android_media_AudioTrack_stop++++++++++++++++++++++++++++++++++
static void android_media_AudioTrack_stop(JNIEnv *env, jobject thiz) {
    AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(
        thiz, javaAudioTrackFields.nativeTrackInJavaObj);
    if (lpTrack == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for stop()");
        return;
    }
    lpTrack->stop();
// +++++++++++++++++++++++++++++++AudioTrack::stop+++++++++++++++++++++++++++++++++
void AudioTrack::stop()
{
    // mAudioTrackThread is assigned in AudioTrack::set:
    // mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
    sp<AudioTrackThread> t = mAudioTrackThread;
    LOGV("stop %p", this);
    if (t != 0) {
        t->mLock.lock();
    }
    if (android_atomic_and(~1, &mActive) == 1) {
        mCblk->cv.signal();
        // mAudioTrack is assigned in AudioTrack::createTrack; it ultimately points to a TrackHandle object
        mAudioTrack->stop();
// ++++++++++++++++++++++++++++++AudioFlinger::TrackHandle::stop++++++++++++++++++++++++++++++++++
void AudioFlinger::TrackHandle::stop() {
    mTrack->stop();
// +++++++++++++++++++++++++++++AudioFlinger::PlaybackThread::Track::stop+++++++++++++++++++++++++++++++++++
void AudioFlinger::PlaybackThread::Track::stop()
{
    LOGV("stop(%d), calling thread %d", mName, IPCThreadState::self()->getCallingPid());
    // Assigned in Track's constructor; points to the PlaybackThread that created this Track
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        Mutex::Autolock _l(thread->mLock);
        int state = mState;
        if (mState > STOPPED) {
// ++++++++++++++++++++++++++++++++track_state++++++++++++++++++++++++++++++++
    enum track_state {
        IDLE,
        TERMINATED,
        STOPPED,
        RESUMING,
        ACTIVE,
        PAUSING,
        PAUSED
    };
// --------------------------------track_state--------------------------------
            mState = STOPPED;
            // If the track is not active (PAUSED and buffers full), flush buffers
            PlaybackThread *playbackThread = (PlaybackThread *)thread.get();
            if (playbackThread->mActiveTracks.indexOf(this) < 0) {
                reset();
// ++++++++++++++++++++++++++++++AudioFlinger::PlaybackThread::Track::reset++++++++++++++++++++++++++++++++++
void AudioFlinger::PlaybackThread::Track::reset()
{
    // Do not reset twice to avoid discarding data written just after a flush and before
    // the audioflinger thread detects the track is stopped.
    if (!mResetDone) {
        TrackBase::reset();
// ++++++++++++++++++++++++++++++AudioFlinger::ThreadBase::TrackBase::reset++++++++++++++++++++++++++++++++++
void AudioFlinger::ThreadBase::TrackBase::reset() {
    audio_track_cblk_t* cblk = this->cblk();
    cblk->user = 0;
    cblk->server = 0;
    cblk->userBase = 0;
    cblk->serverBase = 0;
    mFlags &= (uint32_t)(~SYSTEM_FLAGS_MASK);
    LOGV("TrackBase::reset");
}
// ------------------------------AudioFlinger::ThreadBase::TrackBase::reset----------------------------------
        // Force underrun condition to avoid false underrun callback until first data is
        // written to buffer
        mCblk->flags |= CBLK_UNDERRUN_ON;
        mCblk->flags &= ~CBLK_FORCEREADY_MSK;
        mFillingUpStatus = FS_FILLING;
        mResetDone = true;
    }
}
// ------------------------------AudioFlinger::PlaybackThread::Track::reset----------------------------------
            }
            LOGV("(> STOPPED) => STOPPED (%d) on thread %p", mName, playbackThread);
        }
        if (!isOutputTrack() && (state == ACTIVE || state == RESUMING)) {
            thread->mLock.unlock();
            AudioSystem::stopOutput(thread->id(), (AudioSystem::stream_type)mStreamType, mSessionId);
// ++++++++++++++++++++++++++++++AudioSystem::stopOutput++++++++++++++++++++++++++++++++++
status_t AudioSystem::stopOutput(audio_io_handle_t output,
                                 AudioSystem::stream_type stream,
                                 int session)
{
    const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
    if (aps == 0) return PERMISSION_DENIED;
    return aps->stopOutput(output, stream, session);
// +++++++++++++++++++++++++++++++++AudioPolicyService::stopOutput+++++++++++++++++++++++++++++++
status_t AudioPolicyService::stopOutput(audio_io_handle_t output,
                                        AudioSystem::stream_type stream,
                                        int session)
{
    if (mpPolicyManager == NULL) {
        return NO_INIT;
    }
    LOGV("stopOutput() tid %d", gettid());
    Mutex::Autolock _l(mLock);
    return mpPolicyManager->stopOutput(output, stream, session);
// ++++++++++++++++++++++++++++++AudioPolicyManagerBase::stopOutput++++++++++++++++++++++++++++++++++
status_t AudioPolicyManagerBase::stopOutput(audio_io_handle_t output,
                                            AudioSystem::stream_type stream,
                                            int session)
{
    LOGV("stopOutput() output %d, stream %d, session %d", output, stream, session);
    ssize_t index = mOutputs.indexOfKey(output);
    if (index < 0) {
        LOGW("stopOutput() unknow output %d", output);
        return BAD_VALUE;
    }
    AudioOutputDescriptor *outputDesc = mOutputs.valueAt(index);
    routing_strategy strategy = getStrategy((AudioSystem::stream_type)stream);

    // handle special case for sonification while in call
    if (isInCall()) {
        handleIncallSonification(stream, false, false);
    }

    if (outputDesc->mRefCount[stream] > 0) {
        // decrement usage count of this stream on the output
        outputDesc->changeRefCount(stream, -1);
// ++++++++++++++++++++++++++++AudioPolicyManagerBase::AudioOutputDescriptor::changeRefCount++++++++++++++++++++++++++++++++++++
void AudioPolicyManagerBase::AudioOutputDescriptor::changeRefCount(AudioSystem::stream_type stream, int delta)
{
    // forward usage count change to attached outputs
    if (isDuplicated()) {
        mOutput1->changeRefCount(stream, delta);
        mOutput2->changeRefCount(stream, delta);
    }
    if ((delta + (int)mRefCount[stream]) < 0) {
        LOGW("changeRefCount() invalid delta %d for stream %d, refCount %d", delta, stream, mRefCount[stream]);
        mRefCount[stream] = 0;
        return;
    }
    // mRefCount is read by AudioPolicyManagerBase::AudioOutputDescriptor::refCount
    mRefCount[stream] += delta;
// +++++++++++++++++++++++++++++AudioPolicyManagerBase::AudioOutputDescriptor::refCount+++++++++++++++++++++++++++++++++++
uint32_t AudioPolicyManagerBase::AudioOutputDescriptor::refCount()
{
    uint32_t refcount = 0;
    for (int i = 0; i < (int)AudioSystem::NUM_STREAM_TYPES; i++) {
        refcount += mRefCount[i];
    }
    return refcount;
}
// AudioPolicyManagerBase::releaseOutput is one caller of AudioPolicyManagerBase::AudioOutputDescriptor::refCount.
// +++++++++++++++++++++++++++++AudioPolicyManagerBase::releaseOutput+++++++++++++++++++++++++++++++++++
void AudioPolicyManagerBase::releaseOutput(audio_io_handle_t output)
{
    LOGV("releaseOutput() %d", output);
    ssize_t index = mOutputs.indexOfKey(output);
    if (index < 0) {
        LOGW("releaseOutput() releasing unknown output %d", output);
        return;
    }

#ifdef AUDIO_POLICY_TEST
    int testIndex = testOutputIndex(output);
    if (testIndex != 0) {
        AudioOutputDescriptor *outputDesc = mOutputs.valueAt(index);
        if (outputDesc->refCount() == 0) {
            mpClientInterface->closeOutput(output);
            delete mOutputs.valueAt(index);
            mOutputs.removeItem(output);
            mTestOutputs[testIndex] = 0;
        }
        return;
    }
#endif //AUDIO_POLICY_TEST

    // When AUDIO_POLICY_TEST is not defined, only outputs opened with the OUTPUT_FLAG_DIRECT flag are deleted here.
    // The reason was already visible when we looked at AudioPolicyManagerBase::getOutput:
    // AudioPolicyService::openOutput is called to open a new output only when a direct output is requested;
    // otherwise the member output (opened in the constructor) is simply returned.
    // (The AUDIO_POLICY_TEST case is not considered here.)
    if (mOutputs.valueAt(index)->mFlags & AudioSystem::OUTPUT_FLAG_DIRECT) {
// ++++++++++++++++++++++++++++++output_flags++++++++++++++++++++++++++++++++++
    // request to open a direct output with getOutput() (by opposition to
    // sharing an output with other AudioTracks)
    enum output_flags {
        OUTPUT_FLAG_INDIRECT = 0x0,
        OUTPUT_FLAG_DIRECT   = 0x1
    };
// ------------------------------output_flags----------------------------------
        mpClientInterface->closeOutput(output);
// +++++++++++++++++++++++++++++AudioPolicyService::closeOutput+++++++++++++++++++++++++++++++++++
status_t AudioPolicyService::closeOutput(audio_io_handle_t output)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) return PERMISSION_DENIED;
    return af->closeOutput(output);
// +++++++++++++++++++++++++++AudioFlinger::closeOutput++++++++++++++++++++++++++++++++++++
status_t AudioFlinger::closeOutput(int output)
{
    // keep strong reference on the playback thread so that
    // it is not destroyed while exit() is executed
    sp <PlaybackThread> thread;
    {
        Mutex::Autolock _l(mLock);
        thread = checkPlaybackThread_l(output);
        if (thread == NULL) {
            return BAD_VALUE;
        }

        LOGV("closeOutput() %d", output);

        // If the thread is a mixer, remove it from every DUPLICATING thread that contains it
        if (thread->type() == PlaybackThread::MIXER) {
            for (size_t i = 0; i < mPlaybackThreads.size(); i++) {
                if (mPlaybackThreads.valueAt(i)->type() == PlaybackThread::DUPLICATING) {
                    DuplicatingThread *dupThread = (DuplicatingThread *)mPlaybackThreads.valueAt(i).get();
                    dupThread->removeOutputTrack((MixerThread *)thread.get());
                }
            }
        }
        void *param2 = 0;
        audioConfigChanged_l(AudioSystem::OUTPUT_CLOSED, output, param2);
        mPlaybackThreads.removeItem(output);
    }
    thread->exit();
// ++++++++++++++++++++++++++++AudioFlinger::ThreadBase::exit++++++++++++++++++++++++++++++++++++
void AudioFlinger::ThreadBase::exit()
{
    // keep a strong ref on ourself so that we wont get
    // destroyed in the middle of requestExitAndWait()
    sp <ThreadBase> strongMe = this;

    LOGV("ThreadBase::exit");
    {
        AutoMutex lock(&mLock);
        mExiting = true;
        requestExit();
// +++++++++++++++++++++++++++++Thread::requestExit+++++++++++++++++++++++++++++++++++
void Thread::requestExit()
{
    // The threadLoop checks this member to decide whether the thread should exit
    mExitPending = true;
}
// -----------------------------Thread::requestExit----------------------------------
        mWaitWorkCV.signal();
    }
    requestExitAndWait();
// +++++++++++++++++++++++++++++Thread::requestExitAndWait+++++++++++++++++++++++++++++++++++
status_t Thread::requestExitAndWait()
{
    if (mThread == getThreadId()) {
        LOGW(
        "Thread (this=%p): don't call waitForExit() from this "
        "Thread object's thread. It's a guaranteed deadlock!",
        this);
        return WOULD_BLOCK;
    }

    requestExit();

    Mutex::Autolock _l(mLock);
    while (mRunning == true) {
        mThreadExitedCondition.wait(mLock);
    }
    mExitPending = false;

    return mStatus;
}
// -----------------------------Thread::requestExitAndWait-----------------------------------
}
// ----------------------------AudioFlinger::ThreadBase::exit------------------------------------
    if (thread->type() != PlaybackThread::DUPLICATING) {
        // The output was opened in AudioFlinger::openOutput via mAudioHardware->openOutputStream.
        mAudioHardware->closeOutputStream(thread->getOutput());
// +++++++++++++++++++++++++++AudioHardwareALSA::closeOutputStream+++++++++++++++++++++++++++++++++++++
void AudioHardwareALSA::closeOutputStream(AudioStreamOut* out)
{
    AutoMutex lock(mLock);
    delete out;
}
// ---------------------------AudioHardwareALSA::closeOutputStream-------------------------------------
    }
    return NO_ERROR;
}
// ----------------------------AudioFlinger::closeOutput------------------------------------
}
// -----------------------------AudioPolicyService::closeOutput-----------------------------------
        delete mOutputs.valueAt(index);
        mOutputs.removeItem(output);
    }
}
// -----------------------------AudioPolicyManagerBase::releaseOutput-----------------------------------
// -----------------------------AudioPolicyManagerBase::AudioOutputDescriptor::refCount-----------------------------------
    LOGV("changeRefCount() stream %d, count %d", stream, mRefCount[stream]);
}
// ----------------------------AudioPolicyManagerBase::AudioOutputDescriptor::changeRefCount------------------------------------
        // store time at which the last music track was stopped - see computeVolume()
        if (stream == AudioSystem::MUSIC) {
            mMusicStopTime = systemTime();
        }
        setOutputDevice(output, getNewDevice(output));

#ifdef WITH_A2DP
        if (mA2dpOutput != 0 && !a2dpUsedForSonification() &&
            strategy == STRATEGY_SONIFICATION) {
            setStrategyMute(STRATEGY_MEDIA, false, mA2dpOutput,
                            mOutputs.valueFor(mHardwareOutput)->mLatency*2);
        }
#endif
        if (output != mHardwareOutput) {
            setOutputDevice(mHardwareOutput, getNewDevice(mHardwareOutput), true);
        }
        return NO_ERROR;
    } else {
        LOGW("stopOutput() refcount is already 0 for output %d", output);
        return INVALID_OPERATION;
    }
}
// ------------------------------AudioPolicyManagerBase::stopOutput----------------------------------
}
// ---------------------------------AudioPolicyService::stopOutput-------------------------------
}
// ------------------------------AudioSystem::stopOutput----------------------------------
            thread->mLock.lock();
        }
    }
}
// -----------------------------AudioFlinger::PlaybackThread::Track::stop-----------------------------------
}
// ------------------------------AudioFlinger::TrackHandle::stop----------------------------------
        // Cancel loops (If we are in the middle of a loop, playback
        // would not stop until loopCount reaches 0).
        setLoop(0, 0, 0);
// +++++++++++++++++++++++++++++AudioTrack::setLoop+++++++++++++++++++++++++++++++++++
status_t AudioTrack::setLoop(uint32_t loopStart, uint32_t loopEnd, int loopCount)
{
    audio_track_cblk_t* cblk = mCblk;

    Mutex::Autolock _l(cblk->lock);

    if (loopCount == 0) {
        cblk->loopStart = ULLONG_MAX;
        cblk->loopEnd = ULLONG_MAX;
        cblk->loopCount = 0;
        mLoopCount = 0;
        return NO_ERROR;
    }

    if (loopStart >= loopEnd ||
        loopEnd - loopStart > cblk->frameCount) {
        LOGE("setLoop invalid value: loopStart %d, loopEnd %d, loopCount %d, framecount %d, user %lld",
            loopStart, loopEnd, loopCount, cblk->frameCount, cblk->user);
        return BAD_VALUE;
    }

    if ((mSharedBuffer != 0) && (loopEnd > cblk->frameCount)) {
        LOGE("setLoop invalid value: loop markers beyond data: loopStart %d, loopEnd %d, framecount %d",
            loopStart, loopEnd, cblk->frameCount);
        return BAD_VALUE;
    }

    cblk->loopStart = loopStart;
    cblk->loopEnd = loopEnd;
    cblk->loopCount = loopCount;
    mLoopCount = loopCount;

    return NO_ERROR;
}
// -----------------------------AudioTrack::setLoop-----------------------------------
        // the playback head position will reset to 0, so if a marker is set, we need
        // to activate it again
        mMarkerReached = false;

        // Force flush if a shared buffer is used otherwise audioflinger
        // will not stop before end of buffer is reached.
        if (mSharedBuffer != 0) {
            flush();
// ++++++++++++++++++++++++++++AudioTrack::flush++++++++++++++++++++++++++++++++++++
void AudioTrack::flush()
{
    LOGV("flush");

    // clear playback marker and periodic update counter
    mMarkerPosition = 0;
    mMarkerReached = false;
    mUpdatePeriod = 0;

    if (!mActive) {
        mAudioTrack->flush();
// ++++++++++++++++++++++++++++++AudioFlinger::TrackHandle::flush++++++++++++++++++++++++++++++++++
void AudioFlinger::TrackHandle::flush() {
    mTrack->flush();
// ++++++++++++++++++++++++++++++AudioFlinger::PlaybackThread::Track::flush++++++++++++++++++++++++++++++++++
void AudioFlinger::PlaybackThread::Track::flush()
{
    LOGV("flush(%d)", mName);
    sp<ThreadBase> thread = mThread.promote();
    if (thread != 0) {
        Mutex::Autolock _l(thread->mLock);
        if (mState != STOPPED && mState != PAUSED && mState != PAUSING) {
            return;
        }
        // No point remaining in PAUSED state after a flush => go to
        // STOPPED state
        mState = STOPPED;

        mCblk->lock.lock();
        // NOTE: reset() will reset cblk->user and cblk->server with
        // the risk that at the same time, the AudioMixer is trying to read
        // data. In this case, getNextBuffer() would return a NULL pointer
        // as audio buffer => the AudioMixer code MUST always test that pointer
        // returned by getNextBuffer() is not NULL!
        // reset() was already examined above in the stop() path
        reset();
        mCblk->lock.unlock();
    }
}
// ------------------------------AudioFlinger::PlaybackThread::Track::flush----------------------------------
}
// ------------------------------AudioFlinger::TrackHandle::flush----------------------------------
        // Release AudioTrack callback thread in case it was waiting for new buffers
        // in AudioTrack::obtainBuffer()
        mCblk->cv.signal();
    }
}
// ----------------------------AudioTrack::flush------------------------------------
        }
        if (t != 0) {
            t->requestExit();
        } else {
            setpriority(PRIO_PROCESS, 0, ANDROID_PRIORITY_NORMAL);
        }
    }
    if (t != 0) {
        t->mLock.unlock();
    }
}
// -------------------------------AudioTrack::stop---------------------------------
}
// ------------------------------android_media_AudioTrack_stop----------------------------------
            mPlayState = PLAYSTATE_STOPPED;
        }
    }
// ----------------------------stop------------------------------------
    track.flush();
// ++++++++++++++++++++++++++++flush++++++++++++++++++++++++++++++++++++
    /**
     * Flushes the audio data currently queued for playback.
     */
    public void flush() {
        if (mState == STATE_INITIALIZED) {
            // flush the data in native layer
            native_flush();
// ++++++++++++++++++++++++++++android_media_AudioTrack_flush++++++++++++++++++++++++++++++++++++
static void android_media_AudioTrack_flush(JNIEnv *env, jobject thiz) {
    AudioTrack *lpTrack = (AudioTrack *)env->GetIntField(
        thiz, javaAudioTrackFields.nativeTrackInJavaObj);
    if (lpTrack == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException",
            "Unable to retrieve AudioTrack pointer for flush()");
        return;
    }
    lpTrack->flush();
// +++++++++++++++++++++++++++++AudioTrack::flush+++++++++++++++++++++++++++++++++++
void AudioTrack::flush()
{
    LOGV("flush");

    // clear playback marker and periodic update counter
    mMarkerPosition = 0;
    mMarkerReached = false;
    mUpdatePeriod = 0;

    if (!mActive) {
        mAudioTrack->flush();
// ++++++++++++++++++++++++++++AudioFlinger::TrackHandle::flush++++++++++++++++++++++++++++++++++++
void AudioFlinger::TrackHandle::flush() {
    // AudioFlinger::PlaybackThread::Track::flush was already examined above
    mTrack->flush();
}
// ----------------------------AudioFlinger::TrackHandle::flush------------------------------------
        // Release AudioTrack callback thread in case it was waiting for new buffers
        // in AudioTrack::obtainBuffer()
        mCblk->cv.signal();
    }
}
// -----------------------------AudioTrack::flush-----------------------------------
}
// ----------------------------android_media_AudioTrack_flush------------------------------------
        }
    }
// ----------------------------flush------------------------------------
    log(TEST_NAME, "position ="+ track.getPlaybackHeadPosition());
    assertTrue(TEST_NAME, track.getPlaybackHeadPosition() == 0);
    //-------- tear down --------------
    track.release();
}
###########################################################
&&&&&&&&&&&&&&&&&&&&&&&Summary&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
On stop():
If the AudioTrack is no longer active, the user, server and related counters in its audio_track_cblk_t are cleared to 0.
If it uses a direct output, that output is closed.
On flush():
The user, server and related counters in the audio_track_cblk_t are cleared to 0.
Therefore, reading the position after flush() is guaranteed to return 0.
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&