Overview
Continuing from the previous chapter: [VI] Android MediaPlayer Architecture Source Code Analysis - [start playback request flow] [Part 4] [05]
This series analyzes the Android 10.0 source code.
[Section numbering in this chapter continues from the previous one.]
8.3.2.4.2. useGraphicBuffer_l(portIndex, omxBuffer.mGraphicBuffer, buffer) implementation:
Uses a graphic buffer.
The English comment on the method below is explicit about its purpose: it exists only for backwards compatibility. Once the OMX implementations have been updated, it can be removed and useGraphicBuffer2 renamed to useGraphicBuffer.
// [frameworks/av/media/libstagefright/omx/OMXNodeInstance.cpp]
// XXX: This function is here for backwards compatibility. Once the OMX
// implementations have been updated this can be removed and useGraphicBuffer2
// can be renamed to useGraphicBuffer.
status_t OMXNodeInstance::useGraphicBuffer_l(
OMX_U32 portIndex, const sp<GraphicBuffer>& graphicBuffer,
IOMX::buffer_id *buffer) {
if (graphicBuffer == NULL || buffer == NULL) {
ALOGE("b/25884056");
return BAD_VALUE;
}
// First, see if we're in metadata mode. We could be running an experiment to simulate
// legacy behavior (preallocated buffers) on devices that supports meta.
if (mMetadataType[portIndex] != kMetadataBufferTypeInvalid) {
// In metadata mode, use the graphic buffer together with metadata.
// See the analysis below.
return useGraphicBufferWithMetadata_l(
portIndex, graphicBuffer, buffer);
}
// Graphic buffer mode must have been enabled on this port.
if (!mGraphicBufferEnabled[portIndex]) {
// Report error if this is not in graphic buffer mode.
ALOGE("b/62948670");
android_errorWriteLog(0x534e4554, "62948670");
return INVALID_OPERATION;
}
// See if the newer version of the extension is present.
OMX_INDEXTYPE index;
// Check whether the underlying component supports the newer version of this extension, and get its parameter index.
// Note: none of the software codec components support this extension; it is generally supported by hardware codecs.
if (OMX_GetExtensionIndex(
mHandle,
const_cast<OMX_STRING>("OMX.google.android.index.useAndroidNativeBuffer2"),
&index) == OMX_ErrorNone) {
// The extension is supported:
// use the graphicBuffer.
// See the analysis below.
return useGraphicBuffer2_l(portIndex, graphicBuffer, buffer);
}
// The older extension below is likewise unsupported by software codecs; hardware codecs may support it.
OMX_STRING name = const_cast<OMX_STRING>(
"OMX.google.android.index.useAndroidNativeBuffer");
OMX_ERRORTYPE err = OMX_GetExtensionIndex(mHandle, name, &index);
if (err != OMX_ErrorNone) {
CLOG_ERROR(getExtensionIndex, err, "%s", name);
return StatusFromOMXError(err);
}
// Handled the same way as above.
BufferMeta *bufferMeta = new BufferMeta(graphicBuffer, portIndex);
OMX_BUFFERHEADERTYPE *header;
OMX_VERSIONTYPE ver;
ver.s.nVersionMajor = 1;
ver.s.nVersionMinor = 0;
ver.s.nRevision = 0;
ver.s.nStep = 0;
// Create the parameter structure for this extension.
UseAndroidNativeBufferParams params = {
sizeof(UseAndroidNativeBufferParams), ver, portIndex, bufferMeta,
&header, graphicBuffer,
};
// Push the extension parameter structure to the underlying component.
err = OMX_SetParameter(mHandle, index, &params);
if (err != OMX_ErrorNone) {
CLOG_ERROR(setParameter, err, "%s(%#x): %s:%u meta=%p GB=%p", name, index,
portString(portIndex), portIndex, bufferMeta, graphicBuffer->handle);
delete bufferMeta;
bufferMeta = NULL;
*buffer = 0;
return StatusFromOMXError(err);
}
CHECK_EQ(header->pAppPrivate, bufferMeta);
// As before, create a new buffer id and add it to the active buffer list.
*buffer = makeBufferID(header);
addActiveBuffer(portIndex, *buffer);
CLOG_BUFFER(useGraphicBuffer, NEW_BUFFER_FMT(
*buffer, portIndex, "GB=%p", graphicBuffer->handle));
return OK;
}
useGraphicBufferWithMetadata_l(portIndex, graphicBuffer, buffer) implementation:
In metadata mode, uses the graphic buffer together with metadata. This method only handles output-port buffers.
// [frameworks/av/media/libstagefright/omx/OMXNodeInstance.cpp]
status_t OMXNodeInstance::useGraphicBufferWithMetadata_l(
OMX_U32 portIndex, const sp<GraphicBuffer> &graphicBuffer,
IOMX::buffer_id *buffer) {
// Must be the output port index; otherwise fail.
if (portIndex != kPortIndexOutput) {
return BAD_VALUE;
}
// Must be one of these two metadata buffer types; otherwise fail.
if (mMetadataType[portIndex] != kMetadataBufferTypeGrallocSource &&
mMetadataType[portIndex] != kMetadataBufferTypeANWBuffer) {
return BAD_VALUE;
}
// Then use the buffer; see the earlier analysis.
status_t err = useBuffer_l(portIndex, NULL, NULL, buffer);
if (err != OK) {
return err;
}
// After useBuffer_l has successfully allocated the buffer header, look up the header structure pointer.
// See the analysis below.
OMX_BUFFERHEADERTYPE *header = findBufferHeader(*buffer, portIndex);
// Update the metadata stored in the graphic buffer.
// See the analysis below.
return updateGraphicBufferInMeta_l(portIndex, graphicBuffer, *buffer, header);
}
findBufferHeader(*buffer, portIndex) implementation:
After useBuffer_l has successfully allocated the buffer header, looks up the header structure pointer.
// [frameworks/av/media/libstagefright/omx/OMXNodeInstance.cpp]
OMX_BUFFERHEADERTYPE *OMXNodeInstance::findBufferHeader(
IOMX::buffer_id buffer, OMX_U32 portIndex) {
if (buffer == 0) {
return NULL;
}
// Buffer ID mutex
Mutex::Autolock autoLock(mBufferIDLock);
// Use the id-to-header map built earlier when the buffer and its id were created to find the id's index.
ssize_t index = mBufferIDToBufferHeader.indexOfKey(buffer);
if (index < 0) {
// The id does not exist.
CLOGW("findBufferHeader: buffer %u not found", buffer);
return NULL;
}
// Found: get the corresponding buffer header pointer.
OMX_BUFFERHEADERTYPE *header = mBufferIDToBufferHeader.valueAt(index);
// Cast the header's private data back to the BufferMeta object set by the upper layer.
BufferMeta *buffer_meta =
static_cast<BufferMeta *>(header->pAppPrivate);
// Check that the buffer's port index matches the requested port (normally it does); fail otherwise.
if (buffer_meta->getPortIndex() != portIndex) {
CLOGW("findBufferHeader: buffer %u found but with incorrect port index.", buffer);
android_errorWriteLog(0x534e4554, "28816827");
return NULL;
}
return header;
}
updateGraphicBufferInMeta_l(portIndex, graphicBuffer, *buffer, header) implementation:
Updates the metadata stored in the graphic buffer.
// [frameworks/av/media/libstagefright/omx/OMXNodeInstance.cpp]
status_t OMXNodeInstance::updateGraphicBufferInMeta_l(
OMX_U32 portIndex, const sp<GraphicBuffer>& graphicBuffer,
IOMX::buffer_id buffer, OMX_BUFFERHEADERTYPE *header) {
// No need to check |graphicBuffer| since NULL is valid for it as below.
if (header == NULL) {
ALOGE("b/25884056");
return BAD_VALUE;
}
// Only input and output port buffers are handled.
if (portIndex != kPortIndexInput && portIndex != kPortIndexOutput) {
return BAD_VALUE;
}
// Get the private BufferMeta data.
BufferMeta *bufferMeta = (BufferMeta *)(header->pAppPrivate);
// Get the component's buffer memory address from the header into an ABuffer; this method was analyzed in an earlier flow.
sp<ABuffer> data = bufferMeta->getBuffer(header, false /* limit */);
// Set the graphic buffer; this merely caches graphicBuffer.
bufferMeta->setGraphicBuffer(graphicBuffer);
// Metadata buffer type.
// As analyzed earlier, the code below checks the type and initializes the concrete metadata cached in data.
MetadataBufferType metaType = mMetadataType[portIndex];
if (metaType == kMetadataBufferTypeGrallocSource
&& data->capacity() >= sizeof(VideoGrallocMetadata)) {
VideoGrallocMetadata &metadata = *(VideoGrallocMetadata *)(data->data());
metadata.eType = kMetadataBufferTypeGrallocSource;
// Cache the graphicBuffer's data access handle pointer.
metadata.pHandle = graphicBuffer == NULL ? NULL : graphicBuffer->handle;
} else if (metaType == kMetadataBufferTypeANWBuffer
&& data->capacity() >= sizeof(VideoNativeMetadata)) {
VideoNativeMetadata &metadata = *(VideoNativeMetadata *)(data->data());
metadata.eType = kMetadataBufferTypeANWBuffer;
// Cache the graphicBuffer's native buffer pointer.
metadata.pBuffer = graphicBuffer == NULL ? NULL : graphicBuffer->getNativeBuffer();
// Initialize the fence file descriptor to -1, i.e. invalid.
metadata.nFenceFd = -1;
} else {
// Other metadata types are not supported by this method.
CLOG_ERROR(updateGraphicBufferInMeta, BAD_VALUE, "%s:%u, %#x bad type (%d) or size (%u)",
portString(portIndex), portIndex, buffer, mMetadataType[portIndex], header->nAllocLen);
return BAD_VALUE;
}
CLOG_BUFFER(updateGraphicBufferInMeta, "%s:%u, %#x := %p",
portString(portIndex), portIndex, buffer,
graphicBuffer == NULL ? NULL : graphicBuffer->handle);
return OK;
}
useGraphicBuffer2_l(portIndex, graphicBuffer, buffer) implementation:
Uses the graphicBuffer.
// [frameworks/av/media/libstagefright/omx/OMXNodeInstance.cpp]
status_t OMXNodeInstance::useGraphicBuffer2_l(
OMX_U32 portIndex, const sp<GraphicBuffer>& graphicBuffer,
IOMX::buffer_id *buffer) {
if (graphicBuffer == NULL || buffer == NULL) {
ALOGE("b/25884056");
return BAD_VALUE;
}
// Port definition information, initialized below.
// port definition
OMX_PARAM_PORTDEFINITIONTYPE def;
InitOMXParams(&def);
def.nPortIndex = portIndex;
// As analyzed earlier, returns the component's port configuration.
OMX_ERRORTYPE err = OMX_GetParameter(mHandle, OMX_IndexParamPortDefinition, &def);
if (err != OMX_ErrorNone) {
OMX_INDEXTYPE index = OMX_IndexParamPortDefinition;
CLOG_ERROR(getParameter, err, "%s(%#x): %s:%u",
asString(index), index, portString(portIndex), portIndex);
return UNKNOWN_ERROR;
}
// On success:
// create a BufferMeta that caches graphicBuffer.
BufferMeta *bufferMeta = new BufferMeta(graphicBuffer, portIndex);
OMX_BUFFERHEADERTYPE *header = NULL;
// Obtain and convert graphicBuffer's data access handle pointer.
OMX_U8* bufferHandle = const_cast<OMX_U8*>(
reinterpret_cast<const OMX_U8*>(graphicBuffer->handle));
// Ask the underlying component to use this buffer's data.
// As analyzed earlier, the component will create the header object and return it via this pointer.
err = OMX_UseBuffer(
mHandle,
&header,
portIndex,
bufferMeta,
def.nBufferSize,
bufferHandle);
if (err != OMX_ErrorNone) {
CLOG_ERROR(useBuffer, err, BUFFER_FMT(portIndex, "%u@%p", def.nBufferSize, bufferHandle));
delete bufferMeta;
bufferMeta = NULL;
*buffer = 0;
return StatusFromOMXError(err);
}
// Here we can see the component hands control of the buffer handle pointer directly to the pBuffer field.
CHECK_EQ(header->pBuffer, bufferHandle);
CHECK_EQ(header->pAppPrivate, bufferMeta);
// Generate a new buffer ID for the new header.
*buffer = makeBufferID(header);
// Add the header/ID ActiveBuffer mapping to the active buffer list.
addActiveBuffer(portIndex, *buffer);
CLOG_BUFFER(useGraphicBuffer2, NEW_BUFFER_FMT(
*buffer, portIndex, "%u@%p", def.nBufferSize, bufferHandle));
return OK;
}
8.3.3. allocateOutputBuffersFromNativeWindow() implementation:
Allocates output buffers from the Surface.
Hardware decoding typically takes this path.
As the comment below states, this method only handles the non-metadata mode (or the simulated legacy metadata behavior, which is transparent to ACodec).
// [frameworks/av/media/libstagefright/ACodec.cpp]
status_t ACodec::allocateOutputBuffersFromNativeWindow() {
// This method only handles the non-metadata mode (or simulating legacy
// mode with metadata, which is transparent to ACodec).
CHECK(!storingMetadataInDecodedBuffers());
// Get from the Surface the configured output buffer count, per-buffer size, and minimum number of undequeued buffers.
// See the earlier analysis in section 8.3.2.1.
OMX_U32 bufferCount, bufferSize, minUndequeuedBuffers;
status_t err = configureOutputBuffersFromNativeWindow(
&bufferCount, &bufferSize, &minUndequeuedBuffers, true /* preregister */);
if (err != 0)
return err;
// Cache the Surface's new minimum undequeued buffer count in a member field.
mNumUndequeuedBuffers = minUndequeuedBuffers;
// Allow the Surface's IGraphicBufferProducer (the graphic buffer producer side) to allocate buffers.
// In other words, decoded output will be placed directly into the Surface's graphic buffers.
// Note: this ultimately sets the mAllowAllocation flag on the BufferQueueCore object.
static_cast<Surface*>(mNativeWindow.get())
->getIGraphicBufferProducer()->allowAllocation(true);
// This log shows what the method does: allocate the given number of output buffers of the given size from the Surface.
ALOGV("[%s] Allocating %u buffers from a native window of size %u on "
"output port",
mComponentName.c_str(), bufferCount, bufferSize);
// Loop: dequeue the buffers and send them to the OMX component.
// Dequeue buffers and send them to OMX
for (OMX_U32 i = 0; i < bufferCount; i++) {
ANativeWindowBuffer *buf;
int fenceFd;
// Dequeue a buffer allocated by the Surface's graphic buffer producer, together with a fence fd
// (a Surface buffer synchronization mechanism: it signals when the buffer may be committed and when it is released).
// Note: the Surface itself will be analyzed in a later Surface chapter.
// Important NOTE:
// if, when this buffer is later queued back to the Surface, the Surface finds its native data handle pointer
// differs from the one allocated, the Surface will reject the buffer, decoding will abort, and playback will end.
err = mNativeWindow->dequeueBuffer(mNativeWindow.get(), &buf, &fenceFd);
if (err != 0) {
ALOGE("dequeueBuffer failed: %s (%d)", strerror(-err), -err);
break;
}
// from() simply casts the buffer to its subclass type GraphicBuffer,
// since buf was allocated in the Surface as a GraphicBuffer:
// static_cast<GraphicBuffer *>(buf)
sp<GraphicBuffer> graphicBuffer(GraphicBuffer::from(buf));
// Create and initialize the BufferInfo structure used by ACodec.
BufferInfo info;
// Mark the buffer as owned by us (ACodec).
info.mStatus = BufferInfo::OWNED_BY_US;
info.mFenceFd = fenceFd;
info.mIsReadFence = false;
info.mRenderInfo = NULL;
// Cache it.
info.mGraphicBuffer = graphicBuffer;
info.mNewGraphicBuffer = false;
// Dequeue counter value.
info.mDequeuedAt = mDequeueCounter;
// TODO: We shouln't need to create MediaCodecBuffer. In metadata mode
//       OMX doesn't use the shared memory buffer, but some code still
//       access info.mData. Create an ABuffer as a placeholder.
// Per the comment above: in metadata buffer mode OMX does not use the shared memory buffer,
// but some codecs still access it, so a placeholder ABuffer is created.
info.mData = new MediaCodecBuffer(mOutputFormat, new ABuffer(bufferSize));
info.mCodecData = info.mData;
// Add the buffer to the output port buffer list.
mBuffers[kPortIndexOutput].push(info);
IOMX::buffer_id bufferId;
// Ask the underlying component to use this buffer; see the earlier analysis.
err = mOMXNode->useBuffer(kPortIndexOutput, graphicBuffer, &bufferId);
if (err != 0) {
ALOGE("registering GraphicBuffer %u with OMX IL component failed: "
"%d", i, err);
break;
}
// Cache the buffer ID created by OMX.
mBuffers[kPortIndexOutput].editItemAt(i).mBufferID = bufferId;
ALOGV("[%s] Registered graphic buffer with ID %u (pointer = %p)",
mComponentName.c_str(),
bufferId, graphicBuffer.get());
}
// Start and end indices of buffers to cancel.
OMX_U32 cancelStart;
OMX_U32 cancelEnd;
if (err != OK) {
// On failure:
// If an error occurred while dequeuing we need to cancel any buffers
// that were dequeued. Also cancel all if we're in legacy metadata mode.
cancelStart = 0;
cancelEnd = mBuffers[kPortIndexOutput].size();
} else {
// On successful allocation:
// Return the required minimum undequeued buffers to the native window.
cancelStart = bufferCount - minUndequeuedBuffers;
// Total buffer count.
cancelEnd = bufferCount;
}
// In effect, return the extra Surface-allocated buffers.
// Note: the cancel only tells the Surface the buffer is returned; ACodec does not remove it internally.
for (OMX_U32 i = cancelStart; i < cancelEnd; i++) {
BufferInfo *info = &mBuffers[kPortIndexOutput].editItemAt(i);
if (info->mStatus == BufferInfo::OWNED_BY_US) {
// When the status is OWNED_BY_US,
// return the extra Surface-allocated buffer.
// See the analysis below.
status_t error = cancelBufferToNativeWindow(info);
if (err == 0) {
err = error;
}
}
}
// Reset to false once the allocation above has finished.
static_cast<Surface*>(mNativeWindow.get())
->getIGraphicBufferProducer()->allowAllocation(false);
return err;
}
cancelBufferToNativeWindow(info) implementation:
When the status is OWNED_BY_US, returns an extra Surface-allocated buffer.
// [frameworks/av/media/libstagefright/ACodec.cpp]
status_t ACodec::cancelBufferToNativeWindow(BufferInfo *info) {
CHECK_EQ((int)info->mStatus, (int)BufferInfo::OWNED_BY_US);
ALOGV("[%s] Calling cancelBuffer on buffer %u",
mComponentName.c_str(), info->mBufferID);
info->checkWriteFence("cancelBufferToNativeWindow");
// Ask the Surface to take back the extra allocated buffer.
// Note: as described earlier, if this buffer's data handle pointer is abnormal, the Surface's cancel will fail.
// What actually happens:
// in non-shared-buffer mode, the buffer is moved from BufferQueueCore's activeBuffers (in-use) list
// to its freeBuffers (unused) list, so ACodec can later use these freeBuffers to deliver newly decoded data to the Surface;
// in shared-buffer mode, the buffer is merely marked as unused.
int err = mNativeWindow->cancelBuffer(
mNativeWindow.get(), info->mGraphicBuffer.get(), info->mFenceFd);
// Reset to an invalid value after cancelling.
info->mFenceFd = -1;
// This log is printed on failure.
ALOGW_IF(err != 0, "[%s] can not return buffer %u to native window",
mComponentName.c_str(), info->mBufferID);
// change ownership even if cancelBuffer fails
info->mStatus = BufferInfo::OWNED_BY_NATIVE_WINDOW;
return err;
}
8.3.4. align(bufSize, alignment) implementation:
A template function for byte alignment: it rounds num up to the alignment size den.
The ~ operator inverts every bit.
// [frameworks/av/media/libstagefright/foundation/include/media/stagefright/foundation/AUtils.h]
/* == ceil(num / den) * den. T must be integer type, alignment must be positive power of 2 */
template<class T, class U>
inline static const T align(const T &num, const U &den) {
return (num + (T)(den - 1)) & (T)~(den - 1);
}
8.3.5. mBufferChannel->setInputBufferArray(array) implementation:
Sets the input buffer list.
// [frameworks/av/media/libstagefright/ACodecBufferChannel.cpp]
void ACodecBufferChannel::setInputBufferArray(const std::vector<BufferAndId> &array) {
if (hasCryptoOrDescrambler()) {
// With a crypto or descrambler present; this series only covers unencrypted media, so this branch is not our focus.
size_t totalSize = std::accumulate(
array.begin(), array.end(), 0u,
[alignment = MemoryDealer::getAllocationAlignment()]
(size_t sum, const BufferAndId& elem) {
return sum + align(elem.mBuffer->capacity(), alignment);
});
size_t maxSize = std::accumulate(
array.begin(), array.end(), 0u,
[alignment = MemoryDealer::getAllocationAlignment()]
(size_t max, const BufferAndId& elem) {
return std::max(max, align(elem.mBuffer->capacity(), alignment));
});
size_t destinationBufferSize = maxSize;
size_t heapSize = totalSize + destinationBufferSize;
if (heapSize > 0) {
mDealer = makeMemoryDealer(heapSize);
mDecryptDestination = mDealer->allocate(destinationBufferSize);
}
}
// Build the input buffer list.
std::vector<const BufferInfo> inputBuffers;
for (const BufferAndId &elem : array) {
sp<IMemory> sharedEncryptedBuffer;
if (hasCryptoOrDescrambler()) {
sharedEncryptedBuffer = mDealer->allocate(elem.mBuffer->capacity());
}
// For each element of the input array, append a new item to inputBuffers.
inputBuffers.emplace_back(elem.mBuffer, elem.mBufferId, sharedEncryptedBuffer);
}
// Atomically store a pointer to inputBuffers into the member input buffer list pointer.
std::atomic_store(
&mInputBuffers,
std::make_shared<const std::vector<const BufferInfo>>(inputBuffers));
}
8.3.6. mBufferChannel->setOutputBufferArray(array) implementation:
Sets the output buffer list.
The implementation below mirrors 8.3.5, so it is not analyzed again.
// [frameworks/av/media/libstagefright/ACodecBufferChannel.cpp]
void ACodecBufferChannel::setOutputBufferArray(const std::vector<BufferAndId> &array) {
std::vector<const BufferInfo> outputBuffers;
for (const BufferAndId &elem : array) {
outputBuffers.emplace_back(elem.mBuffer, elem.mBufferId, nullptr);
}
std::atomic_store(
&mOutputBuffers,
std::make_shared<const std::vector<const BufferInfo>>(outputBuffers));
}
8.3.7. mCodec->mCallback->onStartCompleted() implementation:
Invokes this method on MediaCodec's callback listener, notifying it that ACodec and its underlying component have finished starting.
// [frameworks/av/media/libstagefright/MediaCodec.cpp]
void CodecCallback::onStartCompleted() {
// As covered in the earlier initialization analysis, mNotify is a kWhatCodecNotify notification, received and handled by MediaCodec.
sp<AMessage> notify(mNotify->dup());
// Notify the sub-event type kWhatStartCompleted.
notify->setInt32("what", kWhatStartCompleted);
notify->post();
}
MediaCodec's handling of the kWhatCodecNotify sub-event kWhatStartCompleted:
void MediaCodec::onMessageReceived(const sp<AMessage> &msg) {
switch (msg->what()) {
case kWhatCodecNotify:
{
int32_t what;
CHECK(msg->findInt32("what", &what));
switch (what) {
case kWhatStartCompleted:
{
// Check that the current state is valid; otherwise ignore the event.
if (mState == RELEASING || mState == UNINITIALIZED) {
// In case a kWhatRelease message came in and replied,
// we log a warning and ignore.
ALOGW("start interrupted by release, current state %d", mState);
break;
}
CHECK_EQ(mState, STARTING);
if (mIsVideo) {
// For video, register the media resources used with the ResourceManagerService.
// ResourceManagerService was covered earlier, so it is not analyzed in detail here.
// getGraphicBufferSize() is only an estimate of the buffer memory used, computed from the buffer count, frame width/height, and pixel size.
addResource(
MediaResource::kGraphicMemory,
MediaResource::kUnspecifiedSubType,
getGraphicBufferSize());
}
// Set the codec to the started state.
// See the earlier analysis.
setState(STARTED);
// Finally, reply to NuPlayerDecoder's earlier start request for the codec.
// Control returns to err = mCodec->start() in NuPlayer::Decoder::onConfigure(),
// which returns the start result and continues with the section 9 flow below.
(new AMessage)->postReply(mReplyID);
break;
}
}
}
}
}
9. releaseAndResetMediaBuffers() implementation:
Releases and resets the media buffers.
On the first run, both mMediaBuffers and mInputBuffers are empty; mInputBuffers is only populated when the decoder fetches input data. The method mainly clears the various buffer lists, so it is not analyzed further; note that mInputBuffers itself is not cleared here.
// [frameworks/av/media/libmediaplayerservice/nuplayer/NuPlayerDecoder.cpp]
void NuPlayer::Decoder::releaseAndResetMediaBuffers() {
for (size_t i = 0; i < mMediaBuffers.size(); i++) {
if (mMediaBuffers[i] != NULL) {
mMediaBuffers[i]->release();
mMediaBuffers.editItemAt(i) = NULL;
}
}
mMediaBuffers.resize(mInputBuffers.size());
for (size_t i = 0; i < mMediaBuffers.size(); i++) {
mMediaBuffers.editItemAt(i) = NULL;
}
mInputBufferIsDequeued.clear();
mInputBufferIsDequeued.resize(mInputBuffers.size());
for (size_t i = 0; i < mInputBufferIsDequeued.size(); i++) {
mInputBufferIsDequeued.editItemAt(i) = false;
}
mPendingInputMessages.clear();
mDequeuedInputBuffers.clear();
mSkipRenderingUntilMediaTimeUs = -1;
}
This concludes the [Part 4] portion of this chapter.