Android Tricks: Learning the NDK Through Google's Official Samples (4) — by 超帅背包. I collected this while developing and am sharing it here in the hope that it makes a useful reference.

Overview

If you have had enough of NDK tutorials that do nothing more than print 'Hello World' from C++, this series may be for you: it teaches the NDK through practical, working examples.

If you are interested, the first three parts are:

Android Tricks: Learning the NDK Through Google's Official Samples (1) — covers multithreading in the NDK, plus basic Java/C++ interop.

Android Tricks: Learning the NDK Through Google's Official Samples (2) — writes an Activity entirely in C++, with no Java code.

Android Tricks: Learning the NDK Through Google's Official Samples (3) — an OpenGL example.

This fourth example demonstrates video decoding, with some OpenGL on the side.

The code is here.

Its main feature is playing a video stored in the assets folder. Playback can be paused and resumed.

The example plays the video on two views: Android's native SurfaceView, and MyGLSurfaceView, implemented in C++. MyGLSurfaceView renders the video with a rotating-cube effect (the lower view in the screenshot). This article focuses on the playback pipeline itself.

One particularly valuable part of this example is a message queue implemented in C++ — in other words, a Looper. I learned a lot from reading it.

The project structure is as follows:

Let's start with the Activity, in NativeCodec.java:

public void onCreate(Bundle icicle) {
        super.onCreate(icicle);
        setContentView(R.layout.main);

        mGLView1 = (MyGLSurfaceView) findViewById(R.id.glsurfaceview1);

        // set up the Surface 1 video sink
        mSurfaceView1 = (SurfaceView) findViewById(R.id.surfaceview1);
        mSurfaceHolder1 = mSurfaceView1.getHolder();

        mSurfaceHolder1.addCallback(new SurfaceHolder.Callback() {

            @Override
            public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
                Log.v(TAG, "surfaceChanged format=" + format + ", width=" + width + ", height="
                        + height);
            }

            @Override
            public void surfaceCreated(SurfaceHolder holder) {
                Log.v(TAG, "surfaceCreated");
                if (mRadio1.isChecked()) {
                    setSurface(holder.getSurface());
                }
            }

            @Override
            public void surfaceDestroyed(SurfaceHolder holder) {
                Log.v(TAG, "surfaceDestroyed");
            }

        });

        // initialize content source spinner
        Spinner sourceSpinner = (Spinner) findViewById(R.id.source_spinner);
        ArrayAdapter<CharSequence> sourceAdapter = ArrayAdapter.createFromResource(
                this, R.array.source_array, android.R.layout.simple_spinner_item);
        sourceAdapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
        sourceSpinner.setAdapter(sourceAdapter);
        sourceSpinner.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {

            @Override
            public void onItemSelected(AdapterView<?> parent, View view, int pos, long id) {
                mSourceString = parent.getItemAtPosition(pos).toString();
                Log.v(TAG, "onItemSelected " + mSourceString);
            }

            @Override
            public void onNothingSelected(AdapterView parent) {
                Log.v(TAG, "onNothingSelected");
                mSourceString = null;
            }

        });

        mRadio1 = (RadioButton) findViewById(R.id.radio1);
        mRadio2 = (RadioButton) findViewById(R.id.radio2);

        OnCheckedChangeListener checklistener = new CompoundButton.OnCheckedChangeListener() {

          @Override
          public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
              ···
              if (isChecked) {
                  if (mRadio1.isChecked()) {
                      if (mSurfaceHolder1VideoSink == null) {
                          mSurfaceHolder1VideoSink = new SurfaceHolderVideoSink(mSurfaceHolder1);
                      }
                      mSelectedVideoSink = mSurfaceHolder1VideoSink;
                      mGLView1.onPause();
                      Log.i("@@@@", "glview pause");
                  } else {
                      mGLView1.onResume();
                      if (mGLView1VideoSink == null) {
                          mGLView1VideoSink = new GLViewVideoSink(mGLView1);
                      }
                      mSelectedVideoSink = mGLView1VideoSink;
                  }
                  switchSurface();
              }
          }
        };
        ···
        // native MediaPlayer start/pause
        ((Button) findViewById(R.id.start_native)).setOnClickListener(new View.OnClickListener() {

            @Override
            public void onClick(View view) {
                if (!mCreated) {
                    if (mNativeCodecPlayerVideoSink == null) {
                        if (mSelectedVideoSink == null) {
                            return;
                        }
                        mSelectedVideoSink.useAsSinkForNative();
                        mNativeCodecPlayerVideoSink = mSelectedVideoSink;
                    }
                    if (mSourceString != null) {
                        mCreated = createStreamingMediaPlayer(getResources().getAssets(),
                                mSourceString);
                    }
                }
                if (mCreated) {
                    mIsPlaying = !mIsPlaying;
                    setPlayingStreamingMediaPlayer(mIsPlaying);
                }
            }

        });


        // native MediaPlayer rewind
        ((Button) findViewById(R.id.rewind_native)).setOnClickListener(new View.OnClickListener() {

            @Override
            public void onClick(View view) {
                if (mNativeCodecPlayerVideoSink != null) {
                    rewindStreamingMediaPlayer();
                }
            }

        });
    }

This is mostly event handling for the UI controls: pause/resume, loading the file, switching the target SurfaceView, rewinding, and so on.

Pause (resume) playback: setPlayingStreamingMediaPlayer(mIsPlaying);

Load the file: mCreated = createStreamingMediaPlayer(getResources().getAssets(), mSourceString);

Switch the target SurfaceView: the active sink is kept in mNativeCodecPlayerVideoSink; useAsSinkForNative() calls setSurface(s) to hand the surface down to the C++ code.

Rewind: rewindStreamingMediaPlayer()

Shut down: shutdown()

So there are five JNI methods in total:

    public static native boolean createStreamingMediaPlayer(AssetManager assetMgr, String filename);
    public static native void setPlayingStreamingMediaPlayer(boolean isPlaying);
    public static native void shutdown();
    public static native void setSurface(Surface surface);
    public static native void rewindStreamingMediaPlayer();

All five are implemented in native-codec-jni.cpp. Let's look at the file-loading method first:

typedef struct {
    int fd;
    ANativeWindow* window;
    AMediaExtractor* ex;
    AMediaCodec *codec;
    int64_t renderstart;
    bool sawInputEOS;
    bool sawOutputEOS;
    bool isPlaying;
    bool renderonce;
} workerdata;

workerdata data = {-1, NULL, NULL, NULL, 0, false, false, false, false};

jboolean Java_com_example_nativecodec_NativeCodec_createStreamingMediaPlayer(JNIEnv* env,
        jclass clazz, jobject assetMgr, jstring filename)
{
    LOGV("@@@ create");

    // convert Java string to UTF-8
    const char *utf8 = env->GetStringUTFChars(filename, NULL);
    LOGV("opening %s", utf8);

    off_t outStart, outLen;
    int fd = AAsset_openFileDescriptor(AAssetManager_open(AAssetManager_fromJava(env, assetMgr), utf8, 0),
                                       &outStart, &outLen); // open the asset file

    if (fd < 0) {
        LOGE("failed to open file: %s %d (%s)", utf8, fd, strerror(errno));
        env->ReleaseStringUTFChars(filename, utf8);
        return JNI_FALSE;
    }
    // release utf8 only after the error path above, which still logs it
    env->ReleaseStringUTFChars(filename, utf8);

    data.fd = fd;

    workerdata *d = &data;

    AMediaExtractor *ex = AMediaExtractor_new(); // the extractor locates tracks in the media file and feeds samples into MediaCodec's input buffers
    media_status_t err = AMediaExtractor_setDataSourceFd(ex, d->fd,
                                                         static_cast<off64_t>(outStart),
                                                         static_cast<off64_t>(outLen));
    close(d->fd);
    if (err != AMEDIA_OK) {
        LOGV("setDataSource error: %d", err);
        return JNI_FALSE;
    }

    int numtracks = AMediaExtractor_getTrackCount(ex); // number of tracks

    AMediaCodec *codec = NULL; // handles decoding of the media

    //log:input has 2 tracks
    LOGV("input has %d tracks", numtracks);
    for (int i = 0; i < numtracks; i++) {
        AMediaFormat *format = AMediaExtractor_getTrackFormat(ex, i);
        const char *s = AMediaFormat_toString(format);
        //track 0 format: { mime: string(video/avc), durationUs: int64(10000000), width: int32(480), height: int32(360), max-input-size: int32(55147), csd-0: data, csd-1: data}
        //track 1 format: { mime: string(audio/mp4a-latm), durationUs: int64(9914920), channel-count: int32(2), sample-rate: int32(44100), aac-profile: int32(2), bit-width: int32(16), pcm-type: int32(1), max-input-size: int32(694), csd-0: data}
        LOGV("track %d format: %s", i, s);
        const char *mime;
        if (!AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime)) { // read the mime field from the format
            LOGV("no mime type");
            return JNI_FALSE;
        } else if (!strncmp(mime, "video/", 6)) { // this is the video track (strncmp compares 6 chars and returns 0 on a match)
            // Omitting most error handling for clarity.
            // Production code should check for errors.
            AMediaExtractor_selectTrack(ex, i); // select this track
            codec = AMediaCodec_createDecoderByType(mime); // create a decoder for this mime type
            // configure the decoder, binding it to the output window
            AMediaCodec_configure(codec, format, d->window, NULL, 0);
            d->ex = ex;
            d->codec = codec;
            d->renderstart = -1;
            d->sawInputEOS = false;
            d->sawOutputEOS = false;
            d->isPlaying = false;
            d->renderonce = true;
            // start the decoder
            AMediaCodec_start(codec);
        }
        AMediaFormat_delete(format);
    }

    mlooper = new mylooper();
    mlooper->post(kMsgCodecBuffer, d);

    return JNI_TRUE;
}

This method finds the video track, configures the decoder, and binds d->window to it. d->window is set beforehand via the setSurface method.
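The mime-based track selection above boils down to a prefix match. As a standalone sketch (selectVideoTrack is a hypothetical helper of mine, not part of the sample, and no NDK APIs are involved):

```cpp
#include <cassert>
#include <cstring>
#include <string>
#include <vector>

// Return the index of the first track whose mime type starts with
// "video/", or -1 if there is none — the same test the sample's loop
// applies to each AMediaFormat.
int selectVideoTrack(const std::vector<std::string>& mimes) {
    for (size_t i = 0; i < mimes.size(); i++) {
        // strncmp returns 0 when the first 6 characters match, so
        // !strncmp(...) is true for "video/avc", "video/hevc", etc.
        if (!strncmp(mimes[i].c_str(), "video/", 6)) {
            return static_cast<int>(i);
        }
    }
    return -1;
}
```

With the two tracks logged above ("video/avc" and "audio/mp4a-latm"), this picks track 0 when the video track comes first, exactly as the sample does.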

// set the surface
void Java_com_example_nativecodec_NativeCodec_setSurface(JNIEnv *env, jclass clazz, jobject surface)
{
    // obtain a native window from a Java surface
    if (data.window) {
        ANativeWindow_release(data.window);
        data.window = NULL;
    }
    data.window = ANativeWindow_fromSurface(env, surface);
    LOGV("@@@ setsurface %p", data.window);
}

workerdata *d = &data holds the state used during playback: d->ex = ex is the extractor with the video track selected, d->codec = codec the decoder, and d->renderstart = -1 the render start time (not yet anchored).

At the end of the method, mlooper = new mylooper(); mlooper->post(kMsgCodecBuffer, d); creates a looper and posts an event to it. In this example everything — pausing, resuming, shutting down, even rendering each frame — goes through mlooper->post(msg). So let's see how this looper is implemented.

The code is in looper.cpp; the key parts are annotated:

struct loopermessage;
typedef struct loopermessage loopermessage;

struct loopermessage {
    int what;
    void *obj;
    loopermessage *next;
    bool quit;
};



void* looper::trampoline(void* p) {
    ((looper*)p)->loop();
    return NULL;
}

looper::looper() {
    sem_init(&headdataavailable, 0, 0); // pshared=0: shared between threads of this process (not across processes); initial count 0
    sem_init(&headwriteprotect, 0, 1);  // initial count 1: acts as a mutex protecting the list head
    pthread_attr_t attr;
    pthread_attr_init(&attr); // default thread attributes
    // spawn a worker thread running trampoline(this), which simply calls loop()
    pthread_create(&worker, &attr, trampoline, this);
    running = true;
}


looper::~looper() {
    if (running) {
        LOGV("Looper deleted while still running. Some messages will not be processed");
        quit();
    }
}

// enqueue a message
void looper::post(int what, void *data, bool flush) {
    // assemble the message
    loopermessage *msg = new loopermessage();
    msg->what = what;
    msg->obj = data;
    msg->next = NULL;
    msg->quit = false;
    addmsg(msg, flush);
}

void looper::addmsg(loopermessage *msg, bool flush) {
    sem_wait(&headwriteprotect); // acquire the "write lock" before touching the list
    loopermessage *h = head;

    if (flush) {
        // if flush is requested, clear the queue first
        while(h) {
            loopermessage *next = h->next;
            delete h;
            h = next;
        }
        h = NULL;
    }
    if (h) {
        // the queue is non-empty:
        // walk to the tail
        while (h->next) {
            h = h->next;
        }
        // append the message at the tail
        h->next = msg;
    } else {
        // the queue is empty: the new message becomes the head
        head = msg;
    }
    LOGV("post msg %d", msg->what);
    sem_post(&headwriteprotect);  // release the write lock
    sem_post(&headdataavailable); // signal the worker that a message is available
}

void looper::loop() {
    while(true) {
        // wait for available message
        // block until headdataavailable's count is > 0, i.e. a message was posted;
        // sem_wait then decrements the count by 1
        sem_wait(&headdataavailable);

        // get next available message
        sem_wait(&headwriteprotect);
        loopermessage *msg = head; // a message arrived; take the first one
        if (msg == NULL) {
            LOGV("no msg");
            sem_post(&headwriteprotect); // release the write lock
            continue;
        }
        head = msg->next; // advance the head pointer to the next message
        sem_post(&headwriteprotect); // release the write lock

        if (msg->quit) { // a quit message ends the loop
            LOGV("quitting");
            delete msg;
            return;
        }
        LOGV("processing msg %d", msg->what);
        handle(msg->what, msg->obj); // dispatch the message
        delete msg;
    }
}

void looper::quit() {
    LOGV("quit");
    loopermessage *msg = new loopermessage();
    msg->what = 0;
    msg->obj = NULL;
    msg->next = NULL;
    msg->quit = true; // mark it as a quit message
    addmsg(msg, false);
    void *retval;
    pthread_join(worker, &retval);
    sem_destroy(&headdataavailable); // destroy the semaphores
    sem_destroy(&headwriteprotect);
    running = false;
}

void looper::handle(int what, void* obj) {
    LOGV("dropping msg %d %p", what, obj);
}

The overall flow: a dedicated thread watches the message queue. Messages are enqueued with void looper::addmsg(loopermessage *msg, bool flush) and consumed in looper::loop(), whose while loop blocks until a message arrives and then dispatches it to looper::handle(int what, void* obj). On shutdown, looper::quit() ends the consumer thread.
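The two-semaphore pattern above (one semaphore used as a mutex over the list head, one counting pending messages) works on any POSIX system, not just Android. A minimal desktop sketch — the miniq name and the summing "handler" are mine, for illustration only:

```cpp
#include <cassert>
#include <cstdio>
#include <pthread.h>
#include <semaphore.h>

struct node { int what; node *next; };

struct miniq {
    node *head = nullptr;
    sem_t available;  // counts pending messages (starts at 0)
    sem_t writelock;  // binary semaphore guarding head (starts at 1)
    long sum = 0;     // what the worker computed

    miniq() {
        sem_init(&available, 0, 0);
        sem_init(&writelock, 0, 1);
    }
    ~miniq() {
        sem_destroy(&available);
        sem_destroy(&writelock);
    }
    void post(int what) {
        node *n = new node{what, nullptr};
        sem_wait(&writelock);          // acquire the "mutex"
        node **tail = &head;
        while (*tail) tail = &(*tail)->next;
        *tail = n;                     // append at the tail
        sem_post(&writelock);
        sem_post(&available);          // one more message pending
    }
    void loop() {
        for (;;) {
            sem_wait(&available);      // block until a message exists
            sem_wait(&writelock);
            node *msg = head;
            head = msg->next;
            sem_post(&writelock);
            int what = msg->what;
            delete msg;
            if (what < 0) return;      // negative value plays the quit role
            sum += what;               // "handle" the message
        }
    }
};

static void *trampoline(void *p) {
    static_cast<miniq *>(p)->loop();
    return nullptr;
}
```

Posting the integers 1..100 followed by a quit value and joining the worker leaves sum == 5050, showing that no message is lost or double-handled despite producer and consumer running on different threads.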

Back in native-codec-jni.cpp, the message handler is overridden:

class mylooper: public looper {
    virtual void handle(int what, void* obj);
};

static mylooper *mlooper = NULL;
void mylooper::handle(int what, void* obj) {
    switch (what) {
        case kMsgCodecBuffer:
            doCodecWork((workerdata*)obj);
            break;

        case kMsgDecodeDone:
        {
            workerdata *d = (workerdata*)obj;
            AMediaCodec_stop(d->codec);
            AMediaCodec_delete(d->codec);
            AMediaExtractor_delete(d->ex);
            d->sawInputEOS = true;
            d->sawOutputEOS = true;
        }
        break;

        case kMsgSeek:
        {
            workerdata *d = (workerdata*)obj;
            AMediaExtractor_seekTo(d->ex, 0, AMEDIAEXTRACTOR_SEEK_NEXT_SYNC);
            AMediaCodec_flush(d->codec);
            d->renderstart = -1;
            d->sawInputEOS = false;
            d->sawOutputEOS = false;
            if (!d->isPlaying) {
                d->renderonce = true;
                post(kMsgCodecBuffer, d);
            }
            LOGV("seeked");
        }
        break;

        case kMsgPause:
        {
            workerdata *d = (workerdata*)obj;
            if (d->isPlaying) {
                // flush all outstanding codecbuffer messages with a no-op message
                d->isPlaying = false;
                post(kMsgPauseAck, NULL, true); // flush=true clears the queue
            }
        }
        break;

        case kMsgResume:
        {
            workerdata *d = (workerdata*)obj;
            if (!d->isPlaying) {
                d->renderstart = -1;
                d->isPlaying = true;
                post(kMsgCodecBuffer, d);
            }
        }
        break;
    }
}

After the file is loaded, a kMsgCodecBuffer message is posted, and its handler calls doCodecWork((workerdata*)obj). Let's look at doCodecWork:

//https://www.cnblogs.com/jiy-for-you/p/7282033.html
//https://www.cnblogs.com/Xiegg/p/3428529.html
void doCodecWork(workerdata *d) {

    ssize_t bufidx = -1;
    if (!d->sawInputEOS) {
        // dequeue an input buffer; the timeout is 2000 microseconds (2 ms), not milliseconds
        bufidx = AMediaCodec_dequeueInputBuffer(d->codec, 2000);
        LOGV("input buffer %zd", bufidx);
        if (bufidx >= 0) {
            size_t bufsize;
            // get a pointer to the input buffer
            auto buf = AMediaCodec_getInputBuffer(d->codec, bufidx, &bufsize);
            // read the next sample into the buffer
            auto sampleSize = AMediaExtractor_readSampleData(d->ex, buf, bufsize); // d->ex already has the video track selected
            if (sampleSize < 0) {
                // reached the end of the stream
                sampleSize = 0;
                d->sawInputEOS = true;
                LOGV("EOS");
            }
            // presentation time of the current sample, in microseconds
            auto presentationTimeUs = AMediaExtractor_getSampleTime(d->ex); // d->ex already has the video track selected

            // hand the filled buffer back to the decoder
            AMediaCodec_queueInputBuffer(d->codec, bufidx, 0, sampleSize, presentationTimeUs,
                    d->sawInputEOS ? AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM : 0);
            // advance to the next sample
            AMediaExtractor_advance(d->ex);
        }
    }

    if (!d->sawOutputEOS) {
        AMediaCodecBufferInfo info;
        // step 1: dequeue a decoded output buffer (zero timeout)
        auto status = AMediaCodec_dequeueOutputBuffer(d->codec, &info, 0);
        if (status >= 0) {
            if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
                LOGV("output EOS");
                d->sawOutputEOS = true;
            }
            int64_t presentationNano = info.presentationTimeUs * 1000;
            if (d->renderstart < 0) {
                d->renderstart = systemnanotime() - presentationNano;
            }
            int64_t delay = (d->renderstart + presentationNano) - systemnanotime();
            if (delay > 0) {
                // pacing: this frame's presentation time is still ahead of
                // the playback clock, so sleep until it is due
                usleep(delay / 1000);
            }
            // step 2: release the buffer; when info.size != 0 the frame is rendered to the surface
            AMediaCodec_releaseOutputBuffer(d->codec, status, info.size != 0);
            if (d->renderonce) {
                d->renderonce = false;
                return;
            }
        } else if (status == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED) {
            LOGV("output buffers changed");
        } else if (status == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED) {
            auto format = AMediaCodec_getOutputFormat(d->codec);
            LOGV("format changed to: %s", AMediaFormat_toString(format));
            AMediaFormat_delete(format);
        } else if (status == AMEDIACODEC_INFO_TRY_AGAIN_LATER) {
            // no decoded output available yet
            LOGV("no output buffer right now");
        } else {
            LOGV("unexpected info code: %zd", status);
        }
    }

    if (!d->sawInputEOS || !d->sawOutputEOS) {
        mlooper->post(kMsgCodecBuffer, d); // input or output hasn't reached EOS: post ourselves again
    }
}

This function drives the decoder's input and output sides; if any of the annotations are off, please point it out.

As long as playback hasn't finished, it keeps calling mlooper->post(kMsgCodecBuffer, d) to schedule itself again.
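The frame-pacing math inside doCodecWork can be checked in isolation. renderstart anchors the playback clock to the first frame's presentation time; every later frame then waits until its own timestamp is due. A sketch of the same arithmetic (frameDelayNs is my name; the timestamps below are made up for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Same arithmetic as doCodecWork: given the wall clock (ns), the frame's
// presentation time (us), and the renderstart anchor (ns), return how long
// to sleep (ns) before releasing the buffer. renderstart < 0 means
// "not anchored yet".
int64_t frameDelayNs(int64_t nowNs, int64_t presentationTimeUs, int64_t &renderstart) {
    int64_t presentationNano = presentationTimeUs * 1000;
    if (renderstart < 0) {
        // first frame: anchor the clock so this frame is due immediately
        renderstart = nowNs - presentationNano;
    }
    return (renderstart + presentationNano) - nowNs;
}
```

A negative or zero result means the frame is already due (or late), which is why the sample only sleeps when delay > 0.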

When the user taps pause, a kMsgPause message is posted and the queue is flushed, so doCodecWork((workerdata*)obj) is no longer re-posted and playback stops. The playback position, however, survives in workerdata *d = &data (in d->ex and d->codec). On resume, post(kMsgCodecBuffer, d) is sent again and decoding picks up where it left off.
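The flush that makes pause work is just a list-clearing step inside addmsg. Sketched standalone (flushPost and queueLength are my names, not the sample's):

```cpp
#include <cassert>

struct msgnode { int what; msgnode *next; };

// Append `what`; if flush is set, drop everything already queued first.
// This mirrors how posting kMsgPauseAck with flush=true cancels all
// pending kMsgCodecBuffer messages in the sample.
void flushPost(msgnode *&head, int what, bool flush) {
    if (flush) {
        while (head) {
            msgnode *next = head->next;
            delete head;
            head = next;
        }
    }
    msgnode **tail = &head;
    while (*tail) tail = &(*tail)->next;  // walk to the tail
    *tail = new msgnode{what, nullptr};   // append the new message
}

// Count the messages currently queued.
int queueLength(const msgnode *head) {
    int n = 0;
    for (; head; head = head->next) n++;
    return n;
}
```

After a flushing post, only the newly posted message remains in the queue, which is exactly why no further decode work runs until resume posts a fresh kMsgCodecBuffer.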

Tapping the rewind button posts mlooper->post(kMsgSeek, &data):

    case kMsgSeek:
        {
            workerdata *d = (workerdata*)obj;
            AMediaExtractor_seekTo(d->ex, 0, AMEDIAEXTRACTOR_SEEK_NEXT_SYNC);
            AMediaCodec_flush(d->codec);
            d->renderstart = -1;
            d->sawInputEOS = false;
            d->sawOutputEOS = false;
            if (!d->isPlaying) {
                d->renderonce = true;
                post(kMsgCodecBuffer, d);
            }
            LOGV("seeked");
        }

This resets the position with AMediaExtractor_seekTo(d->ex, 0, AMEDIAEXTRACTOR_SEEK_NEXT_SYNC); and AMediaCodec_flush(d->codec);, then posts post(kMsgCodecBuffer, d) to start playback again.

Pausing and shutting down likewise just post messages:

// set the playing state for the streaming media player
void Java_com_example_nativecodec_NativeCodec_setPlayingStreamingMediaPlayer(JNIEnv* env,
        jclass clazz, jboolean isPlaying)
{
    LOGV("@@@ playpause: %d", isPlaying);
    if (mlooper) {
        if (isPlaying) {
            mlooper->post(kMsgResume, &data);
        } else {
            mlooper->post(kMsgPause, &data);
        }
    }
}


// shut down the native media system
void Java_com_example_nativecodec_NativeCodec_shutdown(JNIEnv* env, jclass clazz)
{
    LOGV("@@@ shutdown");
    if (mlooper) {
        mlooper->post(kMsgDecodeDone, &data, true /* flush */);
        mlooper->quit();
        delete mlooper;
        mlooper = NULL;
    }
    if (data.window) {
        ANativeWindow_release(data.window);
        data.window = NULL;
    }
}

The actual work is done in mylooper::handle(int what, void* obj).

That wraps up the video-playback portion.
