Taking a Picture During Video Recording: the Flow from Framework to HAL
Overview
- Created by 178903294, last modified Jan 20, 2020
In VideoMode.java, when the snapshot button is pressed during recording:
VideoMode.java
```java
private View.OnClickListener mVssListener = new View.OnClickListener() {
    public void onClick(View view) {
        LogHelper.i(TAG, "[mVssListener] bjq click video state = " + mVideoState
                + "mCanTakePicture = " + mCanTakePicture);
        if ((getVideoState() == VideoState.STATE_PAUSED
                || getVideoState() == VideoState.STATE_RECORDING) && mCanTakePicture) {
            mAppUi.animationStart(IAppUi.AnimationType.TYPE_CAPTURE, null);
            mCameraDevice.updateGSensorOrientation(mApp.getGSensorOrientation());
            mCameraDevice.takePicture(mJpegCallback);
            mCanTakePicture = false;
        }
    }
};
```
This calls mCameraDevice.takePicture(mJpegCallback); mJpegCallback is the callback that receives the JPEG data and saves it.
VideoDevice2Controller.java
```java
@Override
public void takePicture(@Nonnull JpegCallback callback) {
    LogHelper.e(TAG, "[takePicture] bjq +");
    mJpegCallback = callback;
    CaptureRequest.Builder builder;
    try {
        builder = mCamera2Proxy.createCaptureRequest(Camera2Proxy.TEMPLATE_VIDEO_SNAPSHOT);
        configureQuickPreview(builder);
        builder.addTarget(mPreviewSurface);
        builder.addTarget(mRecordSurface);
        builder.addTarget(mCaptureSurface.getSurface());
        // Adding this AE auto-flash mode temporarily fixed the problem of
        // snapshots coming out completely black when taken during recording
        // in low light.
        builder.set(CaptureRequest.CONTROL_AE_MODE,
                CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
        int rotation = CameraUtil.getJpegRotation(Integer.parseInt(mCameraId),
                mJpegRotation, mActivity);
        builder.set(CaptureRequest.JPEG_ORIENTATION, rotation);
        mSettingDevice2Configurator.configCaptureRequest(builder);
        mSession.capture(builder.build(), mCaptureCallback, mModeHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    LogHelper.e(TAG, "[takePicture] bjq -");
}
```
mCamera2Proxy.createCaptureRequest(Camera2Proxy.TEMPLATE_VIDEO_SNAPSHOT) selects the capture template: TEMPLATE_VIDEO_SNAPSHOT, the video-snapshot mode.
It then calls mSession.capture(builder.build(), mCaptureCallback, mModeHandler); note that a Handler is passed in here, which will be used later.
Camera2CaptureSessionProxy.java
```java
public int capture(@Nonnull CaptureRequest request, @Nullable CaptureCallback listener,
        @Nullable Handler handler) throws CameraAccessException {
    int[] captureNum = new int[1];
    List<CaptureRequest> requestList = new ArrayList<>();
    requestList.add(request);
    SessionOperatorInfo info = new SessionOperatorInfo(requestList, listener, handler, captureNum);
    Message msg = mRequestHandler.obtainMessage(Camera2Actions.CAPTURE, info);
    mAtomAccessor.sendAtomMessageAndWait(mRequestHandler, msg);
    return captureNum[0];
}
```
This calls mAtomAccessor.sendAtomMessageAndWait(mRequestHandler, msg);
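The capture() method above uses a send-and-wait pattern: it posts a message to the request handler's thread, blocks until that thread has processed it, and reads the result back through the one-element captureNum array. A minimal sketch of that pattern (hypothetical AtomMessenger class, using a plain worker thread instead of a Looper):

```java
import java.util.concurrent.CountDownLatch;
import java.util.function.IntSupplier;

class AtomMessenger {
    // Post 'task' to a worker thread (standing in for the handler thread) and
    // block until it has run, mirroring sendAtomMessageAndWait; the result is
    // passed back through a one-element array, like captureNum[0].
    static int sendAndWait(IntSupplier task) {
        int[] result = new int[1];
        CountDownLatch done = new CountDownLatch(1);
        Thread worker = new Thread(() -> {
            result[0] = task.getAsInt();
            done.countDown();
        });
        worker.start();
        try {
            done.await(); // caller blocks until the "handler" side has processed it
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result[0];
    }
}
```

The real implementation posts to an existing Looper thread rather than spawning a thread per call, but the blocking handoff and array-based result channel are the same idea.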
AtomAccessor.java
```java
public void sendAtomMessageAtFrontOfQueue(Handler handler, Message msg) {
    if (handler == null || msg == null) {
        return;
    }
    acquireResource();
    LogHelper.i(TAG, "bjq sendAtomMessageAtFrontOfQueue");
    handler.sendMessageAtFrontOfQueue(msg);
    releaseResource();
}
```
This in turn calls Handler.sendMessageAtFrontOfQueue, which puts the message at the front of the message queue so the corresponding handler can process it.
Handler.java
```java
public final boolean sendMessageAtFrontOfQueue(Message msg) {
    MessageQueue queue = mQueue;
    if (queue == null) {
        RuntimeException e = new RuntimeException(
                this + " sendMessageAtTime() called with no mQueue");
        Log.w("Looper", e.getMessage(), e);
        return false;
    }
    return enqueueMessage(queue, msg, 0);
}
```
Handler.java
```java
private boolean enqueueMessage(MessageQueue queue, Message msg, long uptimeMillis) {
    msg.target = this;
    if (mAsynchronous) {
        msg.setAsynchronous(true);
    }
    return queue.enqueueMessage(msg, uptimeMillis);
}
```
MessageQueue.java
```java
boolean enqueueMessage(Message msg, long when) {
    if (msg.target == null) {
        throw new IllegalArgumentException("Message must have a target.");
    }
    if (msg.isInUse()) {
        throw new IllegalStateException(msg + " This message is already in use.");
    }

    synchronized (this) {
        if (mQuitting) {
            IllegalStateException e = new IllegalStateException(
                    msg.target + " sending message to a Handler on a dead thread");
            Log.w(TAG, e.getMessage(), e);
            msg.recycle();
            return false;
        }

        msg.markInUse();
        msg.when = when;
        Message p = mMessages;
        boolean needWake;
        if (p == null || when == 0 || when < p.when) {
            // New head, wake up the event queue if blocked.
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // Inserted within the middle of the queue. Usually we don't have to wake
            // up the event queue unless there is a barrier at the head of the queue
            // and the message is the earliest asynchronous message in the queue.
            needWake = mBlocked && p.target == null && msg.isAsynchronous();
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                if (p == null || when < p.when) {
                    break;
                }
                if (needWake && p.isAsynchronous()) {
                    needWake = false;
                }
            }
            msg.next = p; // invariant: p == prev.next
            prev.next = msg;
        }

        // We can assume mPtr != 0 because mQuitting is false.
        if (needWake) {
            nativeWake(mPtr);
        }
    }
    return true;
}
```
At this point one message has been posted to the MessageQueue.
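The head/middle insertion logic of enqueueMessage above can be modeled with a minimal sketch (hypothetical Msg and MiniMessageQueue classes, ignoring synchronization, barriers, and wake-ups): messages sit in a singly linked list sorted by when, and a message with when == 0 — which is exactly what sendMessageAtFrontOfQueue passes — always becomes the new head.

```java
import java.util.ArrayList;
import java.util.List;

class MiniMessageQueue {
    static class Msg {
        final int what;
        final long when;
        Msg next;
        Msg(int what, long when) { this.what = what; this.when = when; }
    }

    private Msg head;

    void enqueue(Msg msg) {
        if (head == null || msg.when == 0 || msg.when < head.when) {
            // New head: front-of-queue messages (when == 0) always land here.
            msg.next = head;
            head = msg;
        } else {
            // Walk to the insertion point, keeping the list sorted by 'when';
            // equal timestamps keep insertion order, as in the real queue.
            Msg prev = head;
            while (prev.next != null && prev.next.when <= msg.when) {
                prev = prev.next;
            }
            msg.next = prev.next;
            prev.next = msg;
        }
    }

    // Return the 'what' values in delivery order.
    List<Integer> drainOrder() {
        List<Integer> order = new ArrayList<>();
        for (Msg m = head; m != null; m = m.next) order.add(m.what);
        return order;
    }
}
```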
See the references at the end for more on Android's Handler and sendMessage mechanisms.
Now for the handler side:
Camera2Handler.java
```java
public void handleMessage(Message msg) {
    super.handleMessage(msg);
    int operation = msg.what;
    LogHelper.i(mTag, "[handleMessage] bjq msg: " + msg.what);
    mMsgStartTime = SystemClock.uptimeMillis();
    printStartMsg(mTag.toString(), Camera2Actions.stringify(operation),
            (mMsgStartTime - msg.getWhen()));
    doHandleMessage(msg);
    mMsgStopTime = SystemClock.uptimeMillis();
    printStopMsg(mTag.toString(), Camera2Actions.stringify(operation),
            (mMsgStopTime - mMsgStartTime));
}
```
This calls doHandleMessage(msg):
Camera2Handler.java
```java
@Override
protected void doHandleMessage(Message msg) {
    if (Camera2Actions.isSessionMessageType(msg.what)) {
        handleSessionMessage(msg);
        LogHelper.i(mTag, "[doHandleMessage] bjq msg: " + msg.what);
    } else {
        handleRequestMessage(msg);
        LogHelper.i(mTag, "[doHandleMessage] bjq msg: " + msg.what);
    }
}
```
The message type is checked here to pick the matching handler path; our capture request takes the handleRequestMessage(msg) path.
Camera2Handler.java
```java
private void handleRequestMessage(Message msg) {
    // If camera or session is closed, don't need to handle the request.
    if (isCameraClosed(Camera2Actions.stringify(msg.what))) {
        return;
    }
    if (isSessionClosed(Camera2Actions.stringify(msg.what))) {
        return;
    }
    LogHelper.e(mTag, "[handleRequestMessage] bjq msg: " + msg.what);
    try {
        switch (msg.what) {
            case Camera2Actions.PREPARE:
                mCameraCaptureSession.prepare((Surface) msg.obj);
                break;
            case Camera2Actions.CAPTURE:
                SessionOperatorInfo captureInfo = (SessionOperatorInfo) msg.obj;
                int[] captureNumber = captureInfo.mSessionNum;
                captureNumber[0] = startCapture(captureInfo);
                break;
            case Camera2Actions.CAPTURE_BURST:
                SessionOperatorInfo burstInfo = (SessionOperatorInfo) msg.obj;
                int[] captureBurstNumber = burstInfo.mSessionNum;
                captureBurstNumber[0] = startBurstCapture(burstInfo);
                break;
            ...
```
For our request the Camera2Actions.CAPTURE case runs, calling startCapture(captureInfo):
Camera2Handler.java
```java
private int startCapture(SessionOperatorInfo info) throws CameraAccessException {
    CaptureRequest request = info.mCaptureRequest.get(0);
    CaptureCallback callback = info.mCaptureCallback;
    Handler handler = info.mHandler;
    LogHelper.i(mTag, "[startCapture] bjq use the respond handler");
    return mCameraCaptureSession.capture(request, callback, handler);
}
```
This calls mCameraCaptureSession.capture(request, callback, handler);
CameraCaptureSession.java
```java
public abstract int capture(@NonNull CaptureRequest request,
        @Nullable CaptureCallback listener, @Nullable Handler handler)
        throws CameraAccessException;
```
The method above is abstract; its implementation lives in:
CameraCaptureSessionImpl.java
```java
@Override
public int capture(CaptureRequest request, CaptureCallback callback,
        Handler handler) throws CameraAccessException {
    checkCaptureRequest(request);

    synchronized (mDeviceImpl.mInterfaceLock) {
        checkNotClosed();

        handler = checkHandler(handler, callback);

        // if (DEBUG) {
        Log.i(TAG, mIdString + "bjq capture - request " + request + ", callback "
                + callback + " handler " + handler);
        // }

        return addPendingSequence(mDeviceImpl.capture(request,
                createCaptureCallbackProxy(handler, callback), mDeviceExecutor));
    }
}
```
This calls mDeviceImpl.capture(request, createCaptureCallbackProxy(handler, callback), mDeviceExecutor):
CameraDeviceImpl.java
```java
public int capture(CaptureRequest request, CaptureCallback callback,
        Executor executor) throws CameraAccessException {
    if (DEBUG) {
        Log.d(TAG, "calling capture");
    }
    List<CaptureRequest> requestList = new ArrayList<CaptureRequest>();
    requestList.add(request);
    return submitCaptureRequest(requestList, callback, executor, /*streaming*/false);
}
```
This calls submitCaptureRequest(requestList, callback, executor, /*streaming*/false):
CameraDeviceImpl.java
```java
private int submitCaptureRequest(List<CaptureRequest> requestList, CaptureCallback callback,
        Executor executor, boolean repeating) throws CameraAccessException {

    // Need a valid executor, or current thread needs to have a looper, if
    // callback is valid
    executor = checkExecutor(executor, callback);

    // Make sure that there all requests have at least 1 surface; all surfaces are non-null;
    // the surface isn't a physical stream surface for reprocessing request
    for (CaptureRequest request : requestList) {
        if (request.getTargets().isEmpty()) {
            throw new IllegalArgumentException(
                    "Each request must have at least one Surface target");
        }

        for (Surface surface : request.getTargets()) {
            if (surface == null) {
                throw new IllegalArgumentException("Null Surface targets are not allowed");
            }

            for (int i = 0; i < mConfiguredOutputs.size(); i++) {
                OutputConfiguration configuration = mConfiguredOutputs.valueAt(i);
                if (configuration.isForPhysicalCamera()
                ...
```
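The precondition enforced by the validation loop above can be distilled into a small check (plain Java; List&lt;Object&gt; is an illustrative stand-in for a request's Surface targets, not the real CaptureRequest API):

```java
import java.util.List;

class RequestValidator {
    // Every request must target at least one Surface, and no target may be null,
    // mirroring the checks in submitCaptureRequest above (simplified: the real
    // code also rejects physical-stream surfaces for reprocessing requests).
    static boolean isValid(List<List<Object>> requestTargets) {
        for (List<Object> targets : requestTargets) {
            if (targets.isEmpty()) return false;   // at least one Surface target
            for (Object surface : targets) {
                if (surface == null) return false; // null targets not allowed
            }
        }
        return true;
    }
}
```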
Further down, it calls requestInfo = mRemoteDevice.submitRequestList(requestArray, repeating);
ICameraDeviceUserWrapper.java
```java
public SubmitInfo submitRequestList(CaptureRequest[] requestList, boolean streaming)
        throws CameraAccessException {
    try {
        return mRemoteDevice.submitRequestList(requestList, streaming);
    } catch (Throwable t) {
        CameraManager.throwAsPublicException(t);
        throw new UnsupportedOperationException("Unexpected exception", t);
    }
}
```
This calls mRemoteDevice.submitRequestList(requestList, streaming), crossing into the camera service via Binder. The interface is defined in:
ICameraDeviceUser.aidl
```aidl
interface ICameraDeviceUser {
    void disconnect();

    const int NO_IN_FLIGHT_REPEATING_FRAMES = -1;

    SubmitInfo submitRequest(in CaptureRequest request, boolean streaming);
    SubmitInfo submitRequestList(in CaptureRequest[] requestList, boolean streaming);
```
This reaches the corresponding service implementation:
CameraDeviceClient.cpp
```cpp
binder::Status CameraDeviceClient::submitRequest(
        const hardware::camera2::CaptureRequest& request,
        bool streaming,
        /*out*/
        hardware::camera2::utils::SubmitInfo *submitInfo) {
    std::vector<hardware::camera2::CaptureRequest> requestList = { request };
    return submitRequestList(requestList, streaming, submitInfo);
}
```
submitRequestList in turn calls mDevice->captureList(metadataRequestList, surfaceMapList, &(submitInfo->mLastFrameNumber)); its declaration is below.
CameraDeviceBase.h
```cpp
virtual status_t captureList(const List<const PhysicalCameraSettingsList> &requests,
        const std::list<const SurfaceMap> &surfaceMaps,
        int64_t *lastFrameNumber = NULL) = 0;
```
Camera3Device derives from this base class and implements it:
Camera3Device.h
```cpp
class Camera3Device :
        public CameraDeviceBase,
        virtual public hardware::camera::device::V3_4::ICameraDeviceCallback,
        private camera3_callback_ops {
    ...
};

// Camera3Device.cpp
status_t Camera3Device::captureList(const List<const PhysicalCameraSettingsList> &requestsList,
        const std::list<const SurfaceMap> &surfaceMaps,
        int64_t *lastFrameNumber) {
    ATRACE_CALL();
    return submitRequestsHelper(requestsList, surfaceMaps, /*repeating*/false, lastFrameNumber);
}
```
This calls submitRequestsHelper:
Camera3Device.cpp
```cpp
if (repeating) {
    res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
} else {
    res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
}
```
For a one-shot capture (repeating == false), this calls Camera3Device::RequestThread::queueRequestList:
```cpp
status_t Camera3Device::RequestThread::queueRequestList(
        List<sp<CaptureRequest> > &requests,
        /*out*/
        int64_t *lastFrameNumber) {
    ATRACE_CALL();
    Mutex::Autolock l(mRequestLock);
    for (List<sp<CaptureRequest> >::iterator it = requests.begin(); it != requests.end();
            ++it) {
        mRequestQueue.push_back(*it);
    }

    if (lastFrameNumber != NULL) {
        *lastFrameNumber = mFrameNumber + mRequestQueue.size() - 1;
        ALOGV("%s: requestId %d, mFrameNumber %" PRId32 ", lastFrameNumber %" PRId64 ".",
                __FUNCTION__, (*(requests.begin()))->mResultExtras.requestId, mFrameNumber,
                *lastFrameNumber);
    }

    unpauseForNewRequests();

    return OK;
}
```
Here mRequestQueue is defined as:

```cpp
typedef List<sp<CaptureRequest> > RequestList;
RequestList mRequestQueue;
```
The key line is *lastFrameNumber = mFrameNumber + mRequestQueue.size() - 1; — mFrameNumber is the frame number the next dequeued request will receive, so the last request just queued will be assigned frame number mFrameNumber plus the queue size minus one.
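The arithmetic can be illustrated with a small sketch (FrameNumberTracker is a hypothetical stand-in for RequestThread's bookkeeping, not real Camera3Device code): queuing returns the frame number the last request in the queue will eventually get.

```java
import java.util.ArrayDeque;
import java.util.Deque;

class FrameNumberTracker {
    private long nextFrameNumber = 0;                       // mFrameNumber analogue
    private final Deque<String> queue = new ArrayDeque<>(); // mRequestQueue analogue

    // Append a batch of requests and return the frame number the last
    // queued request will receive: nextFrameNumber + queue size - 1.
    long queueRequests(String... requests) {
        for (String r : requests) queue.addLast(r);
        return nextFrameNumber + queue.size() - 1;
    }

    // The request thread pops one request per captured frame.
    String processNext() {
        nextFrameNumber++;
        return queue.pollFirst();
    }
}
```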
Related classes:

- Camera3Device::RequestThread: frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp contains the inner class RequestThread, a thread class.
- Camera3Stream: frameworks/av/services/camera/libcameraservice/device3/Camera3Stream.cpp
- Camera3InputStream: frameworks/av/services/camera/libcameraservice/device3/Camera3InputStream.cpp
- Camera3IOStreamBase: frameworks/av/services/camera/libcameraservice/device3/Camera3IOStreamBase.cpp
- BufferItemConsumer: frameworks/native/libs/gui/BufferItemConsumer.cpp
- ConsumerBase: frameworks/native/libs/gui/ConsumerBase.cpp
- BnGraphicBufferConsumer: frameworks/native/libs/gui/IGraphicBufferConsumer.cpp

For each capture request sent down from the upper layer, the lower layer allocates consumer buffers; the request's data is stored in these buffers, which are continuously produced into, consumed from, and reused. Once capture requests start flowing, the camera HAL also receives them as batched requests, so it can prepare and begin interacting with the camera driver.
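The produce/consume/recycle cycle described above can be sketched with a free-buffer pool (illustrative BufferLoop class, single-threaded; the real path runs through BufferQueue across multiple threads):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class BufferLoop {
    // Run 'frames' capture iterations over a pool of 'poolSize' pre-allocated
    // buffers; returns how many frames were consumed. Each frame acquires a
    // free buffer, fills it, lets the consumer read it, then recycles it.
    static int runFrames(int poolSize, int frames) {
        BlockingQueue<int[]> freePool = new ArrayBlockingQueue<>(poolSize);
        for (int i = 0; i < poolSize; i++) freePool.offer(new int[4]); // pre-allocate
        int consumed = 0;
        for (int f = 0; f < frames; f++) {
            int[] buf = freePool.poll(); // producer acquires a free buffer
            if (buf == null) break;      // pool exhausted (cannot happen here)
            buf[0] = f;                  // "fill" it with frame data
            consumed++;                  // consumer drains it ...
            freePool.offer(buf);         // ... and returns it for reuse
        }
        return consumed;
    }
}
```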
So a loop keeps processing the queued requests: Camera3Device::RequestThread::threadLoop(). It calls sendRequestsOneByOne(), which calls processCaptureRequest(&nextRequest.halRequest), which in turn calls processBatchCaptureRequests(requests, &numRequestProcessed), finally invoking the method declared in hardware/interfaces/camera/device/3.4/ICameraDeviceSession.hal:

```
processCaptureRequest_3_4(vec<CaptureRequest> requests, vec<BufferCache> cachesToRemove)
    generates (Status status, uint32_t numRequestProcessed);
```
With that, the request triggered by the UI shutter press has been delivered all the way down to the HAL layer, which will return the capture data. Now for the HAL side:
CameraDeviceSession.cpp
```cpp
Return<void> CameraDeviceSession::processCaptureRequest_3_4(
        const hidl_vec<V3_4::CaptureRequest>& requests,
        const hidl_vec<V3_2::BufferCache>& cachesToRemove,
        ICameraDeviceSession::processCaptureRequest_3_4_cb _hidl_cb) {
    updateBufferCaches(cachesToRemove);

    uint32_t numRequestProcessed = 0;
    Status s = Status::OK;
    for (size_t i = 0; i < requests.size(); i++, numRequestProcessed++) {
        s = processOneCaptureRequest_3_4(requests[i]);
        if (s != Status::OK) {
            break;
        }
    }

    if (s == Status::OK && requests.size() > 1) {
        mResultBatcher_3_4.registerBatch(requests[0].v3_2.frameNumber, requests.size());
    }

    _hidl_cb(s, numRequestProcessed);
    return Void();
}
```
For each request in the batch it calls processOneCaptureRequest_3_4(requests[i]);
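The batching contract of processCaptureRequest_3_4 — submit one by one, stop at the first failure, report how many were submitted — can be restated as a small sketch (plain Java, hypothetical BatchSubmitter class; strings stand in for requests):

```java
import java.util.List;
import java.util.function.Predicate;

class BatchSubmitter {
    // Submit each request in order; stop at the first failure. Returns
    // {statusOk ? 1 : 0, numRequestProcessed}, mirroring the (Status,
    // numRequestProcessed) pair passed to _hidl_cb above.
    static int[] submitBatch(List<String> requests, Predicate<String> processOne) {
        int numProcessed = 0;
        boolean ok = true;
        for (String r : requests) {
            if (!processOne.test(r)) {
                ok = false;
                break; // failed request is NOT counted as processed
            }
            numProcessed++;
        }
        return new int[] { ok ? 1 : 0, numProcessed };
    }
}
```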
which finally calls into the HAL device ops:

```cpp
mDevice->ops->process_capture_request(mDevice, &halRequest);
```
Chasing the dark-snapshot-during-recording problem led all the way down to here; for the flow below this point, see "Android : Camera之camx hal架构".
References:
- Camera flow: https://www.jianshu.com/p/bac0e72351e4
- Camera UI: https://www.jianshu.com/p/e44d9b3320a3
- Android Handler mechanism: https://www.jianshu.com/p/3d8f7ec1017a