Overview
1. Class diagrams
Native layer:
Java layer:
2. In the function below, ProcessState::self()->getContextObject(NULL) is effectively new BpBinder(0), i.e. BpBinder.mHandle = 0; interface_cast<IServiceManager>() is effectively new BpServiceManager().
sp<IServiceManager> defaultServiceManager()
{
if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
{
AutoMutex _l(gDefaultServiceManagerLock);
while (gDefaultServiceManager == NULL) {
gDefaultServiceManager = interface_cast<IServiceManager>(
ProcessState::self()->getContextObject(NULL));
if (gDefaultServiceManager == NULL)
sleep(1);
}
}
return gDefaultServiceManager;
}
3. IServiceManager.cpp provides a global function defaultServiceManager() in namespace android. It creates a BpServiceManager singleton and caches it in gDefaultServiceManager. Before a service can serve clients, it must add itself to the Service Manager through BpServiceManager so that clients can look it up. Note that the implementation side of BpServiceManager is not BnServiceManager but the main() of Service_manager.c; beginners easily get this wrong.
4. When a BpXXXX calls remote()->transact(), who receives the data is decided by BpBinder.mHandle. In BpServiceManager, BpBinder.mHandle = 0, which by convention refers to Service_manager.c, which is why the implementation side of BpServiceManager lives in Service_manager.c.
5. A service adds itself to the Service Manager with defaultServiceManager().addService(). For example, SurfaceFlinger: sp<IServiceManager> sm(defaultServiceManager()); sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false);
6. On the client side, new BpBinder happens inside readStrongBinder() --> unflatten_binder() --> getStrongProxyForHandle() --> new BpBinder(handle);
7. On the client side, new BpXXXX happens inside interface_cast<IXXXX>(sm->getService(String16("name"))), i.e. in the asInterface() generated by IMPLEMENT_META_INTERFACE(INTERFACE, NAME);
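For reference, the asInterface() generated by IMPLEMENT_META_INTERFACE roughly expands to the following (a sketch of the libbinder macro as found in the AOSP versions these notes appear to follow; IXXXX/BpXXXX are the usual placeholders, and details vary slightly between releases):
sp<IXXXX> IXXXX::asInterface(const sp<IBinder>& obj)
{
    sp<IXXXX> intr;
    if (obj != NULL) {
        // a BnXXXX living in the same process is returned directly
        intr = static_cast<IXXXX*>(
            obj->queryLocalInterface(IXXXX::descriptor).get());
        if (intr == NULL) {
            // otherwise wrap the BpBinder (handle-based proxy) in a new BpXXXX
            intr = new BpXXXX(obj);
        }
    }
    return intr;
}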
8. Every process has a single ProcessState object. Its constructor opens the /dev/binder device node and maps a block of virtual memory of 1 MB minus 8 KB. ProcessState::startThreadPool() starts the thread pool, which simply means creating the main PoolThread (every PoolThread is named Binder_X) and having it join the pool via IPCThreadState::self()->joinThreadPool(mIsMain), i.e. entering an endless loop that keeps calling talkWithDriver() and executeCommand(cmd), the former reading data from the Binder driver and the latter parsing and executing it (a skeleton is sketched below). The main PoolThread lives for the whole process; a non-main PoolThread is shut down if no data arrives for a long time.
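A stripped-down sketch of that loop, paraphrased from IPCThreadState::joinThreadPool() (logging, error handling and scheduling details omitted; treat it as an illustration of the flow, not the exact source):
void IPCThreadState::joinThreadPool(bool isMain)
{
    // announce this thread to the driver: the self-started main thread uses
    // BC_ENTER_LOOPER, a driver-requested thread uses BC_REGISTER_LOOPER
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    status_t result;
    do {
        // talkWithDriver() (ioctl BINDER_WRITE_READ) then executeCommand(cmd)
        result = getAndExecuteCommand();
        // a non-main thread that times out waiting for work simply leaves the pool
    } while (result != -ECONNREFUSED && result != -EBADF &&
             (isMain || result != TIMED_OUT));
    // tell the driver this thread is exiting the pool
    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}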
9. The Service Manager process (/system/bin/servicemanager) is started as a service when the init process parses init.rc; main() in Service_manager.c is its entry point. Service_manager.c runs a looper, an endless loop that reads data from the Binder driver via ioctl(), parses it with binder_parse(), and handles it in svcmgr_handler(). From do_add_service() and do_find_service() in Service_manager.c you can see that the two most important pieces of data are svcinfo.name and svcinfo->ptr. svcinfo.name is obviously the service's name; as for svcinfo->ptr, the check if (txn->target != svcmgr_handle) return -1; hints that svcinfo->ptr is in fact the service's handle.
10. For a client to reach a service, it first needs the service's handle, which it obtains from the ServiceManager by looking the service up by name; only with that handle can the client talk to the service. The Binder mechanism uses handles to stand for remote interfaces, much like handles in Windows programming. As mentioned earlier, besides acting as a daemon the Service Manager is itself a Server, and when used as a remote interface its handle is always 0; that is what makes it special. Every other Server's remote-interface handle is greater than 0 and is assigned automatically by the Binder driver. For an ordinary Server, a Client that wants its remote interface must go through getService() on the Service Manager's remote interface, which is itself a Binder IPC round trip; but to obtain the Service Manager's own remote interface no IPC is needed, because it is a special Binder reference whose handle is always 0.
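As a concrete client-side sketch (the service name "media.player" and IMediaPlayerService are only an example; any registered service works the same way):
sp<IMediaPlayerService> getMediaPlayerService()
{
    // handle 0: no lookup is needed for the Service Manager itself
    sp<IServiceManager> sm = defaultServiceManager();
    // the driver-assigned handle for "media.player" comes back in the reply parcel
    sp<IBinder> binder = sm->getService(String16("media.player"));
    // wraps the returned BpBinder(handle) in a BpMediaPlayerService
    return interface_cast<IMediaPlayerService>(binder);
}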
11. IPCThreadState::self() shows that every PoolThread (Binder thread) has its own IPCThreadState object (note: this can be seen from the use of pthread_getspecific, i.e. thread-local storage).
IPCThreadState* IPCThreadState::self()
{
if (gHaveTLS) {
restart:
const pthread_key_t k = gTLS;
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
if (st) return st;
return new IPCThreadState;
}
if (gShutdown) return NULL;
pthread_mutex_lock(&gTLSMutex);
if (!gHaveTLS) {
if (pthread_key_create(&gTLS, threadDestructor) != 0) {
pthread_mutex_unlock(&gTLSMutex);
return NULL;
}
gHaveTLS = true;
}
pthread_mutex_unlock(&gTLSMutex);
goto restart;
}
12. BpServiceManager::addService() takes the service object as a parameter and serializes the Binder with writeStrongBinder(). The serialized form is stored in a flat_binder_object: flat_binder_object.binder holds the BBinder's weak-reference object (local->getWeakRefs()) and flat_binder_object.cookie holds the BBinder pointer itself;
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated)
{
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
data.writeStrongBinder(service);
data.writeInt32(allowIsolated ? 1 : 0);
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}
writeStrongBinder()-->flatten_binder()
status_t flatten_binder(const sp<ProcessState>& proc,
const sp<IBinder>& binder, Parcel* out)
{
flat_binder_object obj;
obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
if (binder != NULL) {
IBinder *local = binder->localBinder();
// BBinder::localBinder() returns this;
if (!local) {
BpBinder *proxy = binder->remoteBinder();
// BpBinder::remoteBinder() returns this;
if (proxy == NULL) {
ALOGE("null proxy");
}
const int32_t handle = proxy ? proxy->handle() : 0;
obj.type = BINDER_TYPE_HANDLE;
obj.handle = handle;
obj.cookie = NULL;
} else {
// addService() takes this branch; note that handle is never assigned here, it will be assigned later inside the Binder driver;
obj.type = BINDER_TYPE_BINDER;
obj.binder = local->getWeakRefs();
obj.cookie = local;
}
} else {
obj.type = BINDER_TYPE_BINDER;
obj.binder = NULL;
obj.cookie = NULL;
}
return finish_flatten_binder(binder, obj, out);
}
flatten_binder() serializes the Binder into a flat_binder_object struct.
13.
Parcel::readStrongBinder() --> unflatten_binder() parses a BBinder or a BpXXXX back out of the flat_binder_object stored in the parcel;
Parcel::writeStrongBinder() --> flatten_binder() serializes the Binder into a flat_binder_object, as analyzed in point 12;
Parcel.cpp provides the android-namespace global functions acquire_object(), release_object(), finish_flatten_binder(), flatten_binder(), unflatten_binder(), finish_unflatten_binder(), and so on;
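A minimal in-process sketch of the write/read pair (illustrative only; within one process unflatten_binder() recovers the same IBinder from the flat_binder_object's cookie, whereas across processes the driver rewrites the entry into a handle):
void strongBinderRoundTrip()
{
    sp<IBinder> service = new BBinder();      // any local Binder entity (normally a BnXXXX subclass)
    Parcel p;
    p.writeStrongBinder(service);             // writeStrongBinder() --> flatten_binder()
    p.setDataPosition(0);
    sp<IBinder> back = p.readStrongBinder();  // readStrongBinder() --> unflatten_binder()
    // here back == service: type is BINDER_TYPE_BINDER and cookie holds the BBinder pointer
}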
14. writeTransactionData() packs the parcel data into a binder_transaction_data struct and then writes the command plus that struct into mOut; talkWithDriver() later hands mOut to the driver via ioctl().
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr;
tr.target.handle = handle;
tr.code = code;
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = statusBuffer;
tr.offsets_size = 0;
tr.data.ptr.offsets = NULL;
} else {
return (mLastError = err);
}
mOut.writeInt32(cmd);
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
Client side: remote()->transact() --> IPCThreadState::transact() --> writeTransactionData(); waitForResponse() --> talkWithDriver()
Service side: IPCThreadState::joinThreadPool() --> getAndExecuteCommand() --> talkWithDriver(); executeCommand(cmd) --> BBinder::transact() --> BnXXX::onTransact()
Quoting a passage from http://www.cloudchou.com/android/post-534.html :
When a client process calls an interface method, it ultimately calls remote()->transact() to submit the data to the server process. remote() is declared as an IBinder pointer, but its actual type is BpBinder (BpBinder wraps the handle of the binder entity). BpBinder::transact() calls IPCThreadState::self()->transact(); as mentioned before, IPCThreadState is what exchanges data with the driver. It first calls writeTransactionData() to fill in the data to be sent to the server process, then calls waitForResponse(), which in turn calls talkWithDriver(); talkWithDriver() ultimately calls ioctl() to exchange data with the driver, submits the data and gets back the result of the server's execution, after which waitForResponse() puts the received data into the reply object, a Parcel.
The server process generally calls joinThreadPool() to wait for client calls. joinThreadPool() first calls talkWithDriver(), ultimately ioctl(), to wait for requests handed over by the driver; when a request arrives it calls executeCommand(), which finds the BBinder object and invokes BBinder::transact(). BBinder::transact() calls the onTransact() method it leaves for subclasses to implement; the BnInterface subclasses implement onTransact() and dispatch to the right interface method based on the request code.
To get a feel for the call flow, here is a call stack from a Binder hang:
"Binder_4" prio=5 tid=57 NATIVE
| group="main" sCount=1 dsCount=0 obj=0x44217d00 self=0x723720c0
| sysTid=3255 nice=0 sched=0/0 cgrp=apps handle=1916214904
| state=S schedstat=( 0 0 0 ) utm=989 stm=410 core=0
#00 pc 000205d8 /system/lib/libc.so (__ioctl+8)
#01 pc 0002d62f /system/lib/libc.so (ioctl+14)
#02 pc 0001d5a9 /system/lib/libbinder.so (android::IPCThreadState::talkWithDriver(bool)+140)
#03 pc 0001da93 /system/lib/libbinder.so (android::IPCThreadState::waitForResponse(android::Parcel*, int*)+42)
#04 pc 0001dc9b /system/lib/libbinder.so (android::IPCThreadState::transact(int, unsigned int, android::Parcel const&, android::Parcel*, unsigned int)+118)
#05 pc 00019869 /system/lib/libbinder.so (android::BpBinder::transact(unsigned int, android::Parcel const&, android::Parcel*, unsigned int)+30)
#06 pc 00072649 /system/lib/libandroid_runtime.so
#07 pc 0001db8c /system/lib/libdvm.so (dvmPlatformInvoke+112)
#08 pc 0004e173 /system/lib/libdvm.so (dvmCallJNIMethod(unsigned int const*, JValue*, Method const*, Thread*)+398)
#09 pc 00026fa0 /system/lib/libdvm.so
#10 pc 0002dfb8 /system/lib/libdvm.so (dvmMterpStd(Thread*)+76)
#11 pc 0002b61c /system/lib/libdvm.so (dvmInterpret(Thread*, Method const*, JValue*)+184)
#12 pc 00060595 /system/lib/libdvm.so (dvmCallMethodV(Thread*, Method const*, Object*, bool, JValue*, std::__va_list)+336)
#13 pc 00049d5b /system/lib/libdvm.so
#14 pc 0004e3ab /system/lib/libandroid_runtime.so
#15 pc 0007202b /system/lib/libandroid_runtime.so
#16 pc 00019f45 /system/lib/libbinder.so (android::BpBinder::reportOneDeath(android::BpBinder::Obituary const&)+68)
#17 pc 00019fb1 /system/lib/libbinder.so (android::BpBinder::sendObituary()+76)
#18 pc 0001d9bb /system/lib/libbinder.so (android::IPCThreadState::executeCommand(int)+610)
#19 pc 0001dcd3 /system/lib/libbinder.so (android::IPCThreadState::getAndExecuteCommand()+38)
#20 pc 0001dd49 /system/lib/libbinder.so (android::IPCThreadState::joinThreadPool(bool)+48)
#21 pc 00021bb1 /system/lib/libbinder.so
#22 pc 0000ea01 /system/lib/libutils.so (android::Thread::_threadLoop(void*)+216)
#23 pc 0004e345 /system/lib/libandroid_runtime.so (android::AndroidRuntime::javaThreadShell(void*)+68)
#24 pc 0000e533 /system/lib/libutils.so
#25 pc 0000d240 /system/lib/libc.so (__thread_entry+72)
#26 pc 0000d3d8 /system/lib/libc.so (pthread_create+240)
at android.os.BinderProxy.transact(Native Method)
at android.app.IApplicationErrorListener$Stub$Proxy.onError(IApplicationErrorListener.java:99)
at com.android.server.am.ActivityManagerService$28.run(ActivityManagerService.java:17142)
at com.android.server.am.ActivityManagerService.reportApplicationError(ActivityManagerService.java:17154)
at com.android.server.am.ActivityManagerService.addErrorToDropBox(ActivityManagerService.java:10215)
at com.android.server.am.ActivityManagerService.handleApplicationWtf(ActivityManagerService.java:10096)
at com.android.internal.os.RuntimeInit.wtf(RuntimeInit.java:337)
at android.util.Log$1.onTerribleFailure(Log.java:104)
at android.util.Log.wtf(Log.java:293)
at android.util.Slog.wtf(Slog.java:82)
at com.android.server.am.ActiveServices.killServicesLocked(ActiveServices.java:2089)
at com.android.server.am.ActivityManagerService.cleanUpApplicationRecordLocked(ActivityManagerService.java:12760)
at com.android.server.am.ActivityManagerService.handleAppDiedLocked(ActivityManagerService.java:3792)
at com.android.server.am.ActivityManagerService.appDiedLocked(ActivityManagerService.java:3956)
at com.android.server.am.ActivityManagerService$AppDeathRecipient.binderDied(ActivityManagerService.java:1076)
at android.os.BinderProxy.sendDeathNotice(Binder.java:493)
at dalvik.system.NativeStart.run(Native Method)
16. BpServiceManager provides two important methods: addService() and getService().
When addService() is called, the incoming service parameter is a service entity. After writeStrongBinder() --> flatten_binder(), the BBinder is stored in a flat_binder_object, but no handle is assigned. A careful reader will notice that when Service_manager.c adds a service, the most important data it stores are name and handle, so presumably the driver assigns the handle value automatically during addService() and keeps track of the BBinder object?
When getService() is called, Service_manager.c returns a handle for the requested name; checkService() then goes readStrongBinder() --> unflatten_binder() --> ProcessState::getStrongProxyForHandle() --> new BpBinder(handle) (type = BINDER_TYPE_HANDLE).
When remote()->transact() is called, writeTransactionData() packs the data into a binder_transaction_data with binder_transaction_data.target.handle = BpBinder.mHandle and binder_transaction_data.cookie = 0; the driver fills in the BBinder that corresponds to the handle, which is how IPCThreadState::executeCommand() on the service side can dispatch into BBinder::transact() --> BnXXXX::onTransact();
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated)
{
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
data.writeStrongBinder(service);
data.writeInt32(allowIsolated ? 1 : 0);
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}
virtual sp<IBinder> checkService( const String16& name) const
{
Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
return reply.readStrongBinder();
}
17. linkToDeath() is how a client learns that a service has died. After obtaining a Binder reference, a client typically calls linkToDeath() right away, which is essentially registering a listener for the Binder service dying: as soon as the service dies, the client's IPCThreadState::executeCommand() receives a BR_DEAD_BINDER message from the Binder driver. A client that needs special handling overrides DeathRecipient::binderDied(). For example, WindowState calls c.asBinder().linkToDeath(deathRecipient, 0) to be notified when the window's client process dies and does its cleanup in deathRecipient; a really nice piece of design (a usage sketch follows the code below);
The branch of IPCThreadState::executeCommand() that handles a dead Binder service:
case BR_DEAD_BINDER:
{
BpBinder *proxy = (BpBinder*)mIn.readInt32();
proxy->sendObituary();
mOut.writeInt32(BC_DEAD_BINDER_DONE);
mOut.writeInt32((int32_t)proxy);
} break;
status_t BpBinder::linkToDeath(
const sp<DeathRecipient>& recipient, void* cookie, uint32_t flags)
{
Obituary ob;
ob.recipient = recipient;
ob.cookie = cookie;
ob.flags = flags;
LOG_ALWAYS_FATAL_IF(recipient == NULL,
"linkToDeath(): recipient must be non-NULL");
{
AutoMutex _l(mLock);
if (!mObitsSent) {
if (!mObituaries) {
mObituaries = new Vector<Obituary>;
if (!mObituaries) {
return NO_MEMORY;
}
ALOGV("Requesting death notification: %p handle %dn", this, mHandle);
getWeakRefs()->incWeak(this);
IPCThreadState* self = IPCThreadState::self();
self->requestDeathNotification(mHandle, this);
self->flushCommands();
}
ssize_t res = mObituaries->add(ob);
return res >= (ssize_t)NO_ERROR ? (status_t)NO_ERROR : res;
}
}
return DEAD_OBJECT;
}
status_t BBinder::linkToDeath(
const sp<DeathRecipient>& recipient, void* cookie, uint32_t flags)
{
return INVALID_OPERATION;
}
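A minimal client-side usage sketch (the service name "some.service" and the class name are made up for illustration):
class ServiceDeathRecipient : public IBinder::DeathRecipient {
public:
    // called from IPCThreadState::executeCommand() when BR_DEAD_BINDER
    // triggers BpBinder::sendObituary() for this proxy
    virtual void binderDied(const wp<IBinder>& who) {
        // clean up proxies, re-acquire the service, etc.
    }
};

void watchService()
{
    sp<IBinder> binder =
        defaultServiceManager()->getService(String16("some.service"));
    sp<IBinder::DeathRecipient> recipient = new ServiceDeathRecipient();
    // only meaningful on a BpBinder; BBinder::linkToDeath() returns INVALID_OPERATION as shown above
    binder->linkToDeath(recipient, NULL, 0);
}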
18. Binder driver data structures (the per-process binder_proc):
struct binder_proc {
struct hlist_node proc_node;
struct rb_root threads;
// binder_proc hangs four red-black trees off it; 'threads' holds the threads in this process that handle user requests;
struct rb_root nodes;
// 'nodes' holds the Binder entities (binder_node) living in this process;
struct rb_root refs_by_desc;
struct rb_root refs_by_node;
// these two hold the Binder references held by this process (the former keyed by handle/descriptor, the latter by node address);
int pid;
struct vm_area_struct *vma;
struct mm_struct *vma_vm_mm;
struct task_struct *tsk;
struct files_struct *files;
struct hlist_node deferred_work_node;
int deferred_work;
void *buffer;
ptrdiff_t user_buffer_offset;
struct list_head buffers;
struct rb_root free_buffers;
struct rb_root allocated_buffers;
size_t free_async_space;
struct page **pages;
size_t buffer_size;
uint32_t buffer_free;
struct list_head todo;
wait_queue_head_t wait;
struct binder_stats stats;
struct list_head delivered_death;
int max_threads;
int requested_threads;
int requested_threads_started;
int ready_threads;
long default_priority;
struct dentry *debugfs_entry;
};
19. mmap()ing the opened device file invokes the driver's binder_mmap(), which maps the same physical pages into both the kernel and the user process; this removes the copy from the kernel into the receiving process (normally there would be two copies: client to kernel, then kernel to server).
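For reference, the ProcessState constructor does roughly the following (a sketch paraphrasing libbinder's open_driver() and constructor; BINDER_VM_SIZE is defined there as (1*1024*1024) - (4096*2), i.e. the 1 MB minus 8 KB mentioned in point 8):
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/binder.h>        // BINDER_VERSION, BINDER_SET_MAX_THREADS

#define BINDER_VM_SIZE ((1 * 1024 * 1024) - (4096 * 2))

int openBinderAndMap()
{
    // open the driver; binder_open() runs in the kernel
    int fd = open("/dev/binder", O_RDWR);
    struct binder_version vers;
    ioctl(fd, BINDER_VERSION, &vers);
    size_t maxThreads = 15;                       // default max pool threads
    ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
    // binder_mmap() maps the same physical pages into the kernel and into this
    // process, so received data only has to be copied once (sender -> kernel buffer)
    void* vmStart = mmap(0, BINDER_VM_SIZE, PROT_READ,
                         MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    (void)vmStart;
    return fd;
}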
20. BnService and BpService handle the business logic; BBinder and BpBinder handle the communication.
21. register_android_os_Binder() initializes the relationship between the Java classes and native code and registers the native methods with the AndroidRuntime.
22. From gBinderMethods, gBinderProxyMethods and gBinderInternalMethods in android_util_Binder.cpp we can see that in the Java layer, Binder plays the role of the native Bn side (BBinder) and BinderProxy plays the role of BpBinder; the Java layer likewise provides its own Parcel class for carrying the communication data.
23. The Java-layer Binder object stores the native JavaBBinderHolder (which hands out a JavaBBinder, a BBinder subclass) in its mObject field;
24. A client obtains a service with IXXXX.Stub.asInterface(ServiceManager.getService("servicename")). ServiceManager.getService("servicename") returns a BinderProxy object, so the call amounts to IXXXX.Stub.asInterface(new BinderProxy()), and IXXXX.Stub.asInterface() in turn amounts to new IXXXX.Stub.Proxy(new BinderProxy()). Obtaining a remote service interface therefore really means obtaining an IXXXX.Stub.Proxy object implementing IXXXX, whose mRemote points at the BinderProxy.
25. When client/server code obtains the Java remote interface to the ServiceManager, what it actually gets is a ServiceManagerProxy: the object through which the Java layer talks to the SM.
26. When a Client uses the services provided by an XXXService through its Java remote interface, it is really the client's BinderProxy object talking to the XXXService's JavaBBinder object.
27. IPCThreadState::freeBuffer() releases a piece of the mapped memory. The Binder receiver maps a fairly large memory area with mmap(); on top of it the Binder driver uses a best-fit algorithm to dynamically allocate and free receive buffers for concurrent requests. After processing the data, the application must release the buffer with this command as soon as possible, otherwise the buffers run out and no new data can be received. The call path IPCThreadState::sendReply() --> waitForResponse() --> case BR_REPLY --> freeBuffer() --> BC_FREE_BUFFER shows that the buffer is released every time a client request has been handled.
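A sketch of IPCThreadState::freeBuffer(), roughly as it appears in recent AOSP sources (older 32-bit versions write the pointer with writeInt32() instead of writePointer()):
void IPCThreadState::freeBuffer(Parcel* parcel, const uint8_t* data,
                                size_t /*dataSize*/,
                                const binder_size_t* /*objects*/,
                                size_t /*objectsSize*/, void* /*cookie*/)
{
    if (parcel != NULL) parcel->closeFileDescriptors();
    IPCThreadState* state = self();
    // hand the mmap()ed receive buffer back to the driver's allocator
    state->mOut.writeInt32(BC_FREE_BUFFER);
    state->mOut.writePointer((uintptr_t)data);
}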
28. BC_REGISTER_LOOPER, BC_ENTER_LOOPER and BC_EXIT_LOOPER. Together with BINDER_SET_MAX_THREADS, these commands let the Binder driver manage the receiver's thread pool. BC_REGISTER_LOOPER tells the driver that a pool thread has been created; BC_ENTER_LOOPER tells it the thread has entered its main loop and can receive data; BC_EXIT_LOOPER tells it the thread is leaving the main loop and will no longer receive data.
29. BpBinder::linkToDeath() --> IPCThreadState::requestDeathNotification() --> sends BC_REQUEST_DEATH_NOTIFICATION to the driver. A process holding a Binder reference uses this command to ask the driver to notify it when the Binder entity is destroyed.
30. The BR_SPAWN_LOOPER message, also part of receiver thread-pool management. When the driver notices that all of the receiver's threads are busy and the pool has not yet reached the maximum set via BINDER_SET_MAX_THREADS, it sends this command to the receiver asking it to create another thread to receive data.
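The receiving side handles this in IPCThreadState::executeCommand() roughly like this (excerpt style, matching the BR_DEAD_BINDER fragment above):
case BR_SPAWN_LOOPER:
    // ProcessState creates one more (non-main) PoolThread; that thread then
    // reports itself to the driver with BC_REGISTER_LOOPER via joinThreadPool(false)
    mProcess->spawnPooledThread(false);
    break;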
31. BR_TRANSACTION and BR_REPLY. These correspond to the sender's BC_TRANSACTION and BC_REPLY and indicate whether the data just received is a request or a reply: a binder_transaction_data request packet (BR_TRANSACTION) or reply packet (BR_REPLY).
32. Analysis of Parcel::writeObject().
writeObject() is used in two places. One is Parcel::writeFileDescriptor(int fd, bool takeOwnership); from writeFileDescriptor() you can see that file descriptors can also be passed across Binder: the fd is wrapped in a flat_binder_object with type = BINDER_TYPE_FD and handle = fd. The other caller is Parcel::writeStrongBinder() --> flatten_binder(), used when a Binder entity or Binder reference has to cross Binder and is pushed into the parcel. So whether a file descriptor, a Binder entity or a Binder reference is being sent across Binder, a flat_binder_object is built and then pushed into the parcel buffer through Parcel::writeObject(). Let us look at how that is done.
First, the meaning of mData, mDataPos and mDataCapacity: mData points to the start of the parcel buffer, mDataCapacity is the buffer's capacity, and mDataPos points to the first free byte; the whole parcel buffer is one contiguous block of memory. Next, mObjects, mObjectsSize and mObjectsCapacity: flat_binder_object entries are written into the parcel buffer inline with the other data, so without marking them explicitly the Binder driver could never pick them back out of the data block. That is what these three variables are for: the mObjects array records the offset of each flat_binder_object within the parcel buffer, mObjectsSize is how many entries have been recorded so far, and mObjectsCapacity is how many the array can hold. With these six variables clear, writeObject() is easy to follow.
status_t Parcel::writeObject(const flat_binder_object& val, bool nullMetaData)
{
const bool enoughData = (mDataPos+sizeof(val)) <= mDataCapacity;
// is there enough free space left in the parcel buffer?
const bool enoughObjects = mObjectsSize < mObjectsCapacity;
// has the number of recorded flat_binder_object entries hit the capacity limit?
if (enoughData && enoughObjects) {
restart_write:
*reinterpret_cast<flat_binder_object*>(mData+mDataPos) = val;
// Need to write meta-data?
if (nullMetaData || val.binder != 0) {
// nullMetaData is true when writing a file descriptor; val.binder is non-zero for a Binder entity and, since binder and handle share a union in flat_binder_object, also for any non-zero handle. So Binder entities, Binder references and file descriptors are all recorded here so that the driver can find and translate them; only a NULL binder (e.g. the handle-0 reference) is left as plain data;
mObjects[mObjectsSize] = mDataPos;
// remember where this flat_binder_object starts in the parcel buffer;
acquire_object(ProcessState::self(), val, this);
// take a reference on the Binder object;
mObjectsSize++;
}
// remember if it's a file descriptor
if (val.type == BINDER_TYPE_FD) {
// if this flat_binder_object carries a file descriptor, mark mHasFds and mFdsKnown as true;
if (!mAllowFds) {
return FDS_NOT_ALLOWED;
}
mHasFds = mFdsKnown = true;
}
return finishWrite(sizeof(flat_binder_object));
// finishWrite() advances mDataPos to the new free position;
}
if (!enoughData) {
const status_t err = growData(sizeof(val));
// if there is not enough space, reallocate a larger buffer (I have not dug into the growth algorithm; mDataSize appears to simply be how much data the parcel currently holds);
if (err != NO_ERROR) return err;
}
if (!enoughObjects) {
// if the number of flat_binder_object entries has hit the limit, grow mObjects as well;
size_t newSize = ((mObjectsSize+2)*3)/2;
binder_size_t* objects = (binder_size_t*)realloc(mObjects, newSize*sizeof(binder_size_t));
if (objects == NULL) return NO_MEMORY;
mObjects = objects;
mObjectsCapacity = newSize;
}
goto restart_write;
// after growing, jump back and write the flat_binder_object again, until it succeeds;
}
We now know that the parcel buffer holds the data to be sent across processes, flat_binder_object entries included, and that the bookkeeping lives in mData, mDataPos, mObjects and mObjectsSize. These are only member variables of the Parcel object and are not stored in the parcel buffer itself, so the information has to be packaged up again before it goes to the driver. That packaging is done in IPCThreadState::writeTransactionData():
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr;
tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
tr.target.handle = handle;
// which service (handle) this data is destined for;
tr.code = code;
// the code agreed with the service (i.e. which service function to call);
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
// size of the data being sent (size of the parcel buffer contents);
tr.data.ptr.buffer = data.ipcData();
// start address of the parcel buffer;
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
// byte size of the offsets array: the number of flat_binder_object entries recorded in the parcel times sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
// start address of the parcel's mObjects offsets array;
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
tr.offsets_size = 0;
tr.data.ptr.offsets = 0;
} else {
return (mLastError = err);
}
mOut.writeInt32(cmd);
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
After the data is packed, ioctl() is called to enter the kernel, where binder_ioctl() runs. Let us keep following the data into the driver, starting with binder_thread_write() in binder.c:
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
// start address of the buffer;
void __user *ptr = buffer + *consumed;
// the client writes data into the buffer (producer) and the Binder driver consumes it; *consumed records how far into the buffer the driver has already processed;
void __user *end = buffer + size;
// the buffer is one contiguous block, so buffer + size is its end;
while (ptr < end && thread->return_error == BR_OK) {
if (get_user(cmd, (uint32_t __user *)ptr))
// read the command;
return -EFAULT;
ptr += sizeof(uint32_t);
trace_binder_command(cmd);
if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
binder_stats.bc[_IOC_NR(cmd)]++;
proc->stats.bc[_IOC_NR(cmd)]++;
thread->stats.bc[_IOC_NR(cmd)]++;
}
switch (cmd) {
...........
case BC_TRANSACTION:
// these two are the most important commands: BC_TRANSACTION is the client calling the service, BC_REPLY is the service replying to the client;
case BC_REPLY: {
struct binder_transaction_data tr;
if (copy_from_user(&tr, ptr, sizeof(tr)))
// copy the binder_transaction_data packed by the client from its user space into kernel space;
return -EFAULT;
ptr += sizeof(tr);
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
// hand off to binder_transaction() for further processing;
break;
}
default:
pr_err("%d:%d unknown command %d\n",
proc->pid, thread->pid, cmd);
return -EINVAL;
}
*consumed = ptr - buffer;
}
return 0;
}
Now continue into binder_transaction():
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
struct binder_transaction *t;
struct binder_work *tcomplete;
binder_size_t *offp, *off_end;
binder_size_t off_min;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
struct list_head *target_list;
wait_queue_head_t *target_wait;
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error;
#ifdef BINDER_MONITOR
struct binder_transaction_log_entry log_entry;
unsigned int log_idx = -1;
if ((reply && (tr->data_size < (proc->buffer_size/16))) || log_disable)
e = &log_entry;
else
{
e = binder_transaction_log_add(&binder_transaction_log);
if (binder_transaction_log.next)
log_idx = binder_transaction_log.next - 1;
else
log_idx = binder_transaction_log.size - 1;
}
#else
e = binder_transaction_log_add(&binder_transaction_log);
#endif
e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
e->from_proc = proc->pid;
e->from_thread = thread->pid;
e->target_handle = tr->target.handle;
e->data_size = tr->data_size;
e->offsets_size = tr->offsets_size;
#ifdef BINDER_MONITOR
e->code = tr->code;
/* fd 0 is also valid... set initial value to -1 */
e->fd = -1;
do_posix_clock_monotonic_gettime(&e->timestamp);
//monotonic_to_bootbased(&e->timestamp);
do_gettimeofday(&e->tv);
/* consider time zone. translate to android time */
e->tv.tv_sec -= (sys_tz.tz_minuteswest * 60);
#endif
if (reply) {
// handle a reply transaction returned by the service;
in_reply_to = thread->transaction_stack;
if (in_reply_to == NULL) {
binder_user_error("%d:%d got reply transaction with no transaction stackn",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
goto err_empty_call_stack;
}
#ifdef BINDER_MONITOR
binder_cancel_bwdog(in_reply_to);
#endif
binder_set_nice(in_reply_to->saved_priority);
#ifdef RT_PRIO_INHERIT
if (rt_task(current) && (MAX_RT_PRIO != in_reply_to->saved_rt_prio) &&
!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED))) {
struct sched_param param = {
.sched_priority = in_reply_to->saved_rt_prio,
};
mt_sched_setscheduler_nocheck(current,
in_reply_to->saved_policy, &param);
#ifdef BINDER_MONITOR
if (log_disable & BINDER_RT_LOG_ENABLE)
{
pr_debug("reply reset %d sched_policy from %d to %d rt_prio from %d to %dn",
proc->pid, in_reply_to->policy, in_reply_to->saved_policy,
in_reply_to->rt_prio, in_reply_to->saved_rt_prio);
}
#endif
}
#endif
if (in_reply_to->to_thread != thread) {
binder_user_error("%d:%d got reply transaction with bad transaction stack, transaction %d has target %d:%dn",
proc->pid, thread->pid, in_reply_to->debug_id,
in_reply_to->to_proc ?
in_reply_to->to_proc->pid : 0,
in_reply_to->to_thread ?
in_reply_to->to_thread->pid : 0);
return_error = BR_FAILED_REPLY;
in_reply_to = NULL;
goto err_bad_call_stack;
}
thread->transaction_stack = in_reply_to->to_parent;
target_thread = in_reply_to->from;
if (target_thread == NULL) {
#ifdef MTK_BINDER_DEBUG
binder_user_error("%d:%d got reply transaction "
"with bad transaction reply_from, "
"transaction %d has target %d:%dn",
proc->pid, thread->pid, in_reply_to->debug_id,
in_reply_to->to_proc ?
in_reply_to->to_proc->pid : 0,
in_reply_to->to_thread ?
in_reply_to->to_thread->pid : 0);
#endif
return_error = BR_DEAD_REPLY;
goto err_dead_binder;
}
if (target_thread->transaction_stack != in_reply_to) {
binder_user_error("%d:%d got reply transaction with bad target transaction stack %d, expected %dn",
proc->pid, thread->pid,
target_thread->transaction_stack ?
target_thread->transaction_stack->debug_id : 0,
in_reply_to->debug_id);
return_error = BR_FAILED_REPLY;
in_reply_to = NULL;
target_thread = NULL;
goto err_dead_binder;
}
target_proc = target_thread->proc;
#ifdef BINDER_MONITOR
e->service[0] = '