Mastering Binder in Depth (Part 2)

Preface

In the previous article, Mastering Binder in Depth (Part 1), we covered Binder's overall architecture and its components, and took a deep look at the Binder driver and ServiceManager. This article covers the remaining two parts, the Client and the Server. If you have not read Part 1 yet, I recommend finishing it first and then coming back here; only the two articles together give a complete, in-depth picture of the Binder system.

How Binder Is Implemented

To make the Client and Server roles concrete, I will use the most familiar scenario, calling startActivity from an application process, to explain how the Client side and the Server side use Binder. In this scenario the application process is the Client, and ActivityManagerService (AMS), which lives in the system_server process, is the Server. Once we understand how the application process uses Binder to ask AMS to start an Activity, we understand how every Client and Server in Android communicate over Binder, because the underlying mechanism is the same in every other scenario.

Client

Whether we tap an app icon on the launcher or call startActivity ourselves, we are ultimately making a Binder call into ActivityManagerService to start the given Activity. While starting the Activity, AMS checks whether the process that should host it exists; if it does not, AMS asks Zygote to fork that process. For the full launch flow see my article 《Activity启动详解》. Here we only need to know that every process forked by Zygote runs the onZygoteInit callback, so our application process also runs onZygoteInit when it is first created, and onZygoteInit opens Binder by default.

/frameworks/base/cmds/app_process/app_main.cpp

virtual void onZygoteInit() {
sp<ProcessState> proc = ProcessState::self();
proc->startThreadPool();
}

As covered earlier, ServiceManager's use of Binder has four main steps: first, call open to open the binder device; second, call mmap to map memory; third, register ServiceManager as the Binder context manager inside the driver; fourth, put the ServiceManager process into a loop that keeps calling ioctl to check whether data has arrived in its buffer.

Although onZygoteInit contains only two lines of code, it performs the equivalent of ServiceManager's first, second, and fourth steps. The two lines do the following:

  1. ProcessState::self() opens the binder driver and maps the memory
  2. proc->startThreadPool() puts a binder thread into an infinite loop that keeps reading from and writing to the binder driver via ioctl

Opening the Binder Driver

ProcessState

Let's first look at how ProcessState::self() opens the binder driver. A ProcessState object is essentially the per-process helper for operating on Binder; it is a process-wide singleton obtained through self().

/frameworks/native/libs/binder/ProcessState.cpp

sp<ProcessState> ProcessState::self()
{
Mutex::Autolock _l(gProcessMutex);
if (gProcess != NULL) {
return gProcess;
}
gProcess = new ProcessState("/dev/binder");
return gProcess;
}

The ProcessState constructor is implemented as follows:

/frameworks/native/libs/binder/ProcessState.cpp

#define BINDER_VM_SIZE ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2)
ProcessState::ProcessState(const char *driver)
: mDriverName(String8(driver))
, mDriverFD(open_driver(driver)) // open the binder driver
, mVMStart(MAP_FAILED)
, mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
, mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
, mExecutingThreadsCount(0)
, mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
, mStarvationStartTimeMs(0)
, mManagesContexts(false)
, mBinderContextCheckFunc(NULL)
, mBinderContextUserData(NULL)
, mThreadPoolStarted(false)
, mThreadPoolSeq(1)
{
if (mDriverFD >= 0) {
mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0); // map the driver's kernel buffer into this process's user space
if (mVMStart == MAP_FAILED) {
close(mDriverFD);
mDriverFD = -1;
mDriverName.clear();
}
}
}

The ProcessState constructor mainly does two things:

  1. It calls open_driver("/dev/binder") to open the binder driver, which lives in kernel space
  2. It calls mmap to map the binder driver's kernel memory into the current process's user space
open_driver

Let's look at the implementation of open_driver("/dev/binder") first.

/frameworks/native/libs/binder/ProcessState.cpp

static int open_driver(const char *driver)
{
//open the binder driver
int fd = open(driver, O_RDWR | O_CLOEXEC);
if (fd >= 0) {
int vers = 0;
//query the current binder protocol version
status_t result = ioctl(fd, BINDER_VERSION, &vers);
if (result == -1) {
close(fd);
fd = -1;
}
if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
close(fd);
fd = -1;
}
size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
//set the maximum number of binder threads
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
if (result == -1) {
ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
}
} else {
ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
}
return fd;
}

open_driver does three things:

  1. It calls the I/O function open to open the driver; open goes through a system call and ends up in the driver's binder_open function
  2. It calls ioctl with BINDER_VERSION to query the protocol version; ioctl likewise goes through a system call into the driver's binder_ioctl function
  3. It calls ioctl with BINDER_SET_MAX_THREADS to set the maximum number of binder threads for this process, which here is 15 (DEFAULT_MAX_BINDER_THREADS)

The previous article already walked through the system-call path and the driver's binder_open implementation, as well as how binder_ioctl handles the BINDER_SET_CONTEXT_MGR and BINDER_WRITE_READ commands, so I won't repeat that here. What I will add is how binder_ioctl handles the remaining two commands, BINDER_VERSION and BINDER_SET_MAX_THREADS.

Handling the BINDER_VERSION command

/drivers/staging/android/binder.c

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
……
switch (cmd) {
case BINDER_WRITE_READ:
……
break;
case BINDER_SET_MAX_THREADS:
//copy the max thread count from user space into the kernel and store it in binder_proc's max_threads field
if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
ret = -EINVAL;
goto err;
}
break;
case BINDER_SET_CONTEXT_MGR:
……
break;
case BINDER_THREAD_EXIT:
……
break;
case BINDER_VERSION: {
struct binder_version __user *ver = ubuf;
if (size != sizeof(struct binder_version)) {
ret = -EINVAL;
goto err;
}
//write the binder driver's protocol version back to user space
if (put_user(BINDER_CURRENT_PROTOCOL_VERSION,
&ver->protocol_version)) {
ret = -EINVAL;
goto err;
}
break;
}
default:
ret = -EINVAL;
goto err;
}
ret = 0;
err:
……
return ret;
}

In the driver's BINDER_VERSION branch, put_user is called to write the current protocol version into the user-space address **&vers**. put_user is a Linux kernel helper that, like copy_to_user, copies data from kernel space to user space, but put_user only handles simple scalar types; complex structures still have to be copied with copy_to_user.

Handling the BINDER_SET_MAX_THREADS command

In the BINDER_SET_MAX_THREADS branch, the driver calls copy_from_user to copy the thread count into kernel space and stores it in proc->max_threads.

mmap

After open_driver has opened the binder device and set the thread limit, the ProcessState constructor calls mmap to map the memory. How mmap works, and the driver's binder_mmap implementation, were covered in detail in the previous article, so I won't repeat them. What's worth noting is the size of the mapping: BINDER_VM_SIZE, i.e. 1 MB minus two pages (8 KB with 4 KB pages), whereas ServiceManager, as we saw earlier, maps only 128 KB.

Entering the Loop

Once the ProcessState constructor has opened the binder driver with open_driver and mapped memory with mmap, proc->startThreadPool() is called to put a binder thread into an infinite loop. Let's see how that works.

/frameworks/native/libs/binder/ProcessState.cpp

void ProcessState::startThreadPool()
{
AutoMutex _l(mLock);
if (!mThreadPoolStarted) {
mThreadPoolStarted = true;
spawnPooledThread(true);
}
}
void ProcessState::spawnPooledThread(bool isMain)
{
if (mThreadPoolStarted) {
String8 name = makeBinderThreadName();
ALOGV("Spawning new pooled thread, name=%s\n", name.string());
sp<Thread> t = new PoolThread(isMain);
t->run(name.string());
}
}

startThreadPool actually starts a PoolThread, an inner class of ProcessState, and runs it.

/frameworks/native/libs/binder/ProcessState.cpp

class PoolThread : public Thread
{
public:
explicit PoolThread(bool isMain)
: mIsMain(isMain)
{
}
protected:
virtual bool threadLoop()
{
IPCThreadState::self()->joinThreadPool(mIsMain);
return false;
}
const bool mIsMain;
};

IPCThreadState

When the PoolThread starts, its threadLoop runs IPCThreadState::self()->joinThreadPool(mIsMain). Binder communication is built on a multi-threaded model: each binder thread has its own IPCThreadState, and IPCThreadState is the object that actually reads from and writes to the binder driver. It is a per-thread singleton implemented with thread-local storage (the native counterpart of Java's ThreadLocal), so every thread has exactly one instance. Let's look at the logic inside IPCThreadState::self(); a small Java analogy of this per-thread singleton follows the listing.

/frameworks/native/libs/binder/IPCThreadState.cpp

IPCThreadState* IPCThreadState::self()
{
//gHaveTLS is false the first time we get here
if (gHaveTLS) {
restart:
const pthread_key_t k = gTLS;
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
if (st) return st;
return new IPCThreadState;
}
if (!gHaveTLS) {
//create the thread-local storage key
int key_create_value = pthread_key_create(&gTLS, threadDestructor);
gHaveTLS = true;
}
goto restart;
}
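The pthread_key_create / pthread_getspecific pair above plays the same role as a ThreadLocal in Java. Purely as an analogy (the class and names below are hypothetical, not Android APIs), the per-thread singleton pattern looks like this:

public final class PerThreadState {
    // each thread lazily creates its own instance and always gets the same one back
    private static final ThreadLocal<PerThreadState> sTls =
            ThreadLocal.withInitial(PerThreadState::new);

    private PerThreadState() {
        // per-thread initialization, e.g. allocating the equivalents of mIn/mOut
    }

    public static PerThreadState self() {
        return sTls.get();
    }
}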

Next, let's look at the IPCThreadState constructor.

/frameworks/native/libs/binder/IPCThreadState.cpp

IPCThreadState::IPCThreadState()
: mProcess(ProcessState::self()),
mStrictModePolicy(0),
mLastTransactionBinderFlags(0)
{
pthread_setspecific(gTLS, this);
clearCaller();
mIn.setDataCapacity(256);
mOut.setDataCapacity(256);
}

The IPCThreadState constructor initializes two buffers, mIn and mOut: mIn holds data written to us by other processes, and mOut holds data we want to send to other processes.

joinThreadPool

Now that we know what IPCThreadState is, let's look at its joinThreadPool implementation.

/frameworks/native/libs/binder/IPCThreadState.cpp

void IPCThreadState::joinThreadPool(bool isMain)
{
mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
status_t result;
do {
// now get the next command to be processed, waiting if necessary
result = getAndExecuteCommand();
if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
abort();
}
if(result == TIMED_OUT && !isMain) {
break;
}
} while (result != -ECONNREFUSED && result != -EBADF);
mOut.writeInt32(BC_EXIT_LOOPER);
talkWithDriver(false);
}

joinThreadPool loops forever in a do-while, repeatedly calling getAndExecuteCommand to read and write data.

/frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::getAndExecuteCommand()
{
status_t result;
int32_t cmd;
//read from and write to the binder driver
result = talkWithDriver();
if (result >= NO_ERROR) {
size_t IN = mIn.dataAvail();
if (IN < sizeof(int32_t)) return result;
//another process has sent us data
cmd = mIn.readInt32();
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount++;
if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs == 0) {
mProcess->mStarvationStartTimeMs = uptimeMillis();
}
pthread_mutex_unlock(&mProcess->mThreadCountLock);
//handle the data written by the other process
result = executeCommand(cmd);
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount--;
if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs != 0) {
int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
if (starvationTimeMs > 100) {
ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
mProcess->mMaxThreads, starvationTimeMs);
}
mProcess->mStarvationStartTimeMs = 0;
}
pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
pthread_mutex_unlock(&mProcess->mThreadCountLock);
}
return result;
}

The key step in getAndExecuteCommand is talkWithDriver, which wraps the ioctl call that talks to the binder driver directly. After talkWithDriver returns, the function checks whether the mIn buffer contains data; if it does, another process has written something to us, and executeCommand is called to handle it.

Let's look at talkWithDriver first.

talkWithDriver

/frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
if (mProcess->mDriverFD <= 0) {
return -EBADF;
}
binder_write_read bwr;
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
if (doReceive && needRead) {
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
// nothing to send and nothing to read, so return early
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
//send data to, and receive data from, the binder driver
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
if (mProcess->mDriverFD <= 0) {
err = -EBADF;
}
} while (err == -EINTR);
if (err >= NO_ERROR) {
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else
mOut.setDataSize(0);
}
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
return NO_ERROR;
}
return err;
}

talkWithDriver calls ioctl and hands the driver a binder_write_read struct, which contains a write_buffer and a read_buffer: write_buffer holds the data to be delivered to another process, and read_buffer receives the data that process sends back. The driver handles the BINDER_WRITE_READ command in binder_ioctl. The previous article showed that once ServiceManager enters its loop it also keeps calling ioctl to read and write data, and showed how the driver handles ServiceManager's BINDER_WRITE_READ command, so I won't repeat that here; the flow is identical, the only difference being that the call is wrapped by the IPCThreadState object.

Sending a Message

With the previous steps done, the process can now send messages over Binder. Next we walk through the full Binder conversation between the application process and the system_server process that hosts AMS. Let's first get a rough picture of what happens when we call startActivity.

Asking AMS to Start an Activity

Every startActivity variant ends up in startActivityForResult; here is its implementation.

/frameworks/base/core/java/android/app/Activity.java

public void startActivityForResult(@RequiresPermission Intent intent, int requestCode,
@Nullable Bundle options) {
if (mParent == null) {
options = transferSpringboardActivityOptions(options);
//start the activity
Instrumentation.ActivityResult ar =
mInstrumentation.execStartActivity(
this, mMainThread.getApplicationThread(), mToken, this,
intent, requestCode, options);
if (ar != null) {
mMainThread.sendActivityResult(
mToken, mEmbeddedID, requestCode, ar.getResultCode(),
ar.getResultData());
}
if (requestCode >= 0) {
mStartedActivity = true;
}
cancelInputsAndStartExitTransition(options);
} else {
……
}
}

startActivityForResult actually delegates to the Instrumentation object's execStartActivity method.

/frameworks/base/core/java/android/app/Instrumentation.java

public ActivityResult execStartActivity(
Context who, IBinder contextThread, IBinder token, Activity target,
Intent intent, int requestCode, Bundle options) {
IApplicationThread whoThread = (IApplicationThread) contextThread;
Uri referrer = target != null ? target.onProvideReferrer() : null;
if (referrer != null) {
intent.putExtra(Intent.EXTRA_REFERRER, referrer);
}
……
try {
intent.migrateExtraStreamToClipData();
intent.prepareToLeaveProcess(who);
int result = ActivityManager.getService()
.startActivity(whoThread, who.getBasePackageName(), intent,
intent.resolveTypeIfNeeded(who.getContentResolver()),
token, target != null ? target.mEmbeddedID : null,
requestCode, 0, null, options);
checkStartActivityResult(result, intent);
} catch (RemoteException e) {
throw new RuntimeException("Failure from system", e);
}
return null;
}

The key line in execStartActivity is ActivityManager.getService().startActivity(...). This single line is what calls into AMS to start the activity, and it breaks down into three parts:

  1. Obtain ServiceManager's binder handle and build a ServiceManager proxy in the application process from that handle
  2. Obtain ActivityManagerService's binder handle and build an AMS proxy in the application process from that handle
  3. Ask ActivityManagerService to start the activity

A Proxy is the Server's representative on the Client side; internally it simply holds the Server's binder handle. The Client talks to the Server by calling a business method on the Proxy, such as startActivity, without having to build Binder's data formats, wrap its protocols, or process replies itself. Without the Proxy that Android provides, we would have to define the data format, wire format, and command protocol between Server and Client ourselves, which would make Binder cumbersome to use. Proxy classes are normally generated at build time: once the business methods between Server and Client are declared in an AIDL file, the build generates the Proxy used by the Client from that file. A minimal illustration follows.
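To make the Proxy/Stub split concrete, here is a minimal, hypothetical example of the same pattern at the application level. IRemoteCounter and its increment method are made up for illustration; the generated Proxy/Stub pair works the same way IActivityManager's does.

// IRemoteCounter.aidl (hypothetical) -- the build generates IRemoteCounter.Stub
// and IRemoteCounter.Stub.Proxy from this declaration:
//
//   interface IRemoteCounter {
//       int increment(int by);
//   }

import android.content.ComponentName;
import android.content.ServiceConnection;
import android.os.IBinder;
import android.os.RemoteException;

// Client side: turn the raw IBinder delivered by bindService() into a typed proxy and call it.
public class CounterClient {
    // Pass this connection to Context.bindService(); onServiceConnected delivers the IBinder.
    final ServiceConnection connection = new ServiceConnection() {
        @Override
        public void onServiceConnected(ComponentName name, IBinder binder) {
            IRemoteCounter counter = IRemoteCounter.Stub.asInterface(binder); // wraps a BpBinder
            try {
                int value = counter.increment(1); // the Proxy packs a Parcel and calls transact()
            } catch (RemoteException e) {
                // the server process died while we were calling into it
            }
        }

        @Override
        public void onServiceDisconnected(ComponentName name) { }
    };
}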

Let's look at each of these three steps in detail.

Obtaining the ServiceManager

First, the implementation of ActivityManager.getService():

/frameworks/base/core/java/android/app/ActivityManager.java


public static final String ACTIVITY_SERVICE = "activity";
public static IActivityManager getService() {
return IActivityManagerSingleton.get();
}
private static final Singleton<IActivityManager> IActivityManagerSingleton =
new Singleton<IActivityManager>() {
@Override
protected IActivityManager create() {
//get ActivityManagerService's binder handle
final IBinder b = ServiceManager.getService(Context.ACTIVITY_SERVICE);
//convert it into the AMS Proxy
final IActivityManager am = IActivityManager.Stub.asInterface(b);
return am;
}
};

As you can see, ActivityManager.getService() does two things:

  1. It calls ServiceManager.getService(Context.ACTIVITY_SERVICE) to obtain AMS's binder handle
  2. It calls IActivityManager.Stub.asInterface(b) to build the AMS proxy for the user process from that handle

Next, the flow of **ServiceManager.getService(Context.ACTIVITY_SERVICE)**.

/frameworks/base/core/java/android/os/ServiceManager.java

public static IBinder getService(String name) {
try {
//check whether the cache already holds AMS's binder
IBinder service = sCache.get(name);
if (service != null) {
return service;
} else {
//ask ServiceManager for AMS's binder
return Binder.allowBlocking(getIServiceManager().getService(name));
}
} catch (RemoteException e) {
Log.e(TAG, "error in getService", e);
}
return null;
}
private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}
// obtain ServiceManager and wrap it in a proxy
sServiceManager = ServiceManagerNative
.asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
return sServiceManager;
}

ServiceManager.getService() does the following:

  1. It checks whether the sCache already contains AMS's binder
  2. If not, it calls BinderInternal.getContextObject() to obtain ServiceManager's binder handle
  3. From that handle it builds the ServiceManager proxy for the user process
  4. With that proxy in place, it asks ServiceManager for ActivityManagerService's binder handle

Let's start with BinderInternal.getContextObject(). It is a native method, and its native implementation lives in the ProcessState object we saw earlier.

/frameworks/native/libs/binder/ProcessState.cpp

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
return getStrongProxyForHandle(0);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
IBinder* b = e->binder;
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
Parcel data;
//send a PING_TRANSACTION to ServiceManager
status_t status = IPCThreadState::self()->transact(
0, IBinder::PING_TRANSACTION, data, NULL, 0);
if (status == DEAD_OBJECT)
return NULL;
}
//create a BpBinder with handle 0
b = new BpBinder(handle);
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}

getContextObject does two things:

  1. It calls IPCThreadState's transact to send the PING_TRANSACTION protocol to ServiceManager, to verify that the binder driver and ServiceManager are working
  2. It builds a BpBinder with handle 0, which is ServiceManager's address in the binder driver
IPCThreadState sends the PING_TRANSACTION command

Let's take the first item: sending a message to the binder driver via IPCThreadState's transact. We already met IPCThreadState earlier; whether it is ServiceManager or any other process, sending and receiving data through the binder driver always goes through the ioctl system call, so IPCThreadState is really just a wrapper around those ioctl operations, implemented as a per-thread singleton. Here is how it actually sends a message.

transact

IPCThreadState sends data through its transact function; here is its implementation.

/frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
status_t err = data.errorCheck();
flags |= TF_ACCEPT_FDS;
if (err == NO_ERROR) {
//build and package the data to be transmitted
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
if (err != NO_ERROR) {
if (reply) reply->setError(err);
return (mLastError = err);
}
//send the data
if ((flags & TF_ONE_WAY) == 0) {
if (reply) {
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
} else {
err = waitForResponse(NULL, NULL);
}
return err;
}

transact does two things:

  1. It calls writeTransactionData to build the data to transmit
  2. It calls waitForResponse to send it
writeTransactionData

First, how writeTransactionData builds the data.

/frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
binder_transaction_data tr;
tr.target.ptr = 0;
//set the target handle; here it is 0, i.e. ServiceManager
tr.target.handle = handle;
tr.code = code;
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
//pack the payload into the binder_transaction_data struct
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
tr.flags |= TF_STATUS_CODE;
*statusBuffer = err;
tr.data_size = sizeof(status_t);
tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
tr.offsets_size = 0;
tr.data.ptr.offsets = 0;
} else {
return (mLastError = err);
}
mOut.writeInt32(cmd);
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}

The data is built as follows:

  1. The payload and the information about the target, such as its binder handle, are packed into a binder_transaction_data struct
  2. The packed binder_transaction_data is placed into the mOut buffer, and the command code is set to BC_TRANSACTION. We have already seen several protocols such as PING_TRANSACTION; those are the Server's business protocols, whereas BC_TRANSACTION belongs to the binder driver's protocol. The layering is much like network protocols: just as network data is wrapped in HTTP, then TCP, then IP, a Binder message first carries the business protocol, such as PING_TRANSACTION, and is then wrapped in the driver protocol, such as BC_TRANSACTION.

With the data built, let's see how waitForResponse sends it.

waitForResponse

transact decides whether this Binder call expects a reply based on flags & TF_ONE_WAY == 0; if a reply is expected, it blocks until the binder driver returns data. TF_ONE_WAY is 1, and the flags argument passed to transact above was 0, so this call is not one-way: it blocks and waits, calling waitForResponse(&fakeReply), whose implementation is below.

/frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t)mIn.readInt32();
//handle the returned data
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
if (!reply && !acquireResult) goto finish;
break;
case BR_DEAD_REPLY:
err = DEAD_OBJECT;
goto finish;
case BR_FAILED_REPLY:
err = FAILED_TRANSACTION;
goto finish;
case BR_ACQUIRE_RESULT:
{
ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
const int32_t result = mIn.readInt32();
if (!acquireResult) continue;
*acquireResult = result ? NO_ERROR : INVALID_OPERATION;
}
goto finish;
case BR_REPLY:
{
binder_transaction_data tr;
err = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
if (err != NO_ERROR) goto finish;
if (reply) {
if ((tr.flags & TF_STATUS_CODE) == 0) {
reply->ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t),
freeBuffer, this);
} else {
err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
}
} else {
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
continue;
}
}
goto finish;
default:
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
if (err != NO_ERROR) {
if (acquireResult) *acquireResult = err;
if (reply) reply->setError(err);
mLastError = err;
}
return err;
}

waitForResponse blocks inside a while(1) loop and mainly does two things:

  1. It calls talkWithDriver to send data to and receive data from the binder driver
  2. It processes whatever the driver returns, according to the returned protocol code
talkWithDriver

Let's look at talkWithDriver first. We saw this function earlier; at its core it just wraps the ioctl call that sends data to and receives data from the binder driver.

/frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
if (mProcess->mDriverFD <= 0) {
return -EBADF;
}
binder_write_read bwr;
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
if (doReceive && needRead) {
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
// nothing to send and nothing to read, so return early
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
//send data to, and receive data from, the binder driver
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
if (mProcess->mDriverFD <= 0) {
err = -EBADF;
}
} while (err == -EINTR);
if (err >= NO_ERROR) {
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else
mOut.setDataSize(0);
}
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
return NO_ERROR;
}
return err;
}

talkWithDriver packs the mIn and mOut buffers into the read_buffer and write_buffer fields of a binder_write_read struct and passes that struct to the binder driver via ioctl. The driver reads write_buffer to get the data being sent, and writes the Server's reply into read_buffer.

The previous article already covered how the binder driver handles the BINDER_WRITE_READ command that ServiceManager issues through ioctl; here we go through the driver's BINDER_WRITE_READ handling once more.

The Binder Driver Handles the Request

Next, the logic in the driver's binder_ioctl that handles the BINDER_WRITE_READ command.

/drivers/staging/android/binder.c

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret)
goto err_unlocked;
binder_lock(__func__);
//get, or create, the binder_thread for the calling thread
thread = binder_get_thread(proc);
if (thread == NULL) {
ret = -ENOMEM;
goto err;
}
switch (cmd) {
case BINDER_WRITE_READ:
//handle the read/write request
ret = binder_ioctl_write_read(filp, cmd, arg, thread);
if (ret)
goto err;
break;
case BINDER_SET_MAX_THREADS:
……
break;
case BINDER_SET_CONTEXT_MGR:
……
break;
case BINDER_THREAD_EXIT:
……
case BINDER_VERSION: {
……
break;
}
default:
ret = -EINVAL;
goto err;
}
ret = 0;
……
return ret;
}

The BINDER_WRITE_READ branch calls binder_ioctl_write_read; here is its implementation.

binder_ioctl_write_read

/drivers/staging/android/binder.c

static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)
{
int ret = 0;
//get the calling process's binder_proc
struct binder_proc *proc = filp->private_data;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;
if (size != sizeof(struct binder_write_read)) {
ret = -EINVAL;
goto out;
}
if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
if (bwr.write_size > 0) {
//process write_buffer, i.e. the data to be delivered to another process
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
trace_binder_write_done(ret);
if (ret < 0) {
bwr.read_consumed = 0;
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
if (bwr.read_size > 0) {
//process read_buffer, i.e. the data other processes have sent to us
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
trace_binder_read_done(ret);
if (!list_empty(&proc->todo))
wake_up_interruptible(&proc->wait);
if (ret < 0) {
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
out:
return ret;
}

binder_ioctl_write_read looks at write_size and read_size in bwr to decide whether there is data to write or read. When discussing ServiceManager I covered the read_buffer path; here we look at the write_buffer path, which is handled in binder_thread_write.

binder_thread_write

binder_thread_write handles the BC_-prefixed protocol codes, which were attached when writeTransactionData built the data. As mentioned above, the code here is BC_TRANSACTION, meaning the Client is sending data to the Server. Let's see how the function handles it.

/drivers/staging/android/binder.c

static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
while (ptr < end && thread->return_error == BR_OK) {
if (get_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
trace_binder_command(cmd);
if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
binder_stats.bc[_IOC_NR(cmd)]++;
proc->stats.bc[_IOC_NR(cmd)]++;
thread->stats.bc[_IOC_NR(cmd)]++;
}
switch (cmd) {
case BC_INCREFS:
case BC_ACQUIRE:
case BC_RELEASE:
case BC_DECREFS: ……
case BC_INCREFS_DONE:
case BC_ACQUIRE_DONE: ……
case BC_ATTEMPT_ACQUIRE:……
case BC_ACQUIRE_RESULT:……
case BC_FREE_BUFFER: ……
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
if (copy_from_user(&tr, ptr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
break;
}
case BC_REGISTER_LOOPER:……
case BC_ENTER_LOOPER:……
case BC_EXIT_LOOPER:……
case BC_REQUEST_DEATH_NOTIFICATION:
case BC_CLEAR_DEATH_NOTIFICATION: ……
case BC_DEAD_BINDER_DONE: ……
default:return -EINVAL;
}
*consumed = ptr - buffer;
}
return 0;
}

The BC_TRANSACTION branch does two things:

  1. It calls copy_from_user to copy the data in the buffer from user space into the driver's kernel space
  2. It calls binder_transaction to process the data
binder_transaction

Let's see how binder_transaction processes the buffer.

/drivers/staging/android/binder.c

static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
struct binder_transaction *t;
struct binder_work *tcomplete;
binder_size_t *offp, *off_end;
binder_size_t off_min;
struct binder_proc *target_proc; // target process's binder_proc
struct binder_thread *target_thread = NULL; // target thread
struct binder_node *target_node = NULL; // target binder node
struct list_head *target_list; // target todo list
wait_queue_head_t *target_wait; // target wait queue
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error;
if (reply) {
……
} else {
if (tr->target.handle) {
struct binder_ref *ref;
//if the target handle is not 0, look the binder up in binder_proc's refs_by_desc tree
ref = binder_get_ref(proc, tr->target.handle);
if (ref == NULL) {
……
goto err_invalid_target_handle;
}
target_node = ref->node;
} else {
//if the handle is 0, the target binder is ServiceManager, so use the global binder_context_mgr_node
target_node = binder_context_mgr_node;
if (target_node == NULL) {
return_error = BR_DEAD_REPLY;
goto err_no_context_mgr_node;
}
}
e->to_node = target_node->debug_id;
//get the target binder's binder_proc
target_proc = target_node->proc;
……
}
if (target_thread) {
e->to_thread = target_thread->pid;
target_list = &target_thread->todo;
target_wait = &target_thread->wait;
} else {
//on the first transaction target_thread is NULL, so we take this branch
target_list = &target_proc->todo;
target_wait = &target_proc->wait;
}
e->to_proc = target_proc->pid;
//allocate a binder_transaction
t = kzalloc(sizeof(*t), GFP_KERNEL);
//allocate a binder_work node for the completion notification
tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
if (tcomplete == NULL) {
return_error = BR_FAILED_REPLY;
goto err_alloc_tcomplete_failed;
}
binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
t->debug_id = ++binder_last_id;
e->debug_id = t->debug_id;
//for non-oneway calls, remember the sending thread in the transaction's from field
if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread;
else
t->from = NULL;
t->sender_euid = task_euid(proc->tsk);
t->to_proc = target_proc;
t->to_thread = target_thread;
t->code = tr->code;
t->flags = tr->flags;
t->priority = task_nice(current);
//allocate space in the target process's binder buffer
t->buffer = binder_alloc_buf(target_proc, tr->data_size,
tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
if (t->buffer == NULL) {
return_error = BR_FAILED_REPLY;
goto err_binder_alloc_buf_failed;
}
t->buffer->allow_user_free = 0;
t->buffer->debug_id = t->debug_id;
t->buffer->transaction = t;
t->buffer->target_node = target_node;
trace_binder_transaction_alloc_buf(t->buffer);
offp = (binder_size_t *)(t->buffer->data +
ALIGN(tr->data_size, sizeof(void *)));
//copy the data into the target process's buffer
if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
tr->data.ptr.buffer, tr->data_size)) {
return_error = BR_FAILED_REPLY;
goto err_copy_data_failed;
}
if (copy_from_user(offp, (const void __user *)(uintptr_t)
tr->data.ptr.offsets, tr->offsets_size)) {
return_error = BR_FAILED_REPLY;
goto err_copy_data_failed;
}
off_end = (void *)offp + tr->offsets_size;
off_min = 0;
for (; offp < off_end; offp++) {
struct flat_binder_object *fp;
fp = (struct flat_binder_object *)(t->buffer->data + *offp);
off_min = *offp + sizeof(struct flat_binder_object);
switch (fp->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
struct binder_ref *ref;
//look up the binder_node for this binder object in the sending process's binder_proc
struct binder_node *node = binder_get_node(proc, fp->binder);
//if the binder_node does not exist yet, create one and insert it into the sending process's binder_proc node tree
if (node == NULL) {
node = binder_new_node(proc, fp->binder, fp->cookie);
if (node == NULL) {
return_error = BR_FAILED_REPLY;
goto err_binder_new_node_failed;
}
node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
}
……
//look up the binder_ref for this node in the target process; create one and insert it into the target's binder_ref red-black trees if missing
ref = binder_get_ref_for_node(target_proc, node);
if (ref == NULL) {
return_error = BR_FAILED_REPLY;
goto err_binder_get_ref_for_node_failed;
}
……
} break;
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
//look up the binder_ref for this handle
struct binder_ref *ref = binder_get_ref(proc, fp->handle);
if (ref == NULL) {
……
goto err_binder_get_ref_failed;
}
……
if (ref->node->proc == target_proc) {
if (fp->type == BINDER_TYPE_HANDLE)
fp->type = BINDER_TYPE_BINDER;
else
fp->type = BINDER_TYPE_WEAK_BINDER;
fp->binder = ref->node->ptr;
fp->cookie = ref->node->cookie;
binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
trace_binder_transaction_ref_to_node(t, ref);
binder_debug(BINDER_DEBUG_TRANSACTION,
"        ref %d desc %d -> node %d u%016llx\n",
ref->debug_id, ref->desc, ref->node->debug_id,
(u64)ref->node->ptr);
} else {
struct binder_ref *new_ref;
new_ref = binder_get_ref_for_node(target_proc, ref->node);
if (new_ref == NULL) {
return_error = BR_FAILED_REPLY;
goto err_binder_get_ref_for_node_failed;
}
fp->handle = new_ref->desc;
binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
trace_binder_transaction_ref_to_ref(t, ref,
new_ref);
}
} break;
case BINDER_TYPE_FD:……
default:……
}
}
if (reply) {
BUG_ON(t->buffer->async_transaction != 0);
binder_pop_transaction(target_thread, in_reply_to);
} else if (!(t->flags & TF_ONE_WAY)) {
BUG_ON(t->buffer->async_transaction != 0);
t->need_reply = 1;
t->from_parent = thread->transaction_stack;
thread->transaction_stack = t;
} else {
BUG_ON(target_node == NULL);
BUG_ON(t->buffer->async_transaction != 1);
if (target_node->has_async_transaction) {
target_list = &target_node->async_todo;
target_wait = NULL;
} else
target_node->has_async_transaction = 1;
}
t->work.type = BINDER_WORK_TRANSACTION;
//queue the binder_work on the target's todo list
list_add_tail(&t->work.entry, target_list);
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);
if (target_wait)
//wake up the target process
wake_up_interruptible(target_wait);
return;
}

binder_transaction is a very long function; here is what it mainly does:

  1. It uses target.handle to find the target process's binder_proc. If the handle is 0, it simply takes the global binder_context_mgr_node; as the previous article explained, when ServiceManager registers itself as the context manager the driver creates this dedicated global binder_node for it. If the handle is not 0, the driver looks the target binder up in the calling process's binder_proc. You might wonder how that reference got into binder_proc in the first place: when a Client asks ServiceManager for a Service via getService, ServiceManager looks the Service up in its list of registered services and returns it, and during that exchange the driver inserts a binder_ref to the Server's binder node into the Client's binder_proc; from then on, the Client can resolve the handle to the target binder through its own binder references.
  2. It allocates memory from the target binder_proc's buffer; as we saw in the previous article, that buffer is dedicated to caching transaction data.
  3. It inserts the sender's binder into the target binder_proc's red-black trees. Note that neither the Client's nor the Server's binder references are inserted by the processes themselves; the binder driver always inserts them.
  4. It creates a binder_transaction node and appends it to the target process's todo list.
  5. It tries to wake the target process up.

As described before, ServiceManager keeps calling ioctl to check whether another process has sent it data. Through the binder_transaction flow above, the driver has now written the user process's data into the target process's buffer, i.e. ServiceManager's. When ServiceManager's ioctl call finds that data has arrived, it starts processing it; let's see how ServiceManager responds to PING_TRANSACTION.

ServiceManager Handles the PING_TRANSACTION Message

ServiceManager keeps reading, writing, and parsing data inside its binder_loop function.

/frameworks/native/cmds/servicemanager/binder.c

void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
uint32_t readbuf[32];
bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;
readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(uint32_t));
//enter the for loop
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
//read and write data
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
if (res < 0) {
ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
break;
}
//parse the data
res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
if (res == 0) {
ALOGE("binder_loop: unexpected reply?!\n");
break;
}
if (res < 0) {
ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
break;
}
}
}

Next, the implementation of the parsing function **binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func)**.

/frameworks/native/cmds/servicemanager/binder.c

int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)
{
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;
while (ptr < end) {
uint32_t cmd = *(uint32_t *) ptr;
ptr += sizeof(uint32_t);
switch(cmd) {
case BR_NOOP:
break;
case BR_TRANSACTION_COMPLETE:
break;
case BR_INCREFS:
case BR_ACQUIRE:
case BR_RELEASE:
case BR_DECREFS:
ptr += sizeof(struct binder_ptr_cookie);
break;
case BR_TRANSACTION: {
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
if ((end - ptr) < sizeof(*txn)) {
return -1;
}
binder_dump_txn(txn);
if (func) {
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;
bio_init(&reply, rdata, sizeof(rdata), 4);
bio_init_from_txn(&msg, txn);
res = func(bs, txn, &msg, &reply);
if (txn->flags & TF_ONE_WAY) {
binder_free_buffer(bs, txn->data.ptr.buffer);
} else {
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
}
}
ptr += sizeof(*txn);
break;
}
case BR_REPLY:……
case BR_DEAD_BINDER:……
case BR_FAILED_REPLY:
r = -1;
break;
case BR_DEAD_REPLY:
r = -1;
break;
default:
return -1;
}
}
return r;
}

As mentioned in the previous article, when the binder driver runs binder_thread_read to deliver data, it prefixes the data with the appropriate reply code. For ordinary inter-process calls the most common code is BR_TRANSACTION, meaning data has arrived. When ServiceManager handles BR_TRANSACTION, it hands the binder_transaction_data straight to the func callback passed in as an argument, whose implementation is shown below.

/frameworks/native/cmds/servicemanager/service_manager.c

int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
size_t len;
uint32_t handle;
uint32_t strict_policy;
int allow_isolated;
if (txn->target.ptr != BINDER_SERVICE_MANAGER)
return -1;
//just return 0
if (txn->code == PING_TRANSACTION)
return 0;
……
switch(txn->code) {
case SVC_MGR_GET_SERVICE:
case SVC_MGR_CHECK_SERVICE:
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);
if (!handle)
break;
bio_put_ref(reply, handle);
return 0;
case SVC_MGR_ADD_SERVICE:
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
handle = bio_get_ref(msg);
allow_isolated = bio_get_uint32(msg) ? 1 : 0;
if (do_add_service(bs, s, len, handle, txn->sender_euid,
allow_isolated, txn->sender_pid))
return -1;
break;
case SVC_MGR_LIST_SERVICES: {
uint32_t n = bio_get_uint32(msg);
if (!svc_can_list(txn->sender_pid, txn->sender_euid)) {
ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
txn->sender_euid);
return -1;
}
si = svclist;
while ((n-- > 0) && si)
si = si->next;
if (si) {
bio_put_string16(reply, si->name);
return 0;
}
return -1;
}
default:
ALOGE("unknown code %d\n", txn->code);
return -1;
}
bio_put_uint32(reply, 0);
return 0;
}

As you can see, when the business code is PING_TRANSACTION the handler simply returns 0. The other business codes svcmgr_handler deals with are SVC_MGR_GET_SERVICE, SVC_MGR_CHECK_SERVICE, SVC_MGR_ADD_SERVICE, and SVC_MGR_LIST_SERVICES. The most commonly used is SVC_MGR_ADD_SERVICE, the code a Server uses when it calls ServiceManager's addService; we will look at it in detail later.

ServiceManager writes the 0 back through ioctl into the binder driver, which returns it to our user process. That completes the PING_TRANSACTION round trip. With that check passed, a BpBinder is created with handle 0 and handed up to the Java layer, which then uses that BpBinder to build the Java-level Proxy.

Let's continue.

Creating the ServiceManagerProxy

Back to the getService flow from the beginning:

/frameworks/base/core/java/android/os/ServiceManager.java

public static IBinder getService(String name) {
try {
IBinder service = sCache.get(name);
if (service != null) {
return service;
} else {
return Binder.allowBlocking(getIServiceManager().getService(name));
}
} catch (RemoteException e) {
Log.e(TAG, "error in getService", e);
}
return null;
}
private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}
// Find the service manager
sServiceManager = ServiceManagerNative
.asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
return sServiceManager;
}

We now know the two things the native method BinderInternal.getContextObject() does: it sends the PING_TRANSACTION message to ServiceManager, and it creates a BpBinder with handle 0. The next step is creating the Java-level ServiceManager proxy; here is the asInterface flow.

/frameworks/base/core/java/android/os/ServiceManagerNative.java

static public IServiceManager asInterface(IBinder obj)
{
if (obj == null) {
return null;
}
IServiceManager in =
(IServiceManager)obj.queryLocalInterface(descriptor);
if (in != null) {
return in;
}
return new ServiceManagerProxy(obj);
}

asInterface simply takes the IBinder as input and constructs a ServiceManagerProxy from it. At this point, obtaining ServiceManager and creating the ServiceManagerProxy are done. With this ServiceManagerProxy, the application process can talk to the ServiceManager process directly, because the proxy wraps ServiceManager's binder handle.

Obtaining the ActivityManagerService

Having obtained ServiceManager's binder handle and built a ServiceManagerProxy on top of ServiceManager's BpBinder, we can now ask it for ActivityManagerService's binder handle and build the corresponding AMS proxy in the application process; calling ServiceManager.getService(Context.ACTIVITY_SERVICE) returns the AMS proxy.

/frameworks/base/core/java/android/os/ServiceManager.java

public static IBinder getService(String name) {
try {
IBinder service = sCache.get(name);
if (service != null) {
return service;
} else {
return Binder.allowBlocking(getIServiceManager().getService(name));
}
} catch (RemoteException e) {
Log.e(TAG, "error in getService", e);
}
return null;
}

The ServiceManager here resolves to the ServiceManagerProxy, so let's go straight to the ServiceManagerProxy source to see how getService is implemented.

/frameworks/base/core/java/android/os/ServiceManagerNative.java

int GET_SERVICE_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION;
class ServiceManagerProxy implements IServiceManager {
……
public IBinder getService(String name) throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
IBinder binder = reply.readStrongBinder();
reply.recycle();
data.recycle();
return binder;
}
……
}

Here mRemote's transact is called with the business code GET_SERVICE_TRANSACTION. mRemote is the BpBinder created earlier; let's see how BpBinder's transact sends the data.

/frameworks/native/libs/binder/BpBinder.cpp

status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// Once a binder has died, it will never come back to life.
if (mAlive) {
//delegate to IPCThreadState to transmit the data
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}

As you can see, BpBinder still just calls IPCThreadState's transact to send the data. We have already covered that path, so here is only a quick recap.

IPCThreadState's transact packages the data in writeTransactionData, goes through waitForResponse and talkWithDriver, and finally calls ioctl to hand the data to the binder driver.

When the driver receives the data, it goes through binder_thread_write to find the target binder, and then through binder_transaction to copy the data into the target process's binder buffer.

The target process here is ServiceManager, which keeps looping, reading its buffer, and parsing and processing the data. Let's see how ServiceManager handles the GET_SERVICE_TRANSACTION protocol; we already know its handler is svcmgr_handler.

ServiceManager Handles the GET_SERVICE_TRANSACTION Protocol
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
size_t len;
uint32_t handle;
uint32_t strict_policy;
int allow_isolated;
if (txn->target.ptr != BINDER_SERVICE_MANAGER)
return -1;
if (txn->code == PING_TRANSACTION)
return 0;
strict_policy = bio_get_uint32(msg);
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
if ((len != (sizeof(svcmgr_id) / 2)) ||
memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
return -1;
}
if (sehandle && selinux_status_updated() > 0) {
struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
if (tmp_sehandle) {
selabel_close(sehandle);
sehandle = tmp_sehandle;
}
}
switch(txn->code) {
case SVC_MGR_GET_SERVICE:
case SVC_MGR_CHECK_SERVICE:
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);
if (!handle)
break;
bio_put_ref(reply, handle);
return 0;
case SVC_MGR_ADD_SERVICE:……
case SVC_MGR_LIST_SERVICES: ……
default:
ALOGE("unknown code %d\n", txn->code);
return -1;
}
bio_put_uint32(reply, 0);
return 0;
}

svcmgr_handler calls do_find_service to look the target service up; here the target service is "activity".

uint32_t do_find_service(const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
struct svcinfo *si = find_svc(s, len);
if (!si || !si->handle) {
return 0;
}
if (!si->allow_isolated) {
uid_t appid = uid % AID_USER;
if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
return 0;
}
}
if (!svc_can_find(s, len, spid, uid)) {
return 0;
}
return si->handle;
}
struct svcinfo *svclist = NULL;
struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
struct svcinfo *si;
for (si = svclist; si; si = si->next) {
if ((len == si->len) &&
!memcmp(s16, si->name, len * sizeof(uint16_t))) {
return si;
}
}
return NULL;
}

Once the target service is found, its handle, which is its binder reference, is returned. ServiceManager's ioctl sends the reply back through the binder driver to the Client; this again goes through binder_thread_write and then binder_transaction, which inserts the binder reference into the Client's binder_proc and writes the data into the Client's binder_proc buffer.

Creating the ActivityManager Proxy

Once AMS's binder information has been obtained from ServiceManager, calling IActivityManager.Stub.asInterface(b) creates the AMS proxy in the application process. IActivityManager is an AIDL file; at build time it generates a class containing both a Proxy and a Stub. The Proxy is used by the Client, the Stub by the Server.

/frameworks/base/core/java/android/app/IActivityManager.aidl

interface IActivityManager {
……
int startActivity(in IApplicationThread caller, in String callingPackage, in Intent intent,
in String resolvedType, in IBinder resultTo, in String resultWho, int requestCode,
int flags, in ProfilerInfo profilerInfo, in Bundle options);
……
}

The generated file, including the Proxy, looks like this.

public interface IActivityManager extends android.os.IInterface {
/**
* Local-side IPC implementation stub class.
*/
public static abstract class Stub extends android.os.Binder implements android.app.IActivityManager {
private static final java.lang.String DESCRIPTOR = "android.app.IActivityManager";
/**
* Construct the stub at attach it to the interface.
*/
public Stub() {
this.attachInterface(this, DESCRIPTOR);
}
/**
* Cast an IBinder object into an android.app.IActivityManager interface,
* generating a proxy if needed.
*/
public static android.app.IActivityManager asInterface(android.os.IBinder obj) {
if ((obj == null)) {
return null;
}
android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
if (((iin != null) && (iin instanceof android.app.IActivityManager))) {
return ((android.app.IActivityManager) iin);
}
return new android.app.IActivityManager.Stub.Proxy(obj);
}
@Override
public android.os.IBinder asBinder() {
return this;
}
@Override
public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException {
switch (code) {
case INTERFACE_TRANSACTION: ……
case TRANSACTION_openContentUri: ……
case TRANSACTION_handleApplicationCrash: ……
case TRANSACTION_startActivity: {
data.enforceInterface(DESCRIPTOR);
android.app.IApplicationThread _arg0;
_arg0 = android.app.IApplicationThread.Stub.asInterface(data.readStrongBinder());
java.lang.String _arg1;
_arg1 = data.readString();
android.content.Intent _arg2;
if ((0 != data.readInt())) {
_arg2 = android.content.Intent.CREATOR.createFromParcel(data);
} else {
_arg2 = null;
}
java.lang.String _arg3;
_arg3 = data.readString();
android.os.IBinder _arg4;
_arg4 = data.readStrongBinder();
java.lang.String _arg5;
_arg5 = data.readString();
int _arg6;
_arg6 = data.readInt();
int _arg7;
_arg7 = data.readInt();
android.app.ProfilerInfo _arg8;
if ((0 != data.readInt())) {
_arg8 = android.app.ProfilerInfo.CREATOR.createFromParcel(data);
} else {
_arg8 = null;
}
android.os.Bundle _arg9;
if ((0 != data.readInt())) {
_arg9 = android.os.Bundle.CREATOR.createFromParcel(data);
} else {
_arg9 = null;
}
int _result = this.startActivity(_arg0, _arg1, _arg2, _arg3, _arg4, _arg5, _arg6, _arg7, _arg8, _arg9);
reply.writeNoException();
reply.writeInt(_result);
return true;
}
……
}
}
private static class Proxy implements android.app.IActivityManager {
private android.os.IBinder mRemote;
Proxy(android.os.IBinder remote) {
mRemote = remote;
}
@Override
public int startActivity(android.app.IApplicationThread caller, java.lang.String callingPackage, android.content.Intent intent, java.lang.String resolvedType, android.os.IBinder resultTo, java.lang.String resultWho, int requestCode, int flags, android.app.ProfilerInfo profilerInfo, android.os.Bundle options) throws android.os.RemoteException {
android.os.Parcel _data = android.os.Parcel.obtain();
android.os.Parcel _reply = android.os.Parcel.obtain();
int _result;
try {
_data.writeInterfaceToken(DESCRIPTOR);
_data.writeStrongBinder((((caller != null)) ? (caller.asBinder()) : (null)));
_data.writeString(callingPackage);
if ((intent != null)) {
_data.writeInt(1);
intent.writeToParcel(_data, 0);
} else {
_data.writeInt(0);
}
_data.writeString(resolvedType);
_data.writeStrongBinder(resultTo);
_data.writeString(resultWho);
_data.writeInt(requestCode);
_data.writeInt(flags);
if ((profilerInfo != null)) {
_data.writeInt(1);
profilerInfo.writeToParcel(_data, 0);
} else {
_data.writeInt(0);
}
if ((options != null)) {
_data.writeInt(1);
options.writeToParcel(_data, 0);
} else {
_data.writeInt(0);
}
mRemote.transact(Stub.TRANSACTION_startActivity, _data, _reply, 0);
_reply.readException();
_result = _reply.readInt();
} finally {
_reply.recycle();
_data.recycle();
}
return _result;
}
}
}
}

As you can see, when the AMS Proxy's startActivity is called, it packs the arguments into a Parcel and then calls mRemote.transact. mRemote, as mentioned earlier, is the BpBinder built from the Server's binder handle. The rest of the flow is the same as before: IPCThreadState works its way down and finally hands the data to the binder driver via ioctl, the driver's binder_transaction copies it into ActivityManagerService's binder_proc buffer, and AMS's binder thread, which keeps looping over its buffer, finds the data and parses and handles it.

ActivityManagerService Starts the Activity

After all the previous steps we arrive at the final one: ActivityManagerService starting the Activity. As mentioned, AIDL generates a Proxy and a Stub, and the Proxy is used by the Client. On the Client side we saw that the Proxy wraps AMS's binder handle, and that calling its startActivity wraps the data layer by layer until ioctl pushes it through the binder driver into AMS. So how does AMS use the Stub? ActivityManagerService simply extends IActivityManager.Stub.

public class ActivityManagerService extends IActivityManager.Stub
implements Watchdog.Monitor, BatteryStatsImpl.BatteryCallback {
……
@Override
public final int startActivity(IApplicationThread caller, String callingPackage,
Intent intent, String resolvedType, IBinder resultTo, String resultWho, int requestCode,
int startFlags, ProfilerInfo profilerInfo, Bundle bOptions) {
return startActivityAsUser(caller, callingPackage, intent, resolvedType, resultTo,
resultWho, requestCode, startFlags, profilerInfo, bOptions,
UserHandle.getCallingUserId());
}
……
}

Now let's look at the IActivityManager.Stub implementation.

public interface IActivityManager extends android.os.IInterface {
/**
* Local-side IPC implementation stub class.
*/
public static abstract class Stub extends android.os.Binder implements android.app.IActivityManager {
private static final java.lang.String DESCRIPTOR = "android.app.IActivityManager";
/**
* Construct the stub at attach it to the interface.
*/
public Stub() {
this.attachInterface(this, DESCRIPTOR);
}
/**
* Cast an IBinder object into an android.app.IActivityManager interface,
* generating a proxy if needed.
*/
public static android.app.IActivityManager asInterface(android.os.IBinder obj) {
if ((obj == null)) {
return null;
}
android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
if (((iin != null) && (iin instanceof android.app.IActivityManager))) {
return ((android.app.IActivityManager) iin);
}
return new android.app.IActivityManager.Stub.Proxy(obj);
}
@Override
public android.os.IBinder asBinder() {
return this;
}
@Override
public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException {
switch (code) {
case INTERFACE_TRANSACTION: ……
case TRANSACTION_openContentUri: ……
case TRANSACTION_handleApplicationCrash: ……
case TRANSACTION_startActivity: {
data.enforceInterface(DESCRIPTOR);
android.app.IApplicationThread _arg0;
_arg0 = android.app.IApplicationThread.Stub.asInterface(data.readStrongBinder());
java.lang.String _arg1;
_arg1 = data.readString();
android.content.Intent _arg2;
if ((0 != data.readInt())) {
_arg2 = android.content.Intent.CREATOR.createFromParcel(data);
} else {
_arg2 = null;
}
java.lang.String _arg3;
_arg3 = data.readString();
android.os.IBinder _arg4;
_arg4 = data.readStrongBinder();
java.lang.String _arg5;
_arg5 = data.readString();
int _arg6;
_arg6 = data.readInt();
int _arg7;
_arg7 = data.readInt();
android.app.ProfilerInfo _arg8;
if ((0 != data.readInt())) {
_arg8 = android.app.ProfilerInfo.CREATOR.createFromParcel(data);
} else {
_arg8 = null;
}
android.os.Bundle _arg9;
if ((0 != data.readInt())) {
_arg9 = android.os.Bundle.CREATOR.createFromParcel(data);
} else {
_arg9 = null;
}
int _result = this.startActivity(_arg0, _arg1, _arg2, _arg3, _arg4, _arg5, _arg6, _arg7, _arg8, _arg9);
reply.writeNoException();
reply.writeInt(_result);
return true;
}
……
}
}
}
}

As described earlier, after the Client initializes binder in onZygoteInit it enters an endless loop, receives other processes' data in talkWithDriver, and hands it to executeCommand. AMS follows exactly the same flow: its binder thread receives the incoming data in talkWithDriver, passes it through executeCommand, and the data finally reaches the Stub's onTransact for dispatch.

When onTransact handles TRANSACTION_startActivity it calls this.startActivity() directly, and at that point AMS formally begins the Activity launch flow.
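The same Stub-side dispatch applies to any AIDL service, not just AMS. As a minimal sketch at the application level (reusing the hypothetical IRemoteCounter interface from the client example above, so the names are made up), the server simply returns a Stub subclass from onBind; the generated onTransact unpacks the Parcel and calls our implementation on a binder thread:

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

// Server side of the hypothetical IRemoteCounter AIDL interface.
public class CounterService extends Service {
    private int mCount;

    private final IRemoteCounter.Stub mBinder = new IRemoteCounter.Stub() {
        @Override
        public int increment(int by) {
            // Runs on a binder thread of the server process, not on its main thread,
            // which is why shared state needs synchronization.
            synchronized (CounterService.this) {
                mCount += by;
                return mCount;
            }
        }
    };

    @Override
    public IBinder onBind(Intent intent) {
        // Handing this IBinder to a client is what makes the driver create the
        // binder_node in this process and a binder_ref in the client's binder_proc.
        return mBinder;
    }
}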

Server

Having gone through ServiceManager and the Client in detail, we already know most of what the Server does; all three follow the same pattern: open the binder driver, mmap the memory, and then loop forever calling ioctl to read and write data. AMS in the system_server process and an ordinary app in a user process open binder and send messages in almost exactly the same way, both through ProcessState and IPCThreadState.

Opening the Binder Driver

Let's see how the system_server process that hosts AMS opens binder. Once Zygote has started, it launches the system_server process from its main function.

/frameworks/base/core/java/com/android/internal/os/ZygoteInit.java

public static void main(String argv[]) {
……
if (startSystemServer) {
startSystemServer(abiList, socketName, zygoteServer);
}
……
}

Zygote's main calls startSystemServer to launch SystemServer.

/frameworks/base/core/java/com/android/internal/os/ZygoteInit.java

private static boolean startSystemServer(String abiList, String socketName) throws MethodAndArgsCaller, RuntimeException {
...
//prepare the arguments
String args[] = {
"--setuid=1000",
"--setgid=1000",
"--setgroups=1001,1002,1003,1004,1005,1006,1007,1008,1009,1010,1018,1021,1032,3001,3002,3003,3006,3007",
"--capabilities=" + capabilities + "," + capabilities,
"--nice-name=system_server",
"--runtime-args",
"com.android.server.SystemServer",
};
ZygoteConnection.Arguments parsedArgs = null;
int pid;
try {
//parse the arguments into the target format
parsedArgs = new ZygoteConnection.Arguments(args);
ZygoteConnection.applyDebuggerSystemProperty(parsedArgs);
ZygoteConnection.applyInvokeWithSystemProperty(parsedArgs);
// fork the child process; this child is the system_server process
pid = Zygote.forkSystemServer(
parsedArgs.uid, parsedArgs.gid,
parsedArgs.gids,
parsedArgs.debugFlags,
null,
parsedArgs.permittedCapabilities,
parsedArgs.effectiveCapabilities);
} catch (IllegalArgumentException ex) {
throw new RuntimeException(ex);
}
//we are now in the child process, system_server
if (pid == 0) {
if (hasSecondZygote(abiList)) {
waitForSecondaryZygote(socketName);
}
// finish the remaining setup of the system_server process
handleSystemServerProcess(parsedArgs);
}
return true;
}

Here Zygote forks out the system_server process; once the fork completes, handleSystemServerProcess runs.

/frameworks/base/core/java/com/android/internal/os/ZygoteInit.java

private static void handleSystemServerProcess( ZygoteConnection.Arguments parsedArgs) throws ZygoteInit.MethodAndArgsCaller {
closeServerSocket(); // close the socket inherited from the parent zygote process
Os.umask(S_IRWXG | S_IRWXO);
if (parsedArgs.niceName != null) {
Process.setArgV0(parsedArgs.niceName); // set the current process name to "system_server"
}
final String systemServerClasspath = Os.getenv("SYSTEMSERVERCLASSPATH");
if (systemServerClasspath != null) {
//run dex optimization
performSystemServerDexOpt(systemServerClasspath);
}
……
ClassLoader cl = null;
if (systemServerClasspath != null) {
// create the class loader and attach it to the current thread
cl = new PathClassLoader(systemServerClasspath, ClassLoader.getSystemClassLoader());
Thread.currentThread().setContextClassLoader(cl);
}
RuntimeInit.zygoteInit(parsedArgs.targetSdkVersion, parsedArgs.remainingArgs, cl);
}

handleSystemServerProcess mostly performs initialization for the system_server process, and finally calls RuntimeInit.zygoteInit.

/frameworks/base/core/java/com/android/internal/os/RuntimeInit.java

public static final void zygoteInit(int targetSdkVersion, String[] argv, ClassLoader classLoader) throws ZygoteInit.MethodAndArgsCaller {
commonInit(); // common initialization
nativeZygoteInit(); // triggers the onZygoteInit callback
applicationInit(targetSdkVersion, argv, classLoader); // application initialization
}

Here we only care about the nativeZygoteInit call inside zygoteInit, which is a native method.

/frameworks/base/core/jni/AndroidRuntime.cpp

static void com_android_internal_os_RuntimeInit_nativeZygoteInit(JNIEnv* env, jobject clazz) {
gCurRuntime->onZygoteInit();
}

gCurRuntime is the runtime of the current process, i.e. the system_server process.

/frameworks/base/cmds/app_process/app_main.cpp

virtual void onZygoteInit()
{
sp<ProcessState> proc = ProcessState::self();
proc->startThreadPool();
}

And here ProcessState shows up again in onZygoteInit. Once onZygoteInit finishes, system_server's binder setup is complete and the process has entered the endless loop that keeps reading and writing binder data.

AMS Registers Its Services

Next, how AMS registers its services with ServiceManager. After the SystemServer process starts, its main function starts the various services.

/frameworks/base/services/java/com/android/server/SystemServer.java

public static void main(String[] args) {
new SystemServer().run();
}
private void run() {
mSystemServiceManager = new SystemServiceManager(mSystemContext);
mSystemServiceManager.setRuntimeRestarted(mRuntimeRestart);
……
// Start services.
traceBeginAndSlog("StartServices");
startBootstrapServices();
startCoreServices();
startOtherServices();
SystemServerInitThreadPool.shutdown();
……
}

ActivityManagerService is one of the bootstrap services.

/frameworks/base/services/java/com/android/server/SystemServer.java

private void startBootstrapServices() {
……
// Activity manager runs the show.
traceBeginAndSlog("StartActivityManager");
mActivityManagerService = mSystemServiceManager.startService(
ActivityManagerService.Lifecycle.class).getService();
mActivityManagerService.setSystemServiceManager(mSystemServiceManager);
……
// Set up the Application instance for the system process and get started.
mActivityManagerService.setSystemProcess();
……
}

Once AMS is up, startBootstrapServices calls setSystemProcess.

/frameworks/base/services/core/java/com/android/server/am/ActivityManagerService.java

public void setSystemProcess() {
try {
ServiceManager.addService(Context.ACTIVITY_SERVICE, this, true);
ServiceManager.addService(ProcessStats.SERVICE_NAME, mProcessStats);
ServiceManager.addService("meminfo", new MemBinder(this));
ServiceManager.addService("gfxinfo", new GraphicsBinder(this));
ServiceManager.addService("dbinfo", new DbBinder(this));
if (MONITOR_CPU_USAGE) {
ServiceManager.addService("cpuinfo", new CpuBinder(this));
}
ServiceManager.addService("permission", new PermissionController(this));
ServiceManager.addService("processinfo", new ProcessInfoService(this));
ApplicationInfo info = mContext.getPackageManager().getApplicationInfo(
"android", STOCK_PM_FLAGS | MATCH_SYSTEM_ONLY);
mSystemThread.installSystemApplicationInfo(info, getClass().getClassLoader());
synchronized (this) {
ProcessRecord app = newProcessRecordLocked(info, info.processName, false, 0);
app.persistent = true;
app.pid = MY_PID;
app.maxAdj = ProcessList.SYSTEM_ADJ;
app.makeActive(mSystemThread.getApplicationThread(), mProcessStats);
synchronized (mPidsSelfLocked) {
mPidsSelfLocked.put(app.pid, app);
}
updateLruProcessLocked(app, false, null);
updateOomAdjLocked();
}
} catch (PackageManager.NameNotFoundException e) {
throw new RuntimeException(
"Unable to find android system package", e);
}
}

In setSystemProcess we can see calls to ServiceManager.addService registering the activity, meminfo, gfxinfo, and other services. This ServiceManager is the same class the Client side used earlier, so as the addService implementation shows, it likewise first obtains the ServiceManager Proxy and then calls addService on that Proxy.

/frameworks/base/core/java/android/os/ServiceManager.java

public static void addService(String name, IBinder service) {
try {
getIServiceManager().addService(name, service, false);
} catch (RemoteException e) {
Log.e(TAG, "error in addService", e);
}
}

getIServiceManager() obtains the ServiceManager Proxy.

/frameworks/base/core/java/android/os/ServiceManager.java

private static IServiceManager getIServiceManager() {
if (sServiceManager != null) {
return sServiceManager;
}
// Find the service manager
sServiceManager = ServiceManagerNative
.asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
return sServiceManager;
}

Next, the addService implementation in ServiceManagerNative (the ServiceManagerProxy).

/frameworks/base/core/java/android/os/ServiceManagerNative.java

int ADD_SERVICE_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION+2;
public void addService(String name, IBinder service, boolean allowIsolated)
throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
data.writeStrongBinder(service);
data.writeInt(allowIsolated ? 1 : 0);
mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
reply.recycle();
data.recycle();
}

The business code used here is ADD_SERVICE_TRANSACTION; the rest of the flow has all been covered above, so I won't repeat it.

ServiceManager Handles the Service Registration Request

Let's see how ServiceManager handles service registration, again going straight to its business handler, svcmgr_handler.

/frameworks/native/cmds/servicemanager/service_manager.c

enum {
/* Must match definitions in IBinder.h and IServiceManager.h */
PING_TRANSACTION
= B_PACK_CHARS('_','P','N','G'),
SVC_MGR_GET_SERVICE = 1,
SVC_MGR_CHECK_SERVICE,
SVC_MGR_ADD_SERVICE,
SVC_MGR_LIST_SERVICES,
};
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
……
switch(txn->code) {
case SVC_MGR_GET_SERVICE:
case SVC_MGR_CHECK_SERVICE:……
case SVC_MGR_ADD_SERVICE:
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
handle = bio_get_ref(msg);
allow_isolated = bio_get_uint32(msg) ? 1 : 0;
if (do_add_service(bs, s, len, handle, txn->sender_euid,
allow_isolated, txn->sender_pid))
return -1;
break;
case SVC_MGR_LIST_SERVICES:……
default:
return -1;
}
bio_put_uint32(reply, 0);
return 0;
}

The SVC_MGR_ADD_SERVICE case in svcmgr_handler corresponds to the ADD_SERVICE_TRANSACTION code AMS used. Here is do_add_service.

/frameworks/native/cmds/servicemanager/service_manager.c

int do_add_service(struct binder_state *bs,
const uint16_t *s, size_t len,
uint32_t handle, uid_t uid, int allow_isolated,
pid_t spid)
{
struct svcinfo *si;
if (!handle || (len == 0) || (len > 127))
return -1;
if (!svc_can_register(s, len, spid, uid)) {
return -1;
}
si = find_svc(s, len);
if (si) {
if (si->handle) {
ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
str8(s, len), handle, uid);
svcinfo_death(bs, si);
}
si->handle = handle;
} else {
si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
if (!si) {
ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
str8(s, len), handle, uid);
return -1;
}
si->handle = handle;
si->len = len;
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
si->death.func = (void*) svcinfo_death;
si->death.ptr = si;
si->allow_isolated = allow_isolated;
si->next = svclist;
svclist = si;
}
binder_acquire(bs, handle);
binder_link_to_death(bs, handle, &si->death);
return 0;
}

do_add_service mainly does the following (a rough sketch of this name-to-handle registry follows below):

  1. It checks whether a service with this name is already registered; if it is, the old registration is released and its handle is overwritten with the new one
  2. If it is not, a svcinfo structure is created for the service, its binder handle is recorded (the handle was created in the binder driver; as mentioned in the binder_transaction discussion, if the target binder_node does not exist yet the driver creates one for it), and the entry is added to the list
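svclist is just a singly linked list keyed by the service name. As a rough Java analogy of what ServiceManager's registry amounts to (hypothetical code, not an Android API), it behaves like a name-to-handle map:

import java.util.HashMap;
import java.util.Map;

// Analogy of ServiceManager's svclist: a registry mapping service names to binder handles.
// In the real implementation the handle is the driver-assigned reference number, and these
// operations are triggered by SVC_MGR_ADD_SERVICE and SVC_MGR_GET_SERVICE respectively.
final class ServiceRegistry {
    private final Map<String, Integer> handles = new HashMap<>();

    // do_add_service: a new registration for an existing name overwrites the old handle
    void addService(String name, int handle) {
        handles.put(name, handle);
    }

    // do_find_service: 0 means "not found"
    int findService(String name) {
        return handles.getOrDefault(name, 0);
    }
}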

Conclusion

That completes our walkthrough of how Binder works. As I said at the very beginning, the point of studying Binder in depth is not just knowing how to use it, but learning its design: how it is architected, how it decouples components, and how it guarantees security and performance. In Binder's design we can also see ideas borrowed from elsewhere: ServiceManager plays a role similar to DNS in networking, and the layering of the Binder driver protocol over the business protocol resembles the layered protocol encapsulation of the network stack.

Finding the common threads across different technologies, distilling what is unique and what is excellent, and then borrowing, summarizing, and applying it: that is the path to growing as an architect.
