I used to wonder why, even though both are binder interfaces, Context.getSystemService can return a service directly while bindService needs an asynchronous callback. In this post I finally trace the whole path, from the framework source all the way down to the binder driver source, and along the way look at what actually happens underneath when a binder interface is called.
1. From getSystemService to the native getContextObject
ContextImpl.java
```java
@Override
public Object getSystemService(String name) {
    return SystemServiceRegistry.getSystemService(this, name);
}
```
SystemServiceRegistry.java
```java
public static Object getSystemService(ContextImpl ctx, String name) {
    ServiceFetcher<?> fetcher = SYSTEM_SERVICE_FETCHERS.get(name);
    return fetcher != null ? fetcher.getService(ctx) : null;
}
```
SYSTEM_SERVICE_FETCHERS is a HashMap whose values are ServiceFetcher instances (an abstract interface). They are implemented when the registry is initialized, as the following code shows:
```java
static abstract interface ServiceFetcher<T> {
    T getService(ContextImpl ctx);
}
// ... omitted ...
registerService(Context.WIFI_SERVICE, WifiManager.class,
        new CachedServiceFetcher<WifiManager>() {
    @Override
    public WifiManager createService(ContextImpl ctx) throws ServiceNotFoundException {
        IBinder b = ServiceManager.getServiceOrThrow(Context.WIFI_SERVICE);
        IWifiManager service = IWifiManager.Stub.asInterface(b);
        return new WifiManager(ctx.getOuterContext(), service,
                ConnectivityThread.getInstanceLooper());
    }});
```
registerService simply puts the CachedServiceFetcher into SYSTEM_SERVICE_FETCHERS, so every lookup can be served from this cache. That is why the second call to getSystemService is very cheap, an O(1) lookup. All system services eventually go through this step.
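To make the caching concrete, here is a heavily simplified sketch of the fetch-and-cache pattern; the real CachedServiceFetcher in SystemServiceRegistry stores the instance in a per-ContextImpl cache array and handles synchronization and ServiceNotFoundException, so treat the class and field below as illustrative only:

```java
// Illustrative sketch, not the real framework class.
abstract class SimpleCachedServiceFetcher<T> implements ServiceFetcher<T> {
    private T mCachedInstance; // illustrative field, not the real storage

    @Override
    public synchronized T getService(ContextImpl ctx) {
        if (mCachedInstance == null) {
            // Only the first call pays for creating the manager; this is where
            // the binder lookup (ServiceManager.getService) actually happens.
            mCachedInstance = createService(ctx);
        }
        return mCachedInstance; // later calls are just a field read, O(1)
    }

    public abstract T createService(ContextImpl ctx);
}
```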
frameworks/base/core/java/android/os/ServiceManager.java
```java
public static IBinder getServiceOrThrow(String name) throws ServiceNotFoundException {
    final IBinder binder = getService(name);
    if (binder != null) {
        return binder;
    } else {
        throw new ServiceNotFoundException(name);
    }
}
```
There is another cache here; on a miss the call falls through to rawGetService.
```java
public static IBinder getService(String name) {
    try {
        IBinder service = sCache.get(name);
        if (service != null) {
            return service;
        } else {
            return Binder.allowBlocking(rawGetService(name));
        }
    } catch (RemoteException e) {
        Log.e(TAG, "error in getService", e);
    }
    return null;
}
```
/frameworks/base/core/java/android/os/Binder.java
allowBlocking always takes the BinderProxy branch here: system services are implemented in the system process, so what an app gets back is necessarily a BinderProxy object. The branch only assigns a flag, so we don't need to dwell on it.
```java
public static IBinder allowBlocking(IBinder binder) {
    try {
        if (binder instanceof BinderProxy) {
            ((BinderProxy) binder).mWarnOnBlocking = false;
        } else if (binder != null && binder.getInterfaceDescriptor() != null
                && binder.queryLocalInterface(binder.getInterfaceDescriptor()) == null) {
            Log.w(TAG, "Unable to allow blocking on interface " + binder);
        }
    } catch (RemoteException ignored) {
    }
    return binder;
}
```
Back to the rawGetService method mentioned above: it is just a thin wrapper around getIServiceManager, so I won't paste it again. getIServiceManager is implemented as follows:
```java
private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative
            .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
    return sServiceManager;
}
```
As you can see, to obtain the IServiceManager it calls BinderInternal.getContextObject(). getContextObject is a native method, and the IBinder it returns is exactly the remote interface of the IServiceManager.
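For context, ServiceManagerNative.asInterface wraps that IBinder (the BinderProxy for handle 0) in a proxy whose getService is an ordinary synchronous binder transaction, which is the heart of why getSystemService needs no callback. The sketch below is modeled on the classic ServiceManagerProxy; the transaction code, descriptor string and class name are assumptions, and the real implementation differs between Android versions:

```java
import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

// Simplified, hypothetical sketch of a ServiceManagerProxy-style getService.
class ServiceManagerProxySketch {
    static final int GET_SERVICE_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION; // assumed code
    private final IBinder mRemote; // the BinderProxy for handle 0

    ServiceManagerProxySketch(IBinder remote) {
        mRemote = remote;
    }

    public IBinder getService(String name) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeInterfaceToken("android.os.IServiceManager");
            data.writeString(name);
            // Synchronous: the calling thread blocks in the driver until
            // servicemanager writes its reply.
            mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
            return reply.readStrongBinder();
        } finally {
            reply.recycle();
            data.recycle();
        }
    }
}
```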
2. From getContextObject to talkWithDriver
frameworks/base/core/jni/android_util_Binder.cpp
```cpp
static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}
```
ProcessState::self() is a singleton that is initialized when the process starts; android_os_BinderInternal_getContextObject above is the native implementation of the Java method getContextObject.
frameworks/base/core/jni/android_util_Binder.cpp
```cpp
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;

    // Not the BinderProxy case: this process already has a Java entity object,
    // so return that entity object directly.
    if (val->checkSubclass(&gBinderOffsets)) {
        // It's a JavaBBinder created by ibinderForJavaObject. Already has Java object.
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }

    BinderProxyNativeData* nativeData = new BinderProxyNativeData();
    nativeData->mOrgue = new DeathRecipientList;
    nativeData->mObject = val;

    // Construct the BinderProxy from the IBinder.
    jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
            gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
    // ... omitted ...
    return object;
}
```
javaObjectForIBinder creates the corresponding BinderProxy Java object for the upper layer from the given IBinder.
system/libhwbinder/ProcessState.cpp
```cpp
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    // Look the IBinder up among the existing entries; if there is no entry yet,
    // an empty one is returned.
    handle_entry* e = lookupHandleLocked(handle);

    if (e != nullptr) {
        IBinder* b = e->binder;
        // If the entry is empty, a BpHwBinder needs to be created.
        if (b == nullptr || !e->refs->attemptIncWeak(this)) {
            b = new BpHwBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
```
Here the handle is used to look up an existing proxy in the cache; if there is none, a new BpHwBinder object is created.
system/libhwbinder/BpHwBinder.cpp
```cpp
BpHwBinder::BpHwBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(nullptr)
{
    ALOGV("Creating BpHwBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle, this);
}
```
The BpHwBinder constructor calls into IPCThreadState::self() to register a weak reference for the current BpHwBinder. That registration eventually tells the binder driver: "I am adding a weak reference to the object with this handle; please take it into account before the entity object in the other process is reclaimed."
system/libhwbinder/IPCThreadState.cpp
```cpp
void IPCThreadState::incWeakHandle(int32_t handle, BpHwBinder *proxy)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    // mOut buffers the content that will later be written to the binder driver.
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
    // Create a temp reference until the driver has handled this command.
    proxy->getWeakRefs()->incWeak(mProcess.get());
    mPostWriteWeakDerefs.push(proxy->getWeakRefs());
}
```
incWeakHandle only buffers a BC_INCREFS command into the write buffer destined for the driver; the command is not written to the driver at this point, but will only be flushed when a transaction is executed. Calling any AIDL function from the Java layer triggers such a transaction (the generated code is not listed here).
frameworks/native/libs/binder/BpBinder.cpp
```cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // ... omitted ...
    if (mAlive) {
        // Execute the transaction.
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
```
When the upper layer calls getSystemService, the call goes through the Java-level transact (BinderProxy, defined alongside Binder.java) and lands in the transact method of BpBinder.cpp, which in turn calls transact on the IPCThreadState singleton.
IPCThreadState
```cpp
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err;

    flags |= TF_ACCEPT_FDS;

    // Write the Parcel data into mOut; nothing is written to the driver yet.
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, nullptr);

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (UNLIKELY(mCallRestriction != ProcessState::CallRestriction::NONE)) {
            if (mCallRestriction == ProcessState::CallRestriction::ERROR_IF_NOT_ONEWAY) {
                ALOGE("Process making non-oneway call (code: %u) but is restricted.", code);
                CallStack::logStack("non-oneway call", CallStack::getCurrent(10).get(),
                        ANDROID_LOG_ERROR);
            } else /* FATAL_IF_NOT_ONEWAY */ {
                LOG_ALWAYS_FATAL("Process may not make oneway calls (code: %u).", code);
            }
        }

        // Despite the name waitForResponse, it performs both the write and the read.
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        err = waitForResponse(nullptr, nullptr);
    }

    return err;
}
```
transact first uses writeTransactionData to fill the buffer with the parcelled data, and then talkWithDriver, called from inside waitForResponse, writes the buffered BC_INCREFS command together with the transaction data to the driver.
```cpp
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        switch (cmd) {
        // ... omitted ...
        case BR_REPLY:
            {
                // reply is a Parcel; once the returned data has been filled in,
                // read calls on it yield the aidl return value. Every Java-level
                // return value arrives through this path.
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    }
                }
                // ... omitted ...
            }
            goto finish;

        default:
            // Handle other commands returned by the driver, such as reference-count
            // increments/decrements on local binder entity objects.
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    // ... omitted ...
    return err;
}
```
talkWithDriver first writes the data buffered in mOut to the driver, which in our flow means the reference-increment command plus the transaction data. Once the remote side has finished, a BR_REPLY command is read from mIn and the data that follows it is filled into the reply Parcel object (both mOut and mIn are Parcel objects).
```cpp
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD < 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        // Read from or write to the driver; which one happens depends on write_size
        // and read_size. In our flow this first writes the reference-increment
        // command and the getSystemService transaction data.
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        if (mProcess->mDriverFD < 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);

    // ... omitted ...
    return err;
}
```
talkWithDriver's parameter defaults to true (the default value is declared in the header). This is where data is read from or written to the driver; whether a read or a write happens depends on write_size and read_size. In our current flow it first writes the reference-increment command and the getSystemService transaction data, then reads the returned data into mIn. waitForResponse turns what was read into Parcel data, the Parcel is handed up layer by layer to the Java side, and the parcel.read* calls finally yield the function's return value, which is the BinderProxy of the requested service.
That completes the analysis of the call flow above the driver; next comes the driver layer.
3. Command reads/writes and transaction handling in the driver
From the previous section we know that during this call the upper layer performs the following key operations:
a. Write the BC_INCREFS command with handle 0 to the driver
b. Write the getSystemService transaction data to the driver
c. Wait for the driver to return, read the result and turn it into a Parcel object
Now let's look step by step at how these operations are actually carried out inside the driver.
All of the following code runs in kernel space, which is shared by (and identical for) all processes, but data must be transferred with copy_from_user or copy_to_user.
Continuing from the previous section: talkWithDriver issues an ioctl, which actually ends up in the driver's binder_ioctl implementation.
```c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;

    // Suspend the user-space thread.
    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        return ret;

    mutex_lock(&binder_lock);
    thread = binder_get_thread(proc);
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    // Dispatch the command.
    switch (cmd) {
    // This is where talkWithDriver from the previous section arrives.
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        // Copy the written data from user space.
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        // Write path: both the transaction write and BC_INCREFS from the
        // previous section end up here.
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread,
                    (void __user *)bwr.write_buffer,
                    bwr.write_size, &bwr.write_consumed);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        // Read path.
        if (bwr.read_size > 0) {
            ret = binder_thread_read(proc, thread,
                    (void __user *)bwr.read_buffer,
                    bwr.read_size, &bwr.read_consumed,
                    filp->f_flags & O_NONBLOCK);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    // ... omitted ...
    case BINDER_THREAD_EXIT:
        if (binder_debug_mask & BINDER_DEBUG_THREADS)
            printk(KERN_INFO "binder: %d:%d exit\n", proc->pid, thread->pid);
        binder_free_thread(proc, thread);
        thread = NULL;
        break;
    default:
        ret = -EINVAL;
        goto err;
    }
    ret = 0;
err:
    // Wake up the user-space thread.
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    mutex_unlock(&binder_lock);
    wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret && ret != -ERESTARTSYS)
        printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n",
               proc->pid, current->pid, cmd, arg, ret);
    return ret;
}
```
As you can see, when the upper layer calls talkWithDriver and issues the ioctl, the thread is suspended and the incoming data is then written. The write is handled by binder_thread_write, whose abridged implementation is as follows:
```c
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                        void __user *buffer, int size, signed long *consumed)
{
    uint32_t cmd;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    // Read commands out of the incoming data.
    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        // ... other code omitted ...
        switch (cmd) {
        case BC_INCREFS:
        case BC_ACQUIRE:
        case BC_RELEASE:
        case BC_DECREFS: {
            uint32_t target;
            struct binder_ref *ref;
            const char *debug_string;

            if (get_user(target, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
            // BC_INCREFS on the node with handle == 0 takes this branch, because
            // the first binder_ref in any process must be the one for servicemanager.
            if (target == 0 && binder_context_mgr_node &&
                (cmd == BC_INCREFS || cmd == BC_ACQUIRE)) {
                // Create the binder_ref for servicemanager in the current process.
                ref = binder_get_ref_for_node(proc, binder_context_mgr_node);
                if (ref->desc != target) {
                    binder_user_error("binder: %d:%d tried to acquire "
                        "reference to desc 0, got %d instead\n",
                        proc->pid, thread->pid, ref->desc);
                }
            } else
                ref = binder_get_ref(proc, target);
            if (ref == NULL) {
                binder_user_error("binder: %d:%d refcount change on invalid ref %d\n",
                    proc->pid, thread->pid, target);
                break;
            }
            // ... other code omitted ...
            break;
        }
        // ... other code omitted ...
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            // binder_transaction is the core routine that carries out a
            // transaction between two processes.
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        // ... other cases omitted ...
        }
    }
    return 0;
}
```
As shown above, when BC_INCREFS is processed, the reference node is looked up in the current process's refs_by_desc red-black tree. When the handle is not 0 that reference must already exist; when the handle is 0, the reference can be created on the spot. Why? Because binder_context_mgr_node is a node registered with the driver when servicemanager starts. It is a global variable, so any process can reach it when it calls into the driver (all processes share kernel space). For every other node the reference is not allowed to be missing, because a reference to a node must have been created in advance by another process; see the core method binder_transaction for the details.
```c
static void binder_transaction(struct binder_proc *proc,
                               struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;     // transaction structure used to hold the transaction data
    struct binder_work *tcomplete;
    size_t *offp, *off_end;
    struct binder_proc *target_proc;              // target process the transaction is written to
    struct binder_thread *target_thread = NULL;   // thread that needs to be woken up
    struct binder_node *target_node = NULL;       // target node to write to; it must be an entity object,
                                                  // since you cannot write into a mere binder_ref reference
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    struct binder_transaction *in_reply_to = NULL;
    struct binder_transaction_log_entry *e;
    uint32_t return_error;

    e = binder_transaction_log_add(&binder_transaction_log);
    e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
    e->from_proc = proc->pid;
    e->from_thread = thread->pid;
    e->target_handle = tr->target.handle;
    e->data_size = tr->data_size;
    e->offsets_size = tr->offsets_size;

    // The transaction is a reply going from the service side back to the client.
    if (reply) {
        // Get the binder_transaction recorded when the call was made.
        in_reply_to = thread->transaction_stack;
        // ... omitted ...
        // Pop the transaction stack; nested transact calls keep stack information.
        thread->transaction_stack = in_reply_to->to_parent;
        target_thread = in_reply_to->from;
        // ... omitted ...
        target_proc = target_thread->proc;
    } else {
        // The transaction goes from the client side to the service side.
        // Use the handle of the target binder_node to get the binder_ref, then
        // obtain the entity object (target_node) from that reference.
        if (tr->target.handle) {
            // Not the servicemanager process.
            struct binder_ref *ref;
            ref = binder_get_ref(proc, tr->target.handle);
            // ... NULL check omitted ...
            target_node = ref->node;
        } else {
            // The servicemanager object.
            target_node = binder_context_mgr_node;
            if (target_node == NULL) {
                return_error = BR_DEAD_REPLY;
                goto err_no_context_mgr_node;
            }
        }
        e->to_node = target_node->debug_id;
        target_proc = target_node->proc;
        // ... one-way handling omitted ...
    }
    if (target_thread) {
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait;
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    e->to_proc = target_proc->pid;

    // Allocate the binder_transaction structure.
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    if (t == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_t_failed;
    }
    binder_stats.obj_created[BINDER_STAT_TRANSACTION]++;

    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    if (tcomplete == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_alloc_tcomplete_failed;
    }
    binder_stats.obj_created[BINDER_STAT_TRANSACTION_COMPLETE]++;

    t->debug_id = ++binder_last_id;
    e->debug_id = t->debug_id;

    // ... one-way handling omitted ...
    t->sender_euid = proc->tsk->cred->euid;
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;
    t->flags = tr->flags;
    t->priority = task_nice(current);
    // Allocate a buffer from the target process's shared memory.
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    if (t->buffer == NULL) {
        return_error = BR_FAILED_REPLY;
        goto err_binder_alloc_buf_failed;
    }
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);

    offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

    // Copy the data from user space into the target process's buffer.
    if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
        binder_user_error("binder: %d:%d got transaction with invalid "
            "data ptr\n", proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
        binder_user_error("binder: %d:%d got transaction with invalid "
            "offsets ptr\n", proc->pid, thread->pid);
        return_error = BR_FAILED_REPLY;
        goto err_copy_data_failed;
    }
    if (!IS_ALIGNED(tr->offsets_size, sizeof(size_t))) {
        binder_user_error("binder: %d:%d got transaction with "
            "invalid offsets size, %zd\n",
            proc->pid, thread->pid, tr->offsets_size);
        return_error = BR_FAILED_REPLY;
        goto err_bad_offset;
    }
    // Whether this is a call into a remote binder method or the remote process
    // returning a result, the data is always copied from user space.
    off_end = (void *)offp + tr->offsets_size;
    // Walk over the flat binder objects that need to be translated.
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;
        if (*offp > t->buffer->data_size - sizeof(*fp) ||
            t->buffer->data_size < sizeof(*fp) ||
            !IS_ALIGNED(*offp, sizeof(void *))) {
            binder_user_error("binder: %d:%d got transaction with "
                "invalid offset, %zd\n",
                proc->pid, thread->pid, *offp);
            return_error = BR_FAILED_REPLY;
            goto err_bad_offset;
        }
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        switch (fp->type) {
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            struct binder_ref *ref;
            struct binder_node *node = binder_get_node(proc, fp->binder);
            if (node == NULL) {
                node = binder_new_node(proc, fp->binder, fp->cookie);
                if (node == NULL) {
                    return_error = BR_FAILED_REPLY;
                    goto err_binder_new_node_failed;
                }
                node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
                node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
            }
            if (fp->cookie != node->cookie) {
                binder_user_error("binder: %d:%d sending u%p "
                    "node %d, cookie mismatch %p != %p\n",
                    proc->pid, thread->pid,
                    fp->binder, node->debug_id,
                    fp->cookie, node->cookie);
                goto err_binder_get_ref_for_node_failed;
            }
            ref = binder_get_ref_for_node(target_proc, node);
            if (ref == NULL) {
                return_error = BR_FAILED_REPLY;
                goto err_binder_get_ref_for_node_failed;
            }
            if (fp->type == BINDER_TYPE_BINDER)
                fp->type = BINDER_TYPE_HANDLE;
            else
                fp->type = BINDER_TYPE_WEAK_HANDLE;
            fp->handle = ref->desc;
            binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, &thread->todo);
            if (binder_debug_mask & BINDER_DEBUG_TRANSACTION)
                printk(KERN_INFO "        node %d u%p -> ref %d desc %d\n",
                       node->debug_id, node->ptr, ref->debug_id, ref->desc);
        } break;
        case BINDER_TYPE_HANDLE:
        case BINDER_TYPE_WEAK_HANDLE: {
            // The transaction carries an existing binder proxy object.
            struct binder_ref *ref = binder_get_ref(proc, fp->handle);
            // The proxy's entity lives in the target process itself
            // (for example, an app passing its token object back to AMS).
            if (ref->node->proc == target_proc) {
                if (fp->type == BINDER_TYPE_HANDLE)
                    fp->type = BINDER_TYPE_BINDER;
                else
                    fp->type = BINDER_TYPE_WEAK_BINDER;
                fp->binder = ref->node->ptr;
                fp->cookie = ref->node->cookie;
                binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
                if (binder_debug_mask & BINDER_DEBUG_TRANSACTION)
                    printk(KERN_INFO "        ref %d desc %d -> node %d u%p\n",
                           ref->debug_id, ref->desc, ref->node->debug_id,
                           ref->node->ptr);
            } else {
                // The entity lives in another process: give the target a handle.
                struct binder_ref *new_ref;
                new_ref = binder_get_ref_for_node(target_proc, ref->node);
                if (new_ref == NULL) {
                    return_error = BR_FAILED_REPLY;
                    goto err_binder_get_ref_for_node_failed;
                }
                fp->handle = new_ref->desc;
                binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
                if (binder_debug_mask & BINDER_DEBUG_TRANSACTION)
                    printk(KERN_INFO "        ref %d desc %d -> ref %d desc %d (node %d)\n",
                           ref->debug_id, ref->desc, new_ref->debug_id,
                           new_ref->desc, ref->node->debug_id);
            }
        } break;
        // The transferred object contains a file descriptor.
        case BINDER_TYPE_FD: {
            // Handling of cross-process file descriptors (omitted).
        } break;
        default:
            binder_user_error("binder: %d:%d got transaction with "
                "invalid object type, %lx\n",
                proc->pid, thread->pid, fp->type);
            return_error = BR_FAILED_REPLY;
            goto err_bad_object_type;
        }
    }
    if (reply) {
        BUG_ON(t->buffer->async_transaction != 0);
        binder_pop_transaction(target_thread, in_reply_to);
    } else if (!(t->flags & TF_ONE_WAY)) {
        BUG_ON(t->buffer->async_transaction != 0);
        t->need_reply = 1;
        t->from_parent = thread->transaction_stack;
        thread->transaction_stack = t;
    } else {
        BUG_ON(target_node == NULL);
        BUG_ON(t->buffer->async_transaction != 1);
        if (target_node->has_async_transaction) {
            target_list = &target_node->async_todo;
            target_wait = NULL;
        } else
            target_node->has_async_transaction = 1;
    }
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
    // ... error labels omitted ...
}
```
binder_transaction is the core transaction-handling routine in binder; every binder call goes through it. In general, one binder interface call results in two invocations of binder_transaction: one for the request and one for the reply.
The function is entered whenever either the service process or the client issues a transaction; reply being true means the service process is returning a result to the client.
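On the Java side, the reply that eventually comes back as BR_REPLY is produced when the service process's Binder object dispatches the delivered transaction to onTransact and fills in the reply Parcel. Below is a simplified, hypothetical AIDL-style stub; the interface name and transaction code are made up for illustration and mirror the proxy sketch shown earlier:

```java
import android.os.Binder;
import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

// Hypothetical AIDL-style stub: the service-side counterpart of a proxy.
// The reply Parcel written here is what the client reads after its
// waitForResponse() sees BR_REPLY.
abstract class FooServiceStubSketch extends Binder {
    static final int TRANSACTION_getValue = IBinder.FIRST_CALL_TRANSACTION; // assumed code

    @Override
    protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
            throws RemoteException {
        if (code == TRANSACTION_getValue) {
            data.enforceInterface("com.example.IFooService");
            String key = data.readString();
            int result = getValue(key);   // the actual service implementation
            reply.writeNoException();
            reply.writeInt(result);       // becomes the BR_REPLY payload
            return true;
        }
        return super.onTransact(code, data, reply, flags);
    }

    public abstract int getValue(String key);
}
```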
Note that each process's binder_proc keeps its binder_ref objects in two red-black trees, refs_by_node and refs_by_desc. The two trees share the same nodes and differ only in how they are sorted. They are inserted into at the same time (when the target process does not yet have a binder_ref for the binder_node being passed), but they are queried at different times: roughly speaking, refs_by_node serves lookups driven by other processes, while refs_by_desc serves lookups made on behalf of the process itself.
When the object being transferred is a binder entity, the driver checks (via refs_by_node) whether the target process already has a binder_ref for that binder_node. If not, it creates the reference node on the target's behalf and passes the reference's handle to the target process; if it already exists, no new one is created and the existing handle is passed along.
When the object being transferred is a binder reference, the driver resolves the binder_ref (looked up by handle in refs_by_desc) to its entity binder_node. If that entity lives in the target process, the entity's pointer is passed to the target directly; if the entity does not live in the target process, a handle is passed to the target instead.
The binder code is quite complex; a great deal of it, such as reference counting and binder thread pool maintenance, has not been covered in detail. This post only traced one pipeline through it.