Hi Rong,
   Do you have any numbers showing that the implementation below is
better than the current one?

Regards,
Seenu.

On Wednesday, 25 January 2012 19:11:20 UTC+5:30, Rong wrote:
>
> Hi Folks, 
>
> I've just finished a fresh implementation of the Android binder 
> driver and would love to see some suggestions or comments on the code, 
> as well as on the whole binder IPC idea. The driver can be found on 
> GitHub at 
>
> https://github.com/rong1129/android-binder-ipc 
>
> in the module/new directory. The rest of the project is a minimal 
> set of framework libraries, the service manager, and some test 
> applications. 
>
> The reason I did this project is that, while exploring the Android 
> kernel and framework, I found the existing binder driver wasn't 
> implemented efficiently, especially in the context of SMP. There's a 
> big mutex (binder_lock) that locks everyone else out while one ioctl 
> is in progress. I spent hours thinking of a way to remove it, but it 
> turned out to be impossible: pointers are shared and passed around 
> between processes all over the place, which is why most of the driver 
> is protected by that mutex. It's easy to manage, but the downside is 
> that no two IPCs can execute at the same time, regardless of how many 
> CPUs you have. A mutex held across ioctls, or any long operation, can 
> also significantly reduce a system's responsiveness. 
>
> So in the new implementation I took a new approach: a SysV-like 
> per-process message queue is implemented as the foundation of the 
> driver. Unlike the SysV queue, it's used only in the kernel, mainly 
> by drivers. I deliberately separated it out in the hope that it would 
> also be useful to other drivers. The queue is designed so that queue 
> identifiers (addresses) can be passed across processes, and queues 
> can be accessed by different processes as long as the proper get/set 
> methods are called. 
>
> The binder driver is built on top of the queue mechanism, and its 
> other data structures are designed carefully so that maximum 
> concurrency can be reached with minimum locking. For example, the 
> binder node and refs in the existing driver are replaced with a 
> single structure, binder_obj. Objects (nodes and refs in the existing 
> terms) are created only in the current process context (shared by its 
> threads) and are not accessed by other processes. 
>
> In terms of performance, the current version is slightly better than 
> the existing driver, in particular in concurrent IPC scenarios. As I 
> have only just finished the coding and some simple tests, not much 
> has been done yet in terms of tuning or optimization, but I will 
> surely get to that in the coming days, along with completing whatever 
> is left. 
>
> To summarize the status: I managed to implement most of the 
> protocol, and as of now the standard binderAddInts test application 
> works properly. Most of the existing implementation's behaviour is 
> covered, except for the few things below. 
>
> * mmap and user-buffer allocation - the only 
> incompatibility so far 
> The existing mmap mechanism does reduce data copying from twice to 
> once, but going into another process's address space to allocate and 
> manage buffers needs a big lock to avoid a lot of nasty situations, 
> and there is no guarantee the other process won't be killed while you 
> are writing into its space. The extra buffer-management overhead can 
> also easily cancel out the benefit it brings. 
>
> I implemented it in the traditional way, with two data copies per 
> transaction: process A to kernel, then kernel to process B. This is 
> simple and what most drivers do (when DMA is not involved). It is the 
> only incompatible place in the kernel/user API so far. There's no 
> difference in how the kernel reads data from user space, but for 
> writes the driver writes the transaction data into the buffer 
> supplied in the binder_write_read structure, instead of into a 
> pre-allocated mmap-ed buffer. As a result, the application is 
> expected to follow the same logic to read the data back, and of 
> course to provide a large enough read buffer when doing the ioctl. 
>
> * File descriptor sharing across processes 
> This isn't used by the test application, so I haven't considered it 
> yet. I'm also not convinced it's that useful, as one can implement 
> something similar in user space by re-opening the file; in any case, 
> supporting it wouldn't affect concurrency much, since that's already 
> taken care of by the VFS. 
>
> * Priority inheritance 
> I'm not sure whether this exists to avoid priority inversion, but 
> there also seems to be priority adjustment in the framework, which is 
> confusing; I'm not entirely sure how the two work together. I will 
> probably look into it a little later. 
>
> * Reference counting, etc. 
> The whole strong/weak reference machinery in the kernel just 
> complicates the driver, IMHO; it could well be enforced purely at the 
> user level. There's not much point in strong referencing at the 
> driver level, since a process can quit whether it wishes to or not, 
> and regardless of whether it holds strong references to anything. 
> What matters is that the driver provides a transparent channel and a 
> proper shutdown notification to applications, so they can maintain 
> those references properly between themselves. 
>
> At the moment there's a hack in the implementation that sends an 
> acquire command to user space whenever a binder object is written 
> through the driver, just to stop the application from crashing: if no 
> one holds a reference to the object, it is destroyed right after the 
> addService() call. It took me hours to figure that out. 
>
> That's it - a good summary of the last ten days or so spent working 
> on the driver, plus a lot more time spent trying to understand how it 
> all works :(. Anyway, it's GPLed, so feel free to try it out and 
> contribute. 
>
> Cheers, 
> Rong 
>

-- 
unsubscribe: android-kernel+unsubscr...@googlegroups.com
website: http://groups.google.com/group/android-kernel
