I think the main selling point of an RPC framework/IDL is ease of use: remote communication looks like ordinary function calls. If you have calls you're making to remote servers asking them to do work, you can fairly trivially define the interface and then call through it, using native types in function calls that are transparently transformed and sent across the wire.
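To make that concrete, here's a minimal sketch of the pattern using a Java dynamic `Proxy` as a stand-in for a generated client stub. `RegionService` and its `put` method are made up for illustration; they aren't from Geode or any real RPC framework, and a real stub would serialize the arguments and write them to a socket where this one just prints.

```java
import java.lang.reflect.Proxy;

public class RpcSketch {
    // Hypothetical service interface: the kind of thing an IDL would generate.
    public interface RegionService {
        int put(String key, String value);
    }

    public static void main(String[] args) {
        // Stand-in for a generated client stub: intercepts the call where a
        // real framework would marshal the typed arguments onto the wire.
        RegionService stub = (RegionService) Proxy.newProxyInstance(
                RegionService.class.getClassLoader(),
                new Class<?>[] { RegionService.class },
                (proxy, method, callArgs) -> {
                    System.out.println("sending " + method.getName()
                            + " with " + callArgs.length + " typed args");
                    return 42; // pretend the server replied with a result
                });

        // The caller just uses native types; marshalling is invisible.
        System.out.println("result: " + stub.put("color", "blue"));
    }
}
```

The point is that the caller never touches serialization; that transparency is exactly what gets lost once every argument has to be an opaque byte array.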
The RPC protocols I've seen are based on the idea that the types being sent are predefined -- otherwise they're hard to describe in an IDL. However, we want to support storing unstructured data, or at least data structures that are defined (from the cluster's point of view) at runtime: one of the main selling points of Geode is PDX serialization, which lets us store arbitrary object structures in the cache. If we were to use an RPC framework, we would have to make every command accept byte arrays plus some meta-information, and that loses us the ease of use. What's left in the protocol is then just the calls, the number of arguments they accept, and the order in which those (and the serialized arguments) are put on the wire.

I don't think we gain much by using a preexisting RPC language, and we lose control over the wire format and message structure. If we want to be able to make the protocol really fast and customized to our use case, and if we want to implement asynchronous requests, futures, etc., then we have to write wrappers for a given language anyway, and packing those things through an RPC framework like Thrift or gRPC will add an extra layer of confusing complexity.

Best,
Galen