Brane,

There are two key features we are planning to address with the platform integration effort, apart from the trivial cache/compute APIs:

1) Portability, so that objects can travel freely between Java, C++, etc. Portability is also mandatory for query support, because our query engine is tightly coupled to Java.

2) Zero changes to existing models in user apps, so that Ignite can be integrated into legacy apps with minimal effort (a rough sketch of this is below). On the Java side our OptimizedMarshaller is already in very good shape to handle this.

This is why I would prefer to develop the marshaller from scratch instead of using existing solutions.
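To illustrate point 2, here is a rough sketch of what this could look like on the C++ side. All names here (PortableWriter, PortableReader, PersonSerializer) are hypothetical and not a committed API:

    #include <cstdint>
    #include <string>

    // Existing user model: no base class, no macros, no Ignite-specific members.
    struct Person {
        int64_t id;
        std::string name;
    };

    // Hypothetical writer/reader abstractions provided by the platform layer.
    struct PortableWriter {
        virtual void writeInt64(const char* field, int64_t val) = 0;
        virtual void writeString(const char* field, const std::string& val) = 0;
        virtual ~PortableWriter() {}
    };

    struct PortableReader {
        virtual int64_t readInt64(const char* field) = 0;
        virtual std::string readString(const char* field) = 0;
        virtual ~PortableReader() {}
    };

    // Marshalling logic lives outside the model, so the legacy type stays untouched.
    // A handler like this would be registered against a type ID before the grid starts.
    struct PersonSerializer {
        static void write(PortableWriter& w, const Person& p) {
            w.writeInt64("id", p.id);
            w.writeString("name", p.name);
        }

        static Person read(PortableReader& r) {
            Person p;
            p.id = r.readInt64("id");
            p.name = r.readString("name");
            return p;
        }
    };

The point is that Person itself never changes; only the serializer is new code.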
Vladimir.

On Tue, May 26, 2015 at 11:43 PM, Branko Čibej <br...@apache.org> wrote:

> Why don't you just use an existing IDL? Something like Thrift or
> Protobufs or ... there are quite a few of them out there. Inventing your
> own marshalling is a waste of time.
>
> -- Brane
>
>
> On 26.05.2015 22:12, Vladimir Ozerov wrote:
> > SFINAE could be a way to perform compile-time introspection:
> > http://en.wikipedia.org/wiki/Substitution_failure_is_not_an_error
> >
> > On Tue, May 26, 2015 at 5:40 PM, Denis Magda <dma...@gridgain.com> wrote:
> >
> >> Yeap, it's not so easy to marshall/unmarshall data in C++.
> >>
> >> Take a look at these slides from NVIDIA:
> >>
> >> http://on-demand.gputechconf.com/gtc/2012/presentations/S0377-C++-Data-Marshalling-Best-Practices.pdf
> >>
> >> The slides are quite high level but probably they will expose a solution
> >> to p.2.
> >>
> >> --
> >> Denis
> >>
> >>
> >> On 5/26/2015 12:09 PM, Vladimir Ozerov wrote:
> >>
> >>> Igniters,
> >>>
> >>> C++ doesn't have reflection/introspection. For this reason we have to map
> >>> user structs/classes to their marshal/unmarshal handlers (functions)
> >>> somehow.
> >>>
> >>> Various approaches for this are available:
> >>> 1) Predefined map [type ID -> marshal/unmarshal functions] which is
> >>> configured at runtime before Grid is started.
> >>> 2) Provide serializers in runtime. E.g. the following will set specific
> >>> serializers on cache projection:
> >>> ICache* cache = grid.cache(KeySerializer* k, ValSerializer* v).
> >>>
> >>> I think we should start with p.1 as it is flexible and will not require
> >>> users to change their existing types. The drawback is that user will have
> >>> to write marshalling logic by hand. But we can introduce some
> >>> code-generation facility later (e.g. like Gigaspaces does this).
> >>>
> >>> Thoughts and ideas are welcomed.
> >>>
> >>> Vladimir.
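P.S. Regarding the SFINAE link in the quoted thread above, a minimal sketch of the compile-time introspection it enables; Writer, writeTo and HasWriteTo are made-up names, just to show the idiom:

    #include <iostream>

    struct Writer; // some output abstraction, declaration only for the example

    // Classic SFINAE member-detection idiom: does T have void T::writeTo(Writer&)?
    template <typename T>
    class HasWriteTo {
        typedef char yes[1];
        typedef char no[2];

        template <typename U, void (U::*)(Writer&)> struct Check;

        template <typename U> static yes& test(Check<U, &U::writeTo>*);
        template <typename U> static no&  test(...);

    public:
        static const bool value = sizeof(test<T>(0)) == sizeof(yes);
    };

    struct WithMethod   { void writeTo(Writer&) {} };
    struct LegacyStruct { int x; };

    int main() {
        std::cout << HasWriteTo<WithMethod>::value << std::endl;   // prints 1
        std::cout << HasWriteTo<LegacyStruct>::value << std::endl; // prints 0
    }

A dispatcher could then pick the member function when it exists and fall back to an externally registered handler (option 1 from the quoted message) otherwise.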