On Fri, Jun 23, 2017 at 10:30 AM, Sean Busbey <[email protected]> wrote:
> On Fri, Jun 23, 2017 at 12:06 PM, Stack <[email protected]> wrote:
> > On Wed, Jun 21, 2017 at 9:31 AM, Sean Busbey <[email protected]> wrote:
> > ....
> > I don't know enough about the integration, but is the 'handling of
> > Phoenix encoded data' about mapping Spark types to a serialization in
> > HBase? If not, where is the need for seamless transforms between Spark
> > types and a natural HBase serialization listed? We need this, IIRC.
>
> It's a subtask, really. We already have a pluggable system for mapping
> between Spark types and a couple of serialization options (the docs
> need improvement?).
>
> SHC has its own pluggable system, with the addition of a Phoenix
> encoding. That set seems like the most likely out-of-the-box formats
> folks might have something in. (Maybe Kite? I think it's different
> than the rest.)
>
> Or are you saying we can just map all of it to the hbase-common
> "types" and then do the pluggable part under it?

Not making any prescription. I was just worried about type marshalling in
and out of Spark: concerned that the serialization would be something
other than 'natural' for HBase, that it would not be performant, and that
we might end up with a profusion of mechanisms. If it's a noted subtask,
that's grand.

Thanks,
S
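For readers outside the thread, the "pluggable system" being discussed maps Spark SQL values to byte encodings stored in HBase, with the encoding swappable per deployment (a "native" big-endian encoding versus something like Phoenix's sort-order-preserving encoding). A minimal sketch of that idea, with all class and method names invented for illustration (the real hbase-spark and SHC interfaces differ):

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: one codec interface, multiple interchangeable
// byte encodings, as in the pluggable systems discussed above.
interface ValueCodec {
    byte[] encodeInt(int v);
    int decodeInt(byte[] bytes);
}

// Plain big-endian encoding, in the spirit of
// org.apache.hadoop.hbase.util.Bytes.toBytes(int).
class NativeCodec implements ValueCodec {
    public byte[] encodeInt(int v) {
        return ByteBuffer.allocate(4).putInt(v).array();
    }
    public int decodeInt(byte[] bytes) {
        return ByteBuffer.wrap(bytes).getInt();
    }
}

// Phoenix-style sortable encoding: flip the sign bit so negative values
// sort before positive ones under unsigned lexicographic byte order.
class SortableCodec implements ValueCodec {
    public byte[] encodeInt(int v) {
        return ByteBuffer.allocate(4).putInt(v ^ Integer.MIN_VALUE).array();
    }
    public int decodeInt(byte[] bytes) {
        return ByteBuffer.wrap(bytes).getInt() ^ Integer.MIN_VALUE;
    }
}

public class CodecDemo {
    public static void main(String[] args) {
        // Both codecs round-trip the same value; only the on-disk bytes differ.
        ValueCodec[] codecs = { new NativeCodec(), new SortableCodec() };
        for (ValueCodec c : codecs) {
            int roundTripped = c.decodeInt(c.encodeInt(-42));
            System.out.println(c.getClass().getSimpleName() + " -> " + roundTripped);
        }
    }
}
```

The point of the sign-bit flip in the second codec is that HBase compares row keys as unsigned bytes, so an encoding that preserves sort order lets range scans work directly on encoded keys.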
