On Tue, Jun 27, 2017 at 5:45 PM Alexander Kolbasov <ak...@cloudera.com>
wrote:

>
>    - Rather than streaming huge snapshots in a single message, we should
>    provide a streaming protocol with smaller messages and reassembly on
>    the HDFS side.
>
[bt] If we are going to stick with the current flow, then I think this is
going to be one of the better options.  Multiple calls chunking out the
paths at some optimized number per thrift call (like 1k), with a "that's
all folks" call at the end, would allow the structure to be assembled on
the HDFS side.  A rough sketch of what I mean is below.

But I feel that we really don't need to send all of the data over to the
HDFS side immediately.  We could make on-demand calls (maybe on a
per-directory basis) from the HDFS client to Sentry, which then populate a
cache on the HDFS node side.  Updates would still be pushed as they occur,
hence a gradual warming of the cache.  Yes, this would slow down the first
access to a given path, but since this is for paths directly managed and
served by HDFS, that initial slowdown should be fairly negligible, assuming
the Sentry turnaround for the call is fairly cheap.  This is essentially
what Hive does today, minus the cache and the pushed updates, I believe.
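
A rough sketch of the HDFS-side cache (SentryAuthzSource and PathAuthzInfo
are made-up names for whatever ends up carrying the per-path authz
metadata):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical source of per-directory authz data, i.e. the
    // on-demand call over to Sentry.
    interface SentryAuthzSource {
      PathAuthzInfo fetchAuthzForDirectory(String dirPath);
    }

    class PathAuthzInfo {
      // per-directory authz metadata (groups, perms, etc.) would live here
    }

    public class LazyAuthzCache {
      private final Map<String, PathAuthzInfo> cache = new ConcurrentHashMap<>();
      private final SentryAuthzSource sentry;

      public LazyAuthzCache(SentryAuthzSource sentry) {
        this.sentry = sentry;
      }

      // First access for a directory pays one round trip to Sentry;
      // everything after that is served from the local cache.
      public PathAuthzInfo getAuthz(String dirPath) {
        return cache.computeIfAbsent(dirPath, sentry::fetchAuthzForDirectory);
      }

      // Pushed updates from Sentry land here as they occur, so the cache
      // warms gradually instead of being bulk-loaded up front.
      public void applyUpdate(String dirPath, PathAuthzInfo updated) {
        cache.put(dirPath, updated);
      }
    }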

>
>    - Most of the information passed consists of long strings with common
>    prefixes. We should be able to apply simple compression techniques
>    (e.g. prefix compression) or even run a full compression pass on the
>    data before sending.
>
>
I think if we are going down this road, we should really look at passing a
true tree structure instead of trying to compress the data outright.  With
a tree structure, each path component is listed only once, in its place in
the tree.  A sketch of the idea is below.
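
Something like this (again just a sketch, the names are made up):

    import java.util.HashMap;
    import java.util.Map;

    // A trie of path components for the wire format: each component
    // ("user", "hive", "warehouse", ...) appears once at its place in
    // the tree instead of being repeated in every full-path string.
    public class PathTrieNode {
      private final String component;
      private final Map<String, PathTrieNode> children = new HashMap<>();

      public PathTrieNode(String component) {
        this.component = component;
      }

      public String getComponent() {
        return component;
      }

      // Insert one path, e.g. "user/hive/warehouse/t1", reusing any
      // prefix nodes that already exist.
      public void insert(String relativePath) {
        PathTrieNode node = this;
        for (String part : relativePath.split("/")) {
          node = node.children.computeIfAbsent(part, PathTrieNode::new);
        }
      }
    }

Serialized depth-first, this gives us the prefix compression for free, and
the HDFS side can rebuild its own structure directly from it.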


>
>    - We should consider using non-thrift data structures for passing the
>    info and just use Thrift as a transport mechanism.
>
I'm not sure why we would break protocol compatibility with something
custom.  I feel we can work around this; I'm not fully convinced we can,
but I think going custom should be a last resort.




> - Sasha
>
-- 
*Brian Towles* | Software Engineer
t. (512) 415-8105 e. btow...@cloudera.com
cloudera.com <http://www.cloudera.com/>
