*Background:*
As we know, Apache Arrow is a cross-language development platform for
in-memory data. It specifies a standardised, language-independent columnar
memory format for flat and hierarchical data, organised for efficient
analytic operations on modern hardware.
So, by integrating Carbon to
Can you try the below schema with the mentioned table properties and check the
query performance against Parquet once?
carbon.sql("create table carbon_test_new_7(flowSeqNum integer, protocolId
integer, srcTos integer, dstTos integer, tcpBits integer, srcPort integer,
dstPort integer, workerId integer
Hello all,
Please find the updated design document for incremental data loading
at the below link.
https://docs.google.com/document/d/1AACOYmBpwwNdHjJLOub0utSc6JCBMZn8VL5CvZ9hygA/edit?usp=sharing
On Thu, Apr 4, 2019 at 12:29 PM chetdb wrote:
> 1. Will dataloading to MV be supported for all the