Yes. Because we are using the existing JDBC mechanism provided by Spark,
HA should be supported by following the link in my previous reply.
You can try that solution, and if you face any problem then we can discuss it.
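For context, a minimal sketch of what such a ZooKeeper-based HA JDBC URL typically looks like for a Spark/Hive thrift server (the host names, port, and namespace below are hypothetical placeholders, not values from this thread):

```
jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2
```

With service discovery enabled, the JDBC driver asks ZooKeeper for a live thrift server instance instead of binding to a single host, which is what provides the failover behavior discussed above.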
On Thu, 4 Feb 2021, 1:59 pm jingych, wrote:
Hi Kunal Kapoor,
Thanks for your reply.
I've switched the carbon thrift server to Option 1.
I'll monitor the new solution for a day or two, then reply if it works.
But I still have a question about the HA solution:
We are using JDBC to connect to the carbon table.
So I want to know does
Hi,
Let's continue to discuss this. When auto merge is enabled, should we
return the segment id from before or after compaction?
My opinion is that we should return the segment id before compaction, because:
1. Users will focus on their load operation; the merge operation happens in the background,
and the users may