Thank you!
And I have a question that is not related to this issue.
Do you have any experience with dependency version conflicts when using
statically loaded coprocessors?
Best regards,
Minwoo Kang
From: 张铎(Duo Zhang)
Sent: Wednesday, May 15, 2019 14:43
To: hba
Based on your usage, where only the system admin can access HBase directly,
I think it is fine to use a table-level coprocessor. Usually, just do not let
end users make use of coprocessors directly.
And HBaseAdmin.modifyTable will override the old configs, so usually the
code will be
HTableDescriptor
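The reply above is cut off, but the usual HBase 1.x sequence is: fetch the descriptor, attach the coprocessor attribute, then call modifyTable so the regions reopen with the new config. A minimal sketch of the attribute value involved (the jar path and observer class below are placeholders, not from this thread); the table attribute key is `Coprocessor$1` (then `$2`, ...) and the value format is `<jar path>|<class name>|<priority>|<key=value args>`:

```java
// Sketch: build the table-attribute value HBase parses for a
// table-level coprocessor. Jar path and class name are placeholders.
public class CoprocessorSpec {
    static String spec(String jarPath, String className, int priority, String args) {
        // Value format: <jar path>|<class name>|<priority>|<key=value args>
        return jarPath + "|" + className + "|" + priority + "|" + args;
    }

    public static void main(String[] args) {
        String value = spec("hdfs:///user/hbase/coprocessor/my-observer.jar",
                            "com.example.MyRegionObserver",
                            1073741823,  // Coprocessor.PRIORITY_USER
                            "");
        System.out.println(value);
        // Against a real cluster (HBase 1.x API) this would then be, roughly:
        //   HTableDescriptor htd = admin.getTableDescriptor(tableName);
        //   htd.setValue("Coprocessor$1", value);  // or htd.addCoprocessor(...)
        //   admin.modifyTable(tableName, htd);     // triggers the region reopen
    }
}
```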
Thanks! I didn't know that.
The HBaseAdmin.modifyTable method looks like it overwrites the previous
configuration. Is that correct?
In my case, I provide a service using HBase, and only the admin accesses
HBase directly.
These are the reasons why I chose dynamic coprocessor loading:
1) I don't want to run into the dependency
I see multiple moving parts here. Using the batch API causes your client to
do a single RPC call, which is more efficient than multiple calls.
But in my understanding you can also tune your client behaviour by manually
flushing your write buffer after multiple put(Put) calls.
I think you need to test you
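The trade-off described above can be illustrated with a toy model (this is not the real HBase client API, just a sketch of the write-buffer behaviour): with auto-flush each put costs one RPC, while a manually flushed buffer sends the same puts in a single batch RPC.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the client-side write buffer (not the real HBase API):
// puts accumulate locally, and one flush sends them as a single batch
// "RPC", versus one RPC per put when auto-flush is on.
public class WriteBufferSketch {
    static int rpcCount = 0;

    // Each call stands in for one RPC, regardless of batch size.
    static void sendBatch(List<String> puts) { rpcCount++; }

    public static void main(String[] args) {
        // Auto-flush style: each put is its own RPC.
        rpcCount = 0;
        for (int i = 0; i < 5; i++) sendBatch(List.of("row" + i));
        int autoFlushRpcs = rpcCount;

        // Buffered style: collect the puts, flush once -> one RPC.
        rpcCount = 0;
        List<String> buffer = new ArrayList<>();
        for (int i = 0; i < 5; i++) buffer.add("row" + i);
        sendBatch(buffer);
        int bufferedRpcs = rpcCount;

        System.out.println(autoFlushRpcs + " vs " + bufferedRpcs);
    }
}
```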
You have to call HBaseAdmin.modifyTable to trigger a region reopen.
And for me, I haven't made use of table-level coprocessors in real
production, as they are a bit dangerous in a multi-tenant environment. Usually
we add coprocessors at the cluster level, through the config file. So I'm not
sure why we do
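For reference, the cluster-level loading mentioned above goes through hbase-site.xml; a sketch of the relevant entry (the observer class name is a placeholder), which loads the coprocessor on every region of every table after a restart:

```xml
<!-- hbase-site.xml: static, cluster-level coprocessor loading -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>com.example.MyRegionObserver</value>
</property>
```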
Thank you for your reply.
I tried to update the table descriptor using
HTableDescriptor#setValue(byte[], byte[]).
The table descriptor changed successfully,
but the regions didn't reopen, so the new jar wasn't applied.
Why don't we provide a coprocessor jar file update method for users?
Is it not a