Why do you think you are spending a lot of time contending on row locks?

Have you tried configuring your clients to send smaller batches? This may
decrease throughput on a per-client basis but will likely improve latency
and reduce the likelihood of row lock contention.
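
For example, with the Java client you could use MANUAL_FLUSH mode and
flush after a small number of operations. Here is a sketch of that idea;
the master address, table name, and column names are placeholders, not a
drop-in fix:

import org.apache.kudu.client.*;

public class SmallBatchWriter {
  public static void main(String[] args) throws KuduException {
    try (KuduClient client =
             new KuduClient.KuduClientBuilder("kudu-master:7051").build()) {
      KuduTable table = client.openTable("metrics");
      KuduSession session = client.newSession();
      // MANUAL_FLUSH lets the application pick the batch boundary.
      session.setFlushMode(SessionConfiguration.FlushMode.MANUAL_FLUSH);
      // Keep the mutation buffer small so batches stay small.
      session.setMutationBufferSpace(100);

      int opsInBatch = 0;
      for (int i = 0; i < 1000; i++) {
        Upsert upsert = table.newUpsert();
        PartialRow row = upsert.getRow();
        row.addString("key", "row-" + i);
        row.addLong("value", i);
        session.apply(upsert);
        // Flush every 50 ops: smaller batches hold row locks on the
        // server for less time, at some cost to throughput.
        if (++opsInBatch == 50) {
          session.flush();
          opsInBatch = 0;
        }
      }
      session.flush(); // flush the trailing partial batch
      session.close();
    }
  }
}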

If you are really spending most of your time contending on row locks then
you will likely run into more fundamental performance issues trying to
scale your writes, since Kudu's MVCC implementation effectively stores a
linked list of updates to a given cell until compaction occurs. See
https://github.com/apache/kudu/blob/master/docs/design-docs/tablet.md#historical-mvcc-in-diskrowsets
for more information about the on-disk design.

If you accumulate too many uncompacted mutations against a given row,
reading the latest value of that row will be slow, because the scanner
has to replay that row's entire mutation history at read time.
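
To make that concrete, here is a toy model (my own sketch, not Kudu's
actual code) of why a long uncompacted mutation chain makes reads
expensive: writes just append, but a read has to replay every mutation
visible at its snapshot timestamp:

import java.util.ArrayList;
import java.util.List;

// Toy stand-in for one cell with a base value plus a chain of
// timestamped REDO mutations, per the tablet design doc linked above.
class Cell {
  record Mutation(long timestamp, long newValue) {}

  private final long baseValue;
  private final List<Mutation> redoChain = new ArrayList<>();

  Cell(long baseValue) { this.baseValue = baseValue; }

  void update(long timestamp, long newValue) {
    // Each uncompacted update appends to the chain: cheap to write.
    redoChain.add(new Mutation(timestamp, newValue));
  }

  long readAsOf(long snapshotTimestamp) {
    // The reader walks the whole chain, so read cost grows linearly
    // with the number of uncompacted mutations against the row.
    long value = baseValue;
    for (Mutation m : redoChain) {
      if (m.timestamp <= snapshotTimestamp) {
        value = m.newValue;
      }
    }
    return value;
  }
}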

Mike

On Tue, Sep 18, 2018 at 8:48 AM Xiaokai Wang <xiaokai.w...@live.com> wrote:

> Moved here from JIRA.
>
> Hi guys, I ran into a problem with key locks that is impacting normal
> writes to the service.
>
>
> As we all know, in Kudu a transaction proceeds to the next step only
> once it has acquired all of its row_key locks. Everything looks good
> if keys are not updated concurrently. But when the same keys are
> updated by more than one client at the same time, lock acquisition can
> wait a long time. This happens often in my production environment.
> Has anybody else met this problem? Does anyone have a good idea for it?
>
>
> My idea is to try abandoning the key locks and instead use the
> _pool_token_ 'SERIAL' mode, which keeps the transactions for a given
> key serial and ordered. Does this work?
>
>
> Hope to get your advice. Thanks.
>
>
> -----
> Regards,
> Xiaokai
>
