[ https://issues.apache.org/jira/browse/COUCHDB-465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12745396#action_12745396 ]

Bob Dionne commented on COUCHDB-465:
------------------------------------

I tested this all against a clean checkout of trunk and it looks good. The new 
algorithm is faster on inserts [1], but interestingly, for single inserts it makes 
for a larger db pre-compact. After compaction the db is smaller by a factor of 3.

I thought we'd leave the new "sequentially random" algorithm as the default? Paul, 
is the concern just what's advertised to users? 
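(For what it's worth, I'm assuming users would end up choosing the algorithm 
through the server .ini, something along these lines; the section and key names 
below are my guess, not something taken from the patch:

    [uuids]
    algorithm = sequential
)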

I think algorithms based on the system clock can be problematic, as they assume 
all machines have the correct time. 


[1] http://gist.github.com/170982

> Produce sequential, but unique, document id's
> ---------------------------------------------
>
>                 Key: COUCHDB-465
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-465
>             Project: CouchDB
>          Issue Type: Improvement
>            Reporter: Robert Newson
>         Attachments: couch_uuids.patch, uuid_generator.patch
>
>
> Currently, if the client does not specify an id (POST'ing a single document 
> or using _bulk_docs), a random 16-byte value is created. This kind of key is 
> particularly brutal on b+tree updates and the append-only nature of couchdb 
> files.
> Attached is a patch to change this to a two-part identifier. The first part 
> is a random 12-byte value and the remainder is a counter. The random prefix 
> is re-randomized when the counter reaches its maximum. The rollover in the 
> patch is at 16 million but can obviously be changed. The upshot is that the 
> b+tree is updated in a better fashion, which should lead to performance 
> benefits.
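
For readers following along, here is a rough Python sketch of the two-part scheme 
described above. The real patch is Erlang; the class name, hex encoding, and id 
width below are illustrative assumptions rather than what the patch actually does:

    import os

    ROLLOVER = 16_000_000  # counter rollover mentioned in the patch description

    class SequentialUuid:
        """Two-part id: random 12-byte prefix plus an incrementing counter."""

        def __init__(self):
            self._reseed()

        def _reseed(self):
            # Fresh random prefix; the counter restarts, so ids stay unique
            # and increase monotonically within each prefix run.
            self._prefix = os.urandom(12).hex()   # 24 hex characters
            self._counter = 0

        def next_id(self):
            if self._counter >= ROLLOVER:
                self._reseed()
            self._counter += 1
            # six hex digits are enough for values up to ~16.7 million
            return "%s%06x" % (self._prefix, self._counter)

    gen = SequentialUuid()
    print(gen.next_id())
    print(gen.next_id())

Within a run the suffix increases monotonically, so consecutive inserts land near 
each other in the by-id b+tree, which is where the update-pattern benefit described 
above should come from.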

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

Reply via email to