identify which block of IDs to assign each one?
Thanks,
David
From: Ted Dunning [mailto:tdunn...@maprtech.com]
Sent: Monday, October 29, 2012 12:58 PM
To: user@hadoop.apache.org
Subject: Re: Cluster wide atomic operations
On 29 October 2012 01:15, David Parks davidpark...@yahoo.com wrote:
I need a unique, permanent ID assigned to each new item encountered, with the
constraint that it be in the range of, let’s say for simple discussion,
one to one million.
I'd go for UUID generation, which you can do in parallel
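The UUID suggestion needs no coordination at all, since each task generates IDs independently. A minimal sketch (class and method names are illustrative, not from the thread):

```java
import java.util.UUID;

public class UuidIdAssigner {
    // Each mapper/reducer can call this independently; type-4 UUIDs are
    // random, so no cluster-wide coordination is needed and the collision
    // probability is negligible for any realistic item count.
    public static String newItemId() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        System.out.println(newItemId());
    }
}
```

Note the trade-off: a random UUID cannot satisfy the one-to-one-million range constraint raised elsewhere in this thread; it buys coordination-free generation at the cost of a large, unstructured ID space.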
Sent: Saturday, October 27, 2012 12:23 PM
To: user@hadoop.apache.org
Subject: Re: Cluster wide atomic operations
This is better asked on the Zookeeper lists.
The first answer is that global atomic operations are a generally bad idea.
The second answer is that if you can batch these operations up […] a global counter?
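The batching idea can be sketched generically: pay for one cluster-wide atomic operation per *block* of IDs rather than per ID. The class below is a hypothetical illustration; the `LongSupplier` stands in for whatever provides the atomic bump — for example, creating a ZooKeeper PERSISTENT_SEQUENTIAL znode and multiplying its sequence number by the block size, or a Curator DistributedAtomicLong.

```java
import java.util.function.LongSupplier;

// Batched ID allocation: one cluster-wide atomic operation per BLOCK of
// IDs; IDs within the block are then handed out locally at no cost.
public class BatchedIdAllocator {
    private final LongSupplier nextBlockStart; // e.g. backed by a ZooKeeper counter
    private final long blockSize;
    private long next = 0, limit = 0;

    public BatchedIdAllocator(LongSupplier nextBlockStart, long blockSize) {
        this.nextBlockStart = nextBlockStart;
        this.blockSize = blockSize;
    }

    public synchronized long nextId() {
        if (next >= limit) {                   // block exhausted: one remote call
            next = nextBlockStart.getAsLong();
            limit = next + blockSize;
        }
        return next++;                          // everything else is local
    }

    public static void main(String[] args) {
        // Stand-in for the shared counter: each call simulates one atomic
        // bump of a cluster-wide sequence, scaled to a block start.
        java.util.concurrent.atomic.AtomicLong zk = new java.util.concurrent.atomic.AtomicLong();
        BatchedIdAllocator ids = new BatchedIdAllocator(() -> zk.getAndIncrement() * 1000, 1000);
        for (int i = 0; i < 3; i++) System.out.println(ids.nextId());
    }
}
```

With a block size of 1000, a job assigning a million IDs touches the coordination service only about a thousand times instead of a million.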
On Fri, Oct 26, 2012 at 11:07 PM, David Parks davidpark...@yahoo.com
wrote:
How can we manage cluster-wide atomic operations, such as maintaining an
auto-increment counter?
Does Hadoop provide native support for these kinds of operations?
And in case the ultimate answer involves zookeeper, I'd love to work out doing
this in AWS/EMR.
On Sun, Oct 28, 2012 at 9:15 PM, David Parks davidpark...@yahoo.com wrote:
I need a unique, permanent ID assigned to each new item encountered, with the
constraint that it be in the range of, let’s say for simple discussion,
one to one million.
Having such a limited range may require that you
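One common way to live within such a limited range — and to answer the block-assignment question at the top of the thread — is to pre-partition the range so each of N tasks owns a disjoint block derived from its task index (in Hadoop, e.g. the partition number of the task). A hedged sketch, with illustrative names; it only works if the tasks' combined demand fits the range:

```java
// Hypothetical sketch: give each of numTasks tasks a disjoint block of
// the 1..rangeMax ID space, derived from its task index, so no runtime
// coordination is needed at all.
public class RangeBlockAssigner {
    private final long blockEnd;   // last ID owned by this task (inclusive)
    private long next;             // next unused ID in this task's block

    public RangeBlockAssigner(int taskIndex, int numTasks, long rangeMax) {
        long blockSize = rangeMax / numTasks;          // e.g. 1_000_000 / 100 tasks
        long blockStart = (long) taskIndex * blockSize + 1;
        // Last task absorbs any remainder so the whole range is covered.
        this.blockEnd = (taskIndex == numTasks - 1) ? rangeMax
                                                    : blockStart + blockSize - 1;
        this.next = blockStart;
    }

    // Returns the next unused ID in this task's block, or -1 if exhausted.
    public long nextId() {
        return (next <= blockEnd) ? next++ : -1;
    }

    public static void main(String[] args) {
        RangeBlockAssigner task0 = new RangeBlockAssigner(0, 100, 1_000_000);
        RangeBlockAssigner task99 = new RangeBlockAssigner(99, 100, 1_000_000);
        System.out.println(task0.nextId());   // first ID of task 0's block
        System.out.println(task99.nextId());  // first ID of task 99's block
    }
}
```

The weakness is uneven consumption: a task that sees many new items can exhaust its block while its neighbors' blocks sit mostly unused, which is presumably why a limited range may instead require a coordinated allocator.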