[jira] [Commented] (ZOOKEEPER-2368) Client watches are not disconnected on close
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15812264#comment-15812264 ]

Jordan Zimmerman commented on ZOOKEEPER-2368:
---------------------------------------------

Sure, that's reasonable.

> Client watches are not disconnected on close
> --------------------------------------------
>
>                 Key: ZOOKEEPER-2368
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2368
>             Project: ZooKeeper
>          Issue Type: Improvement
>    Affects Versions: 3.4.0, 3.5.0
>            Reporter: Timothy Ward
>            Assignee: Timothy Ward
>             Fix For: 3.5.3, 3.6.0
>
>         Attachments: ZOOKEEPER-2368.patch
>
> If I have a ZooKeeper client connected to an ensemble then obviously I can
> register watches.
> If the client is disconnected (for example by a failing ensemble member) then
> I get a disconnection event for all of my watches. If, on the other hand, my
> client is closed then I *do not* get a disconnection event. This asymmetry
> makes it really hard to clean up properly when using the asynchronous API, as
> there is no way to "fail" data reads/updates when the client is closed.
> I believe that the correct behaviour is for all watchers to receive a
> disconnection event when the client is closed. The watchers can then respond
> as appropriate, and can differentiate between a "server disconnect" and a
> "client disconnect" by checking the ZooKeeper#getState() method.
> This would not be a breaking behaviour change, as Watchers are already
> required to handle disconnection events.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
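The pattern proposed in the description — a watcher telling a server-side disconnect from a client-side close by consulting the handle's state — reduces to a small decision function. The sketch below is illustrative only: `ClientState` is a stand-in enum mirroring a subset of the real `ZooKeeper.States` names, not the actual ZooKeeper class.

```java
// Sketch of the disconnect-disambiguation logic proposed in the issue.
// ClientState is a stand-in; a real Watcher would compare the incoming
// KeeperState.Disconnected event against ZooKeeper#getState().
public class DisconnectOrigin {
    // Stand-in mirroring the subset of ZooKeeper.States relevant here
    enum ClientState { CONNECTED, CONNECTING, CLOSED }

    /** Labels why a Disconnected watch event arrived. */
    public static String classify(ClientState handleState) {
        // If the handle was closed by the application, the session is
        // ending deliberately; otherwise the ensemble dropped the client.
        return handleState == ClientState.CLOSED
                ? "client disconnect"
                : "server disconnect";
    }
}
```

A watcher using this would treat "client disconnect" as terminal and fail any pending asynchronous reads/updates, which is exactly the cleanup the issue says is currently impossible.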
[jira] [Commented] (ZOOKEEPER-2368) Client watches are not disconnected on close
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808215#comment-15808215 ]

Jordan Zimmerman commented on ZOOKEEPER-2368:
---------------------------------------------

FYI - I did some quick tests with Curator and this patch works fine.
[jira] [Commented] (ZOOKEEPER-2368) Client watches are not disconnected on close
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808019#comment-15808019 ]

Jordan Zimmerman commented on ZOOKEEPER-2368:
---------------------------------------------

Another thing: wouldn't {{KeeperState.Expired}} be the appropriate event, not Disconnected? By definition, the session is ending when the ZooKeeper handle is closed.
[jira] [Commented] (ZOOKEEPER-2368) Client watches are not disconnected on close
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808014#comment-15808014 ]

Jordan Zimmerman commented on ZOOKEEPER-2368:
---------------------------------------------

I've been thinking about this issue lately and I think it's actually very important to add. But, to protect backward compatibility, it can be added as an option; i.e. you'd call a method for ZooKeeper to add this behavior. Maybe even an alternate close() method. Thoughts?
[jira] [Commented] (ZOOKEEPER-1416) Persistent Recursive Watch
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793085#comment-15793085 ]

Jordan Zimmerman commented on ZOOKEEPER-1416:
---------------------------------------------

FYI - I've written Curator recipes for PersistentWatch and a replacement for all of Curator's various "cache" implementations. The new code is so much simpler to reason about. Having to manage watches on every node is extremely complicated. Dealing with a single watcher is orders of magnitude simpler, not to mention the memory/resource savings. https://github.com/apache/curator/pull/181

> Persistent Recursive Watch
> --------------------------
>
>                 Key: ZOOKEEPER-1416
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1416
>             Project: ZooKeeper
>          Issue Type: Improvement
>          Components: c client, documentation, java client, server
>            Reporter: Phillip Liu
>            Assignee: Jordan Zimmerman
>         Attachments: ZOOKEEPER-1416.patch, ZOOKEEPER-1416.patch
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> h4. The Problem
> A ZooKeeper Watch can be placed on a single znode, and when the znode changes
> a Watch event is sent to the client. If there are thousands of znodes being
> watched, then when a client (re)connects it has to send thousands of watch
> requests. At Facebook, we have this problem storing information for thousands
> of db shards. Consequently, a naming service that consumes the db shard
> definition issues thousands of watch requests each time the service starts
> and changes client watcher.
> h4. Proposed Solution
> We add the notion of a Persistent Recursive Watch in ZooKeeper. Persistent
> means no Watch reset is necessary after a watch-fire. Recursive means the
> Watch applies to the node and descendant nodes. A Persistent Recursive Watch
> behaves as follows:
> # Recursive Watch supports all Watch semantics: CHILDREN, DATA, and EXISTS.
> # CHILDREN and DATA Recursive Watches can be placed on any znode.
> # EXISTS Recursive Watches can be placed on any path.
> # A Recursive Watch behaves like an auto-watch registrar on the server side.
> Setting a Recursive Watch means setting watches on all descendant znodes.
> # When a watch on a descendant fires, no subsequent event is fired until a
> corresponding getData(..) on the znode is called; the Recursive Watch then
> automatically applies the watch on the znode. This maintains the existing
> Watch semantic on an individual znode.
> # A Recursive Watch overrides any watches placed on a descendant znode.
> Practically this means the Recursive Watch Watcher callback is the one
> receiving the event, and the event is delivered exactly once.
> A goal here is to reduce the number of semantic changes. The guarantee of no
> intermediate watch event until data is read will be maintained. The only
> difference is we will automatically re-add the watch after read. At the same
> time we add the convenience of reducing the need to add multiple watches for
> sibling znodes, and in turn reduce the number of watch messages sent from the
> client to the server.
> There are some implementation details that need to be hashed out. Initial
> thinking is to have the Recursive Watch create per-node watches. This will
> cause a lot of watches to be created on the server side. Currently, each
> watch is stored as a single bit in a bit set relative to a session - up to 3
> bits per client per znode. If there are 100m znodes with 100k clients, each
> watching all nodes, then this strategy will consume approximately 3.75TB of
> RAM distributed across all Observers. Seems expensive.
> Alternatively, a blacklist of paths to which no Watches are sent, regardless
> of Watch setting, can be set each time a watch event from a Recursive Watch
> is fired. The memory utilization is relative to the number of outstanding
> reads, and in the worst case it's 1/3 * 3.75TB using the parameters given
> above.
> Otherwise, a relaxation of the no-intermediate-watch-event-until-read
> guarantee is required. If the server can send watch events regardless of
> whether one has already been fired without a corresponding read, then the
> server can simply fire watch events without tracking.
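The 3.75TB figure in the quoted description follows directly from its stated parameters. A quick check of the arithmetic (decimal terabytes, i.e. 10^12 bytes):

```java
// Back-of-the-envelope check of the memory estimate quoted above:
// 100m znodes x 100k clients x up to 3 watch bits per client per znode.
public class WatchMemoryEstimate {
    public static double terabytes() {
        long znodes = 100_000_000L;      // 100m znodes
        long clients = 100_000L;         // 100k clients, each watching all nodes
        long bitsPerClientPerZnode = 3L; // 3 watch bits per client per znode
        long totalBits = znodes * clients * bitsPerClientPerZnode; // 3e13 bits
        double totalBytes = totalBits / 8.0;                       // 3.75e12 bytes
        return totalBytes / 1e12;        // decimal terabytes
    }
}
```

This yields 3.75, matching the description; the blacklist alternative's 1/3 factor corresponds to tracking only one of the three watch bits per outstanding read.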
[jira] [Commented] (ZOOKEEPER-1416) Persistent Recursive Watch
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15781440#comment-15781440 ]

Jordan Zimmerman commented on ZOOKEEPER-1416:
---------------------------------------------

It turns out that ignoring {{NodeChildrenChanged}} for persistent watches is very easy and makes its usage clearer, IMO. Barring objection, I'm going to push this change.
[jira] [Commented] (ZOOKEEPER-1416) Persistent Recursive Watch
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15779538#comment-15779538 ]

Jordan Zimmerman commented on ZOOKEEPER-1416:
---------------------------------------------

Question for reviewers: right now, when a node is created or deleted, the watcher gets two events: one for the parent node as {{NodeChildrenChanged}}, and one for the node being created/deleted. This is the most flexible behavior but maybe redundant. Options:
* Leave as is: 2 events
* Only fire the {{NodeChildrenChanged}}
* Only fire the {{NodeCreated}} / {{NodeDeleted}}
[jira] [Commented] (ZOOKEEPER-1416) Persistent Recursive Watch
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15778916#comment-15778916 ]

Jordan Zimmerman commented on ZOOKEEPER-1416:
---------------------------------------------

testCurrentObserverIsParticipantInNewConfig() is a known flaky test, I believe.
[jira] [Updated] (ZOOKEEPER-1416) Persistent Recursive Watch
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jordan Zimmerman updated ZOOKEEPER-1416:
----------------------------------------
    Attachment: ZOOKEEPER-1416.patch

Fixed Jenkins issues.
[jira] [Updated] (ZOOKEEPER-1416) Persistent Recursive Watch
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jordan Zimmerman updated ZOOKEEPER-1416:
----------------------------------------
    Attachment: ZOOKEEPER-1416.patch

Here is a completed implementation of a persistent, recursive watch addition for ZK. These watches are set via a new method, addPersistentWatch(), and are removed via the existing watcher removal methods. Persistent, recursive watches have these characteristics:
* Once set, they do not auto-remove when triggered.
* They trigger for all event types (child, data, etc.) on the node they are registered for and on any child znode, recursively.
* They are efficiently implemented using the existing watch internals. A new class, PathIterator, walks up the path parent-by-parent when checking whether a watcher applies.

Persistent-watcher-specific tests are in PersistentWatcherTest.java. I'd appreciate feedback on other tests that should be added.
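The patch's PathIterator is described as walking up a path parent-by-parent when checking whether a persistent watcher applies. The class below is a hypothetical stand-alone sketch of that walk, not the patch's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the parent-by-parent walk described for PathIterator.
// It enumerates a znode path and all of its ancestors - the set of paths a
// persistent recursive watcher registration must be checked against.
public class PathWalkSketch {
    public static List<String> selfAndAncestors(String path) {
        List<String> out = new ArrayList<>();
        String current = path;
        out.add(current);
        while (!current.equals("/")) {
            int slash = current.lastIndexOf('/');
            // "/a" -> "/", "/a/b" -> "/a"
            current = (slash == 0) ? "/" : current.substring(0, slash);
            out.add(current);
        }
        return out;
    }
}
```

A watch check for an event at /a/b/c would then test each of /a/b/c, /a/b, /a, and / for a registered persistent recursive watcher, which is why no per-descendant server-side state is needed.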
[jira] [Assigned] (ZOOKEEPER-1416) Persistent Recursive Watch
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman reassigned ZOOKEEPER-1416: --- Assignee: Jordan Zimmerman (was: Thawan Kooburat) > Persistent Recursive Watch > -- > > Key: ZOOKEEPER-1416 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1416 > Project: ZooKeeper > Issue Type: Improvement > Components: c client, documentation, java client, server >Reporter: Phillip Liu >Assignee: Jordan Zimmerman > Original Estimate: 504h > Remaining Estimate: 504h > > h4. The Problem > A ZooKeeper Watch can be placed on a single znode and when the znode changes > a Watch event is sent to the client. If there are thousands of znodes being > watched, when a client (re)connect, it would have to send thousands of watch > requests. At Facebook, we have this problem storing information for thousands > of db shards. Consequently a naming service that consumes the db shard > definition issues thousands of watch requests each time the service starts > and changes client watcher. > h4. Proposed Solution > We add the notion of a Persistent Recursive Watch in ZooKeeper. Persistent > means no Watch reset is necessary after a watch-fire. Recursive means the > Watch applies to the node and descendant nodes. A Persistent Recursive Watch > behaves as follows: > # Recursive Watch supports all Watch semantics: CHILDREN, DATA, and EXISTS. > # CHILDREN and DATA Recursive Watches can be placed on any znode. > # EXISTS Recursive Watches can be placed on any path. > # A Recursive Watch behaves like a auto-watch registrar on the server side. > Setting a Recursive Watch means to set watches on all descendant znodes. > # When a watch on a descendant fires, no subsequent event is fired until a > corresponding getData(..) on the znode is called, then Recursive Watch > automically apply the watch on the znode. This maintains the existing Watch > semantic on an individual znode. 
> # A Recursive Watch overrides any watches placed on a descendant znode. > Practically this means the Recursive Watch Watcher callback is the one > receiving the event, and the event is delivered exactly once. > A goal here is to reduce the number of semantic changes. The guarantee of no > intermediate watch event until data is read will be maintained. The only > difference is we will automatically re-add the watch after read. At the same > time we add the convenience of reducing the need to add multiple watches for > sibling znodes and in turn reduce the number of watch messages sent from the > client to the server. > There are some implementation details that need to be hashed out. Initial > thinking is to have the Recursive Watch create per-node watches. This will > cause a lot of watches to be created on the server side. Currently, each > watch is stored as a single bit in a bit set relative to a session - up to 3 > bits per client per znode. If there are 100m znodes with 100k clients, each > watching all nodes, then this strategy will consume approximately 3.75TB of > RAM distributed across all Observers. Seems expensive. > Alternatively, a blacklist of paths to not send Watches for, regardless of Watch > setting, can be set each time a watch event from a Recursive Watch is fired. > The memory utilization is relative to the number of outstanding reads and in the > worst case it's 1/3 * 3.75TB using the parameters given above. > Otherwise, a relaxation of the no-intermediate-watch-event-until-read guarantee > is required. If the server can send watch events regardless of whether one has > already been fired without a corresponding read, then the server can simply > fire watch events without tracking. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
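The 3.75TB figure above follows directly from the stated parameters. A minimal arithmetic sketch (illustrative only; the class and method names are not ZooKeeper code):

```java
// Back-of-the-envelope check of the per-node watch memory estimate from the
// proposal: 100m znodes, 100k clients each watching every node, up to 3 bits
// of watch state per client per znode. Illustrative arithmetic only.
public class WatchMemoryEstimate {
    public static double estimateBytes(long znodes, long clients, long bitsPerClientPerZnode) {
        // total bits across all (client, znode) pairs, converted to bytes
        return (double) znodes * clients * bitsPerClientPerZnode / 8.0;
    }

    public static void main(String[] args) {
        double bytes = estimateBytes(100_000_000L, 100_000L, 3);
        // 1e13 pairs * 3 bits / 8 = 3.75e12 bytes, i.e. 3.75 TB (decimal)
        System.out.printf("%.2f TB%n", bytes / 1e12);
    }
}
```

The blacklist alternative caps this at the number of outstanding reads, which is how the 1/3 worst-case figure arises (one bit tracked instead of three).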
[jira] [Commented] (ZOOKEEPER-2648) Container node never gets deleted if it never had children
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15759293#comment-15759293 ] Jordan Zimmerman commented on ZOOKEEPER-2648: - This is a feature IMO. We discussed this in the original design. > Container node never gets deleted if it never had children > -- > > Key: ZOOKEEPER-2648 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2648 > Project: ZooKeeper > Issue Type: Bug > Components: server >Affects Versions: 3.5.0 >Reporter: Hadriel Kaplan > > If a client creates a Container node, but does not also create a child within > that Container, the Container will never be deleted. This may seem like a bug > in the client for not subsequently creating a child, but we can't assume the > client remains connected, or that the client didn't just change its mind (due > to some recipe being canceled, for example). > The bug is in ContainerManager.getCandidates(), which only considers a node a > candidate if its Cversion > 0. The comments indicate this was done > intentionally, to avoid a race condition whereby a Container created > right before a cleaning period would get cleaned up before the child > could be created. > Instead, I propose that if the Cversion is 0 but the Ctime is more than a > checkIntervalMs old, then it be deleted. In other words, if the Container > node has been around for a whole cleaning round already and no child has been > created since, then go ahead and clean it up. > I can provide a patch if others agree with such a change. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
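The proposed rule can be sketched as a pure predicate. This is a simplified illustration, not the actual `ContainerManager.getCandidates()` code; the method and parameter names are assumptions:

```java
// Sketch of the proposed container-cleanup candidate check: keep the existing
// "has had children and is now empty" rule, and additionally reap containers
// that never had children but have survived a full cleaning interval.
public class ContainerReaper {
    public static boolean isCandidate(int cversion, int numChildren,
                                      long ctimeMs, long nowMs, long checkIntervalMs) {
        boolean everHadChildren = cversion > 0;      // existing rule
        boolean emptyNow = numChildren == 0;
        // Proposed addition: Cversion == 0 but older than one cleaning round,
        // so the "child created right after container" race cannot apply.
        boolean staleAndNeverUsed = cversion == 0 && (nowMs - ctimeMs) > checkIntervalMs;
        return emptyNow && (everHadChildren || staleAndNeverUsed);
    }
}
```

A freshly created empty container is skipped for one round (preserving the race protection the comments describe) and reaped afterwards.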
[jira] [Commented] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15751303#comment-15751303 ] Jordan Zimmerman commented on ZOOKEEPER-2642: - The test failures seem to be a chronic issue in ZK in general and not related to this PR: https://issues.apache.org/jira/browse/ZOOKEEPER-2080?jql=project%20%3D%20ZOOKEEPER%20AND%20text%20~%20%22AssertionFailedError%3A%20waiting%20for%20server%22 > ZOOKEEPER-2014 breaks existing clients for little benefit > - > > Key: ZOOKEEPER-2642 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2642 > Project: ZooKeeper > Issue Type: Bug > Components: c client, java client >Affects Versions: 3.5.2 >Reporter: Jordan Zimmerman >Assignee: Jordan Zimmerman >Priority: Blocker > Attachments: ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch, > ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch > > > ZOOKEEPER-2014 moved the reconfig() methods into a new class, ZooKeeperAdmin. > It appears this was done to document that these methods have access > restrictions. However, this change breaks Apache Curator (and possibly other > clients). Curator APIs will have to be changed and/or special methods need to > be added. A breaking change of this kind should only be done when the benefit > is overwhelming. In this case, the same information can be conveyed with > documentation and possibly a deprecation notice. > Revert the creation of the ZooKeeperAdmin class and move the reconfig() > methods back to the ZooKeeper class with additional documentation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2642: Attachment: ZOOKEEPER-2642.patch Note deprecated APIs in the doc and fix some style issues in ZooKeeper.java > ZOOKEEPER-2014 breaks existing clients for little benefit > - > > Key: ZOOKEEPER-2642 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2642 > Project: ZooKeeper > Issue Type: Bug > Components: c client, java client >Affects Versions: 3.5.2 >Reporter: Jordan Zimmerman >Assignee: Jordan Zimmerman >Priority: Blocker > Attachments: ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch, > ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch > > > ZOOKEEPER-2014 moved the reconfig() methods into a new class, ZooKeeperAdmin. > It appears this was done to document that these methods have access > restrictions. However, this change breaks Apache Curator (and possibly other > clients). Curator APIs will have to be changed and/or special methods need to > be added. A breaking change of this kind should only be done when the benefit > is overwhelming. In this case, the same information can be conveyed with > documentation and possibly a deprecation notice. > Revert the creation of the ZooKeeperAdmin class and move the reconfig() > methods back to the ZooKeeper class with additional documentation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15739945#comment-15739945 ] Jordan Zimmerman commented on ZOOKEEPER-2642: - Are these tests known to be flaky? They work on my machine and a message such as "junit.framework.AssertionFailedError: Threads didn't join" seems suspicious to me. > ZOOKEEPER-2014 breaks existing clients for little benefit > - > > Key: ZOOKEEPER-2642 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2642 > Project: ZooKeeper > Issue Type: Bug > Components: c client, java client >Affects Versions: 3.5.2 >Reporter: Jordan Zimmerman >Assignee: Jordan Zimmerman > Attachments: ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch, > ZOOKEEPER-2642.patch > > > ZOOKEEPER-2014 moved the reconfig() methods into a new class, ZooKeeperAdmin. > It appears this was done to document that these methods have access > restrictions. However, this change breaks Apache Curator (and possibly other > clients). Curator APIs will have to be changed and/or special methods need to > be added. A breaking change of this kind should only be done when the benefit > is overwhelming. In this case, the same information can be conveyed with > documentation and possibly a deprecation notice. > Revert the creation of the ZooKeeperAdmin class and move the reconfig() > methods back to the ZooKeeper class with additional documentation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2642: Attachment: ZOOKEEPER-2642.patch Still trying to make Findbugs happy > ZOOKEEPER-2014 breaks existing clients for little benefit > - > > Key: ZOOKEEPER-2642 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2642 > Project: ZooKeeper > Issue Type: Bug > Components: c client, java client >Affects Versions: 3.5.2 >Reporter: Jordan Zimmerman >Assignee: Jordan Zimmerman > Attachments: ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch, > ZOOKEEPER-2642.patch > > > ZOOKEEPER-2014 moved the reconfig() methods into a new class, ZooKeeperAdmin. > It appears this was done to document that these methods have access > restrictions. However, this change breaks Apache Curator (and possibly other > clients). Curator APIs will have to be changed and/or special methods need to > be added. A breaking change of this kind should only be done when the benefit > is overwhelming. In this case, the same information can be conveyed with > documentation and possibly a deprecation notice. > Revert the creation of the ZooKeeperAdmin class and move the reconfig() > methods back to the ZooKeeper class with additional documentation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2642: Attachment: ZOOKEEPER-2642.patch Fix Javadoc and FindBugs issues > ZOOKEEPER-2014 breaks existing clients for little benefit > - > > Key: ZOOKEEPER-2642 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2642 > Project: ZooKeeper > Issue Type: Bug > Components: c client, java client >Affects Versions: 3.5.2 >Reporter: Jordan Zimmerman >Assignee: Jordan Zimmerman > Attachments: ZOOKEEPER-2642.patch, ZOOKEEPER-2642.patch > > > ZOOKEEPER-2014 moved the reconfig() methods into a new class, ZooKeeperAdmin. > It appears this was done to document that these methods have access > restrictions. However, this change breaks Apache Curator (and possibly other > clients). Curator APIs will have to be changed and/or special methods need to > be added. A breaking change of this kind should only be done when the benefit > is overwhelming. In this case, the same information can be conveyed with > documentation and possibly a deprecation notice. > Revert the creation of the ZooKeeperAdmin class and move the reconfig() > methods back to the ZooKeeper class with additional documentation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2642: Attachment: ZOOKEEPER-2642.patch > ZOOKEEPER-2014 breaks existing clients for little benefit > - > > Key: ZOOKEEPER-2642 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2642 > Project: ZooKeeper > Issue Type: Bug > Components: c client, java client >Affects Versions: 3.5.2 >Reporter: Jordan Zimmerman > Attachments: ZOOKEEPER-2642.patch > > > ZOOKEEPER-2014 moved the reconfig() methods into a new class, ZooKeeperAdmin. > It appears this was done to document that these methods have access > restrictions. However, this change breaks Apache Curator (and possibly other > clients). Curator APIs will have to be changed and/or special methods need to > be added. A breaking change of this kind should only be done when the benefit > is overwhelming. In this case, the same information can be conveyed with > documentation and possibly a deprecation notice. > Revert the creation of the ZooKeeperAdmin class and move the reconfig() > methods back to the ZooKeeper class with additional documentation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2024) Major throughput improvement with mixed workloads
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15738339#comment-15738339 ] Jordan Zimmerman commented on ZOOKEEPER-2024: - [~kfirlevari] Where is the code for ZooNet mentioned in the white paper? I'd love to add this to Apache Curator. > Major throughput improvement with mixed workloads > - > > Key: ZOOKEEPER-2024 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2024 > Project: ZooKeeper > Issue Type: Improvement > Components: quorum, server >Reporter: Kfir Lev-Ari >Assignee: Kfir Lev-Ari > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch, > ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch, > ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch, > ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch, > ZOOKEEPER-2024.patch, ZOOKEEPER-2024.patch > > > The patch is applied to the commit processor, and solves two problems: > 1. Stalling - once the commit processor encounters a local write request, it > stalls local processing of all sessions until it receives a commit of that > request from the leader. > In mixed workloads, this severely hampers performance as it does not allow > read-only sessions to proceed at a faster speed than read-write ones. > 2. Starvation - as long as there are read requests to process, older remote > committed write requests are starved. > This occurs due to a bug fix > (https://issues.apache.org/jira/browse/ZOOKEEPER-1505) that forces processing > of local read requests before handling any committed write. The problem is > only manifested under high local read load. > Our solution solves these two problems. It improves throughput in mixed > workloads (in our tests, by up to 8x), and reduces latency, especially higher > percentiles (i.e., slowest requests). 
> The main idea is to separate sessions that inherently need to stall in order > to enforce order semantics, from ones that do not need to stall. To this end, > we add data structures for buffering and managing pending requests of stalled > sessions; these requests are moved out of the critical path to these data > structures, allowing continued processing of unaffected sessions. > Please see the docs: > 1) https://goo.gl/m1cINJ - includes a detailed description of the new commit > processor algorithm. > 2) The attached patch implements our solution, and a collection of related > unit tests (https://reviews.apache.org/r/25160) > 3) https://goo.gl/W0xDUP - performance results. > (See https://issues.apache.org/jira/browse/ZOOKEEPER-2023 for the > corresponding new system test that produced these performance measurements) > > See also https://issues.apache.org/jira/browse/ZOOKEEPER-1609 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
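The per-session stalling idea can be shown with a toy model. This is a deliberately simplified illustration of the scheduling principle, not the actual `CommitProcessor` code; all names here are invented:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Queue;

// Toy model of the patch's main idea: only a session with a write in flight
// is stalled (its later requests are buffered in order), while requests from
// unaffected sessions keep flowing immediately.
public class CommitModel {
    private final Map<Long, Queue<String>> pending = new HashMap<>();

    // Returns the requests that may be processed right away.
    public List<String> submit(long sessionId, String request, boolean isWrite) {
        List<String> runnable = new ArrayList<>();
        Queue<String> q = pending.get(sessionId);
        if (q != null) {
            q.add(request);                 // session already stalled: buffer, keep order
        } else if (isWrite) {
            pending.put(sessionId, new ArrayDeque<>()); // stall this session only;
            // the write itself waits for the leader's commit
        } else {
            runnable.add(request);          // unaffected read-only session proceeds
        }
        return runnable;
    }

    // The leader committed this session's write: drain its buffered requests.
    public List<String> commit(long sessionId) {
        Queue<String> q = pending.remove(sessionId);
        return q == null ? new ArrayList<>() : new ArrayList<>(q);
    }
}
```

The key property is that `submit` for session 2 is unaffected by session 1's in-flight write, which is exactly the throughput win the patch claims for mixed workloads.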
[jira] [Created] (ZOOKEEPER-2642) ZOOKEEPER-2014 breaks existing clients for little benefit
Jordan Zimmerman created ZOOKEEPER-2642: --- Summary: ZOOKEEPER-2014 breaks existing clients for little benefit Key: ZOOKEEPER-2642 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2642 Project: ZooKeeper Issue Type: Bug Components: c client, java client Affects Versions: 3.5.2 Reporter: Jordan Zimmerman ZOOKEEPER-2014 moved the reconfig() methods into a new class, ZooKeeperAdmin. It appears this was done to document that these methods have access restrictions. However, this change breaks Apache Curator (and possibly other clients). Curator APIs will have to be changed and/or special methods need to be added. A breaking change of this kind should only be done when the benefit is overwhelming. In this case, the same information can be conveyed with documentation and possibly a deprecation notice. Revert the creation of the ZooKeeperAdmin class and move the reconfig() methods back to the ZooKeeper class with additional documentation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
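The non-breaking alternative argued for here can be sketched as a deprecated delegating method. The class shapes and the one-String signature below are simplified stand-ins, not the real ZooKeeper or ZooKeeperAdmin APIs:

```java
// Sketch of the "deprecation notice instead of breaking change" approach:
// keep reconfig() on the old entry point, documented and @Deprecated,
// delegating to the new admin-oriented class so existing callers compile.
public class CompatSketch {
    public static class ZooKeeperAdmin {
        public String reconfig(String joining, String leaving, String members) {
            return "reconfig(" + joining + "," + leaving + "," + members + ")";
        }
    }

    public static class ZooKeeper {
        private final ZooKeeperAdmin admin = new ZooKeeperAdmin();

        /**
         * @deprecated reconfiguration is an administrative operation with
         * access restrictions; prefer the admin entry point.
         */
        @Deprecated
        public String reconfig(String joining, String leaving, String members) {
            return admin.reconfig(joining, leaving, members); // old clients keep working
        }
    }
}
```

Callers see a compiler deprecation warning (the documentation signal the issue asks for) rather than a missing method (the breakage it objects to).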
[jira] [Commented] (ZOOKEEPER-2014) Only admin should be allowed to reconfig a cluster
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15728484#comment-15728484 ] Jordan Zimmerman commented on ZOOKEEPER-2014: - I realize I'm very late to this issue but I truly don't understand the benefit of this. This change has completely broken Curator and I'm now struggling to figure out how to fix it. How does breaking all existing clients help ZooKeeper usage? > Only admin should be allowed to reconfig a cluster > -- > > Key: ZOOKEEPER-2014 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2014 > Project: ZooKeeper > Issue Type: Bug > Components: server >Affects Versions: 3.5.0 >Reporter: Raul Gutierrez Segales >Assignee: Michael Han >Priority: Blocker > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-2014.patch, ZOOKEEPER-2014.patch, > ZOOKEEPER-2014.patch, ZOOKEEPER-2014.patch, ZOOKEEPER-2014.patch, > ZOOKEEPER-2014.patch, ZOOKEEPER-2014.patch, ZOOKEEPER-2014.patch, > ZOOKEEPER-2014.patch, ZOOKEEPER-2014.patch, ZOOKEEPER-2014.patch, > ZOOKEEPER-2014.patch, ZOOKEEPER-2014.patch, ZOOKEEPER-2014.patch, > ZOOKEEPER-2014.patch > > > ZOOKEEPER-107 introduces reconfiguration support via the reconfig() call. We > should, at the very least, ensure that only the Admin can reconfigure a > cluster. Perhaps restricting access to /zookeeper/config as well, though this > is debatable. Surely one could ensure Admin-only access via an ACL, but that > would leave everyone who doesn't use ACLs unprotected. We could also force a > default ACL to make it a bit more consistent (maybe). > Finally, making reconfig() only available to Admins means they have to run > with zookeeper.DigestAuthenticationProvider.superDigest (which I am not sure > everyone does, or how it would work with other authentication providers). > Review board https://reviews.apache.org/r/51546/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15674124#comment-15674124 ] Jordan Zimmerman commented on ZOOKEEPER-1525: - Wow - I just checked. It may not be worth it to merge into 3.5.3. So, I'll change it to 3.6.0. Hopefully we don't have to wait a long time for 3.6.0? > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Jordan Zimmerman > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15670568#comment-15670568 ] Jordan Zimmerman commented on ZOOKEEPER-1525: - [~fpj] What branch is 3.5? How can I duplicate this? > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Jordan Zimmerman > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668120#comment-15668120 ] Jordan Zimmerman commented on ZOOKEEPER-1525: - Looks to be flakey tests - they pass for me. > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Jordan Zimmerman > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-1525: Attachment: ZOOKEEPER-1525.patch Move ServerAuthenticationProvider args into container classes so that this can be upgraded more easily in the future without resorting to more wrappers. > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Jordan Zimmerman > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
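The constructor-selection rule the issue describes (prefer a constructor taking the server, fall back to the no-arg constructor) can be sketched with reflection. The types below are illustrative stand-ins, not ZooKeeper's real `ProviderRegistry` or `AuthenticationProvider` classes:

```java
import java.lang.reflect.Constructor;

// Sketch of the initialize() rule from the issue: try the constructor that
// accepts the server object; if the plugin predates it, use the no-arg one.
public class ProviderLoader {
    public static class ZooKeeperServer {}   // stand-in for the real server
    public interface AuthProvider {}

    public static class NoArgProvider implements AuthProvider {
        public NoArgProvider() {}
    }

    public static class ServerAwareProvider implements AuthProvider {
        public final ZooKeeperServer server;
        public ServerAwareProvider(ZooKeeperServer server) { this.server = server; }
    }

    public static AuthProvider instantiate(Class<? extends AuthProvider> cls,
                                           ZooKeeperServer zks) throws Exception {
        try {
            // Preferred: the constructor that takes the server instance.
            Constructor<? extends AuthProvider> c = cls.getConstructor(ZooKeeperServer.class);
            return c.newInstance(zks);
        } catch (NoSuchMethodException e) {
            // Fallback: the historical no-arg constructor.
            return cls.getConstructor().newInstance();
        }
    }
}
```

Old plugins keep working unchanged, while new ones opt in simply by declaring the richer constructor, which is the backward-compatibility property the patch relies on.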
[jira] [Commented] (ZOOKEEPER-2384) Support atomic increment / decrement of znode value
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15639919#comment-15639919 ] Jordan Zimmerman commented on ZOOKEEPER-2384: - FYI - Curator has a recipe to do this (http://curator.apache.org/curator-recipes/distributed-atomic-long.html) which is pretty complicated. Having native support would be nice. But doing it would require that ZK have non-opaque data for the first time. It might be a can of worms if not done correctly. > Support atomic increment / decrement of znode value > --- > > Key: ZOOKEEPER-2384 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2384 > Project: ZooKeeper > Issue Type: Improvement >Reporter: Ted Yu > Labels: atomic > > The use case is to store a reference count (integer type) in a znode. > It is desirable to provide support for atomic increment / decrement of the > znode value. > Suggestion from Flavio: > {quote} > you can read the znode, keep the version of the znode, update the value, > write back conditionally. The condition for the setData operation to succeed > is that the version is the same that it read > {quote} > While the above is feasible, the developer has to implement the retry logic > themselves. It is not easy to combine increment / decrement with other > operations using multi. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
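The read-modify-conditional-write loop Flavio describes can be simulated without a live ensemble. `FakeZnode` below is an in-memory stand-in for a znode (value plus version), not the ZooKeeper client API; the retry loop is the part client code or a Curator-style recipe has to supply today:

```java
// Sketch of the version-conditioned increment: read value+version, compute,
// then write back only if the version is unchanged; on a mismatch (like a
// BadVersionException from setData), re-read and retry.
public class AtomicIncrementSketch {
    public static class FakeZnode {
        private int value;
        private int version;

        public synchronized int[] getData() { return new int[]{value, version}; }

        // Mirrors setData(path, data, expectedVersion): fails on version mismatch.
        public synchronized boolean setData(int newValue, int expectedVersion) {
            if (expectedVersion != version) return false;
            value = newValue;
            version++;
            return true;
        }
    }

    public static int increment(FakeZnode znode, int delta) {
        while (true) {
            int[] data = znode.getData();        // data[0] = value, data[1] = version
            int next = data[0] + delta;
            if (znode.setData(next, data[1])) {  // conditional write succeeded
                return next;
            }
            // another writer won the race; loop to re-read and retry
        }
    }
}
```

Each successful write bumps the version, so a concurrent increment that read the stale version fails its conditional write and retries, which is what makes the update atomic despite being built from two operations.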
[jira] [Commented] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15637340#comment-15637340 ] Jordan Zimmerman commented on ZOOKEEPER-1525: - As has been discussed elsewhere, the FindBugs issues are not related to this PR. So, AFAICT, this passes. > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Jordan Zimmerman > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-1525: Attachment: ZOOKEEPER-1525.patch getServerProvider and getProvider implementations were reversed > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Jordan Zimmerman > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15630599#comment-15630599 ] Jordan Zimmerman commented on ZOOKEEPER-1525: - I'll work on these issues. > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Jordan Zimmerman > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2608) Create CLI option for TTL ephemerals
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2608: Attachment: ZOOKEEPER-2608-3.patch Fixed spacing nits > Create CLI option for TTL ephemerals > > > Key: ZOOKEEPER-2608 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2608 > Project: ZooKeeper > Issue Type: Sub-task > Components: c client, java client, jute, server >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2608-2.patch, ZOOKEEPER-2608-3.patch, > ZOOKEEPER-2608.patch > > > Need to update CreateCommand to have the TTL node option -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2608) Create CLI option for TTL ephemerals
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15560292#comment-15560292 ] Jordan Zimmerman commented on ZOOKEEPER-2608: - There are no tests for this code path AFAIK > Create CLI option for TTL ephemerals > > > Key: ZOOKEEPER-2608 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2608 > Project: ZooKeeper > Issue Type: Sub-task > Components: c client, java client, jute, server >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2608-2.patch, ZOOKEEPER-2608.patch > > > Need to update CreateCommand to have the TTL node option -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15560113#comment-15560113 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - Looks like this got merged. Should we close or wait for the CLI change? > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169-6.patch, > ZOOKEEPER-2169-7.patch, ZOOKEEPER-2169-8.patch, ZOOKEEPER-2169-9.patch, > ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
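The refresh-or-expire lifecycle proposed in the ticket can be modelled in a few lines. This is a toy illustration of the bookkeeping only; the names and structure are assumptions, not the server's actual implementation:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Toy model of the TTL-node idea: a node survives as long as some client
// "refreshes" it within its TTL window, with no session attached.
public class TtlNodeModel {
    static final class Node {
        final long ttlMs;
        long lastRefreshMs;
        Node(long ttlMs, long nowMs) { this.ttlMs = ttlMs; this.lastRefreshMs = nowMs; }
    }

    private final Map<String, Node> nodes = new HashMap<>();

    public void create(String path, long ttlMs, long nowMs) {
        nodes.put(path, new Node(ttlMs, nowMs));
    }

    // A refresh operation resets the node's expiry window.
    public boolean refresh(String path, long nowMs) {
        Node n = nodes.get(path);
        if (n == null) return false;
        n.lastRefreshMs = nowMs;
        return true;
    }

    // Called periodically by the "server": drop nodes whose TTL has lapsed.
    public void expireStale(long nowMs) {
        for (Iterator<Map.Entry<String, Node>> it = nodes.entrySet().iterator(); it.hasNext(); ) {
            Node n = it.next().getValue();
            if (nowMs - n.lastRefreshMs > n.ttlMs) it.remove();
        }
    }

    public boolean exists(String path) { return nodes.containsKey(path); }

    public static void main(String[] args) {
        TtlNodeModel model = new TtlNodeModel();
        model.create("/job/lock", 100, 0);
        model.refresh("/job/lock", 90);   // refreshed inside the window
        model.expireStale(150);
        System.out.println(model.exists("/job/lock")); // still alive: 150 - 90 <= 100
        model.expireStale(250);
        System.out.println(model.exists("/job/lock")); // expired: 250 - 90 > 100
    }
}
```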
[jira] [Updated] (ZOOKEEPER-2608) Create CLI option for TTL ephemerals
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2608: Attachment: ZOOKEEPER-2608-2.patch Resubmitting now that ZOOKEEPER-2169 has been merged > Create CLI option for TTL ephemerals > > > Key: ZOOKEEPER-2608 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2608 > Project: ZooKeeper > Issue Type: Sub-task > Components: c client, java client, jute, server >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2608-2.patch, ZOOKEEPER-2608.patch > > > Need to update CreateCommand to have the TTL node option -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2608) Create CLI option for TTL ephemerals
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15559826#comment-15559826 ] Jordan Zimmerman commented on ZOOKEEPER-2608: - ZOOKEEPER-2169 must be merged before this will pass > Create CLI option for TTL ephemerals > > > Key: ZOOKEEPER-2608 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2608 > Project: ZooKeeper > Issue Type: Sub-task > Components: c client, java client, jute, server >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2608.patch > > > Need to update CreateCommand to have the TTL node option -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169: Attachment: ZOOKEEPER-2169-9.patch New patch off of latest master > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169-6.patch, > ZOOKEEPER-2169-7.patch, ZOOKEEPER-2169-8.patch, ZOOKEEPER-2169-9.patch, > ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2608) Create CLI option for TTL ephemerals
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2608: Attachment: ZOOKEEPER-2608.patch > Create CLI option for TTL ephemerals > > > Key: ZOOKEEPER-2608 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2608 > Project: ZooKeeper > Issue Type: Sub-task > Components: c client, java client, jute, server >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2608.patch > > > Need to update CreateCommand to have the TTL node option -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169: Attachment: ZOOKEEPER-2169-8.patch Removed bad import per [~fournc] > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169-6.patch, > ZOOKEEPER-2169-7.patch, ZOOKEEPER-2169-8.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2609) Add TTL Node APIs to C client
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2609: Assignee: (was: Jordan Zimmerman) > Add TTL Node APIs to C client > - > > Key: ZOOKEEPER-2609 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2609 > Project: ZooKeeper > Issue Type: Sub-task > Components: c client, java client, jute, server >Reporter: Jordan Zimmerman > Fix For: 3.6.0 > > > Need to update CreateCommand to have the TTL node option -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2609) Add TTL Node APIs to C client
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2609: Description: Need to update the C lib to have the TTL node option (was: Need to update CreateCommand to have the TTL node option) > Add TTL Node APIs to C client > - > > Key: ZOOKEEPER-2609 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2609 > Project: ZooKeeper > Issue Type: Sub-task > Components: c client, java client, jute, server >Reporter: Jordan Zimmerman > Fix For: 3.6.0 > > > Need to update the C lib to have the TTL node option -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (ZOOKEEPER-2609) Add TTL Node APIs to C client
Jordan Zimmerman created ZOOKEEPER-2609: --- Summary: Add TTL Node APIs to C client Key: ZOOKEEPER-2609 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2609 Project: ZooKeeper Issue Type: Sub-task Components: c client, java client, jute, server Reporter: Jordan Zimmerman Assignee: Jordan Zimmerman Fix For: 3.6.0 Need to update CreateCommand to have the TTL node option -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Issue Comment Deleted] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169: Comment: was deleted (was: I'll do it. You have to do: git diff --no-prefix master) > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169-6.patch, > ZOOKEEPER-2169-7.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1522#comment-1522 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - I'll do it. You have to do: git diff --no-prefix master > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169-6.patch, > ZOOKEEPER-2169-7.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169: Attachment: ZOOKEEPER-2169-7.patch Latest patch (matches PR on Github) > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169-6.patch, > ZOOKEEPER-2169-7.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1523#comment-1523 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - I'll do it. You have to do: git diff --no-prefix master > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169-6.patch, > ZOOKEEPER-2169-7.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1472#comment-1472 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - So, do I need to post this patch here? > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman reassigned ZOOKEEPER-1525: --- Assignee: Jordan Zimmerman (was: Tim Crowder) > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Jordan Zimmerman > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15551934#comment-15551934 ] Jordan Zimmerman commented on ZOOKEEPER-1525: - Note: ZOOKEEPER-2143 merged into this PR as it is a natural fit > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Jordan Zimmerman > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (ZOOKEEPER-2143) Pass the operation and path to the AuthenticationProvider
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman resolved ZOOKEEPER-2143. - Resolution: Implemented Note: this has been merged into ZOOKEEPER-1525 > Pass the operation and path to the AuthenticationProvider > - > > Key: ZOOKEEPER-2143 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2143 > Project: ZooKeeper > Issue Type: Sub-task >Reporter: Karol Dudzinski > > Currently, the AuthenticationProvider only gets passed the id of the client > and the acl expression. If one wishes to perform auth checks based on the > action or path being acted on, that needs to be included in the acl > expression. This results in lots of potentially individual acl's being > created which led us to find ZOOKEEPER-2141. It would be great if both the > action and path were passed to the AuthenticationProvider. > I understand that this needs to be completely backwards compatible. One > solution that comes to mind is to create an interface which extends > AuthenticationProvider but adds a new matches which takes the additional > parameters. Internally, ZK would use the new interface everywhere. To > preserve compatibility, ProviderRegistry could check for classes implementing > the original AuthenticationProvdier interface and wrap them to allow the new > interface to be used everywhere internally. Any thoughts on this approach? > Happy to provide a patch to demonstrate what I mean. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
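The wrap-and-extend compatibility scheme proposed in the description above can be sketched as follows. The interface and class names are hypothetical, chosen only to illustrate the idea of using a richer interface internally while adapting legacy providers:

```java
// Legacy contract: sees only the client id and the ACL expression.
interface AuthenticationProvider {
    boolean matches(String id, String aclExpr);
}

// Proposed richer contract: also sees the operation and the path acted on.
interface PathAwareAuthenticationProvider extends AuthenticationProvider {
    boolean matches(String id, String aclExpr, String operation, String path);
}

// Adapter so legacy providers can be used wherever the new interface is expected.
final class LegacyAdapter implements PathAwareAuthenticationProvider {
    private final AuthenticationProvider delegate;
    LegacyAdapter(AuthenticationProvider delegate) { this.delegate = delegate; }

    public boolean matches(String id, String aclExpr) {
        return delegate.matches(id, aclExpr);
    }

    // A legacy provider simply ignores the operation and path.
    public boolean matches(String id, String aclExpr, String operation, String path) {
        return delegate.matches(id, aclExpr);
    }
}

public class ProviderWrapping {
    // The registry would hand out only the richer interface internally.
    static PathAwareAuthenticationProvider adapt(AuthenticationProvider p) {
        return (p instanceof PathAwareAuthenticationProvider)
            ? (PathAwareAuthenticationProvider) p
            : new LegacyAdapter(p);
    }

    public static void main(String[] args) {
        AuthenticationProvider legacy = (id, acl) -> id.equals(acl);
        PathAwareAuthenticationProvider wrapped = adapt(legacy);
        System.out.println(wrapped.matches("alice", "alice", "create", "/secrets"));
    }
}
```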
[jira] [Updated] (ZOOKEEPER-2143) Pass the operation and path to the AuthenticationProvider
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2143: Issue Type: Sub-task (was: Improvement) Parent: ZOOKEEPER-1525 > Pass the operation and path to the AuthenticationProvider > - > > Key: ZOOKEEPER-2143 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2143 > Project: ZooKeeper > Issue Type: Sub-task >Reporter: Karol Dudzinski > > Currently, the AuthenticationProvider only gets passed the id of the client > and the acl expression. If one wishes to perform auth checks based on the > action or path being acted on, that needs to be included in the acl > expression. This results in lots of potentially individual acl's being > created which led us to find ZOOKEEPER-2141. It would be great if both the > action and path were passed to the AuthenticationProvider. > I understand that this needs to be completely backwards compatible. One > solution that comes to mind is to create an interface which extends > AuthenticationProvider but adds a new matches which takes the additional > parameters. Internally, ZK would use the new interface everywhere. To > preserve compatibility, ProviderRegistry could check for classes implementing > the original AuthenticationProvdier interface and wrap them to allow the new > interface to be used everywhere internally. Any thoughts on this approach? > Happy to provide a patch to demonstrate what I mean. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545855#comment-15545855 ] Jordan Zimmerman commented on ZOOKEEPER-1525: - My PR is essentially the same as the OP's patch but a bit more backward compatible. > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Tim Crowder > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-1525) Plumb ZooKeeperServer object into auth plugins
[ https://issues.apache.org/jira/browse/ZOOKEEPER-1525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15545521#comment-15545521 ] Jordan Zimmerman commented on ZOOKEEPER-1525: - This patch no longer applies for me. Is this still viable? I need this functionality. If this patch is no longer maintained, I'm happy to update it. > Plumb ZooKeeperServer object into auth plugins > -- > > Key: ZOOKEEPER-1525 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1525 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.5.0 >Reporter: Warren Turkal >Assignee: Tim Crowder > Fix For: 3.5.3, 3.6.0 > > Attachments: ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch, > ZOOKEEPER-1525.patch, ZOOKEEPER-1525.patch > > > I want to plumb the ZooKeeperServer object into the auth plugins so that I > can store authentication data in zookeeper itself. With access to the > ZooKeeperServer object, I also have access to the ZKDatabase and can look up > entries in the local copy of the zookeeper data. > In order to implement this, I make sure that a ZooKeeperServer instance is > passed in to the ProviderRegistry.initialize() method. Then initialize() will > try to find a constructor for the AuthenticationProvider that takes a > ZooKeeperServer instance. If the constructor is found, it will be used. > Otherwise, initialize() will look for a constructor that takes no arguments > and use that instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15540640#comment-15540640 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - NOTE: test failures seem to have nothing to do with this PR > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15540042#comment-15540042 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - A merge, a merge. My kingdom for a merge. > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2355) Ephemeral node is never deleted if follower fails while reading the proposal packet
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470968#comment-15470968 ] Jordan Zimmerman commented on ZOOKEEPER-2355:
-
FYI - we're on 3.5.x, so that's needed as well.

> Ephemeral node is never deleted if follower fails while reading the proposal packet
> -----------------------------------------------------------------------------------
>
> Key: ZOOKEEPER-2355
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2355
> Project: ZooKeeper
> Issue Type: Bug
> Components: quorum, server
> Reporter: Arshad Mohammad
> Assignee: Arshad Mohammad
> Priority: Critical
> Fix For: 3.4.10, 3.5.3
> Attachments: ZOOKEEPER-2355-01.patch, ZOOKEEPER-2355-02.patch, ZOOKEEPER-2355-03.patch, ZOOKEEPER-2355-04.patch
>
> A ZooKeeper ephemeral node is never deleted if a follower fails while reading the proposal packet.
> The scenario is as follows:
> # Configure a three-node ZooKeeper cluster; let's say the nodes are A, B, and C. Start all; assume A is the leader, B and C are followers.
> # Connect to any of the servers and create ephemeral node /e1.
> # Close the session; ephemeral node /e1 will go for deletion.
> # While it is receiving the delete proposal, make Follower B fail with a {{SocketTimeoutException}}. This is needed to reproduce the scenario; in a production environment it happens because of a network fault.
> # Remove the fault and check that the faulted follower is reconnected to the quorum.
> # Connect to any of the servers and create the same ephemeral node /e1; creation succeeds.
> # Close the session; ephemeral node /e1 will go for deletion.
> # {color:red}/e1 is not deleted from the faulted Follower B. It should have been deleted, as it was created again with another session.{color}
> # {color:green}/e1 is deleted from Leader A and the other Follower C.{color}
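The divergence in the steps above can be modeled with a toy replica: a follower that misses one delete proposal keeps a stale ephemeral node forever, because the later create of the same path is a no-op on that follower, so the new session never owns the node there and the second session close deletes nothing. All names here are hypothetical; this is a simplification, not ZooKeeper's replication code:

```python
class Replica:
    """Toy replica tracking which session owns each ephemeral path."""

    def __init__(self):
        self.owner = {}        # path -> owning session id
        self.drop_next = False # simulate a SocketTimeoutException on receive

    def create_ephemeral(self, path, session):
        if self.drop_next:
            self.drop_next = False
            return  # proposal lost: this replica never applies it
        if path in self.owner:
            return  # path already exists here: create is a no-op
        self.owner[path] = session

    def close_session(self, session):
        if self.drop_next:
            self.drop_next = False
            return  # the missed delete proposal
        for p in [p for p, s in self.owner.items() if s == session]:
            del self.owner[p]
```

Running the reported scenario against three such replicas leaves /e1 present only on the replica that dropped the first delete, mirroring the red/green outcome in the bug report.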
[jira] [Commented] (ZOOKEEPER-2355) Ephemeral node is never deleted if follower fails while reading the proposal packet
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15470796#comment-15470796 ] Jordan Zimmerman commented on ZOOKEEPER-2355:
-
This is a very serious bug for us at Elasticsearch. Is there any way to get an emergency release out for this?
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15469440#comment-15469440 ] Jordan Zimmerman commented on ZOOKEEPER-2169:
-
No. I mask out the special ephemeral value from the Stat. If I didn't, it could break existing apps.
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15461322#comment-15461322 ] Jordan Zimmerman commented on ZOOKEEPER-2169:
-
The more I think about it, the more I think this should be a separate issue. There are implications for watchers, etc. Do watchers fire when a ZNode is touched? Do we need a new Watcher type, etc.?
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15461315#comment-15461315 ] Jordan Zimmerman commented on ZOOKEEPER-2169:
-
I'm happy to add it - should I just do it? Do we need a vote? Separate issue?
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15461306#comment-15461306 ] Jordan Zimmerman commented on ZOOKEEPER-2169:
-
It could be a can of worms, but other APIs are really needed. There's currently no way to determine if a ZNode is a Container and/or TTL node, get the TTL of a TTL node, etc.
[jira] [Comment Edited] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15461296#comment-15461296 ] Jordan Zimmerman edited comment on ZOOKEEPER-2169 at 9/3/16 4:01 PM:
-
I could add a touch API if you want. It would be a nice feature. However, it does add a new API name that needs to be supported. I can't think of an existing API that could be overloaded to do it.

was (Author: randgalt):
I could add a touch API if you want. It would be a nice feature. However, it does add a new API name that needs to be supported.
[jira] [Commented] (ZOOKEEPER-2355) Ephemeral node is never deleted if follower fails while reading the proposal packet
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15458720#comment-15458720 ] Jordan Zimmerman commented on ZOOKEEPER-2355:
-
We've now experienced this at Elasticsearch. This is a Critical issue that should be released sooner rather than later.
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15457252#comment-15457252 ] Jordan Zimmerman commented on ZOOKEEPER-2169:
-
We still need C APIs for Container Nodes. I haven't written C/C++ in 20 years. Someone else will need to do it.
[jira] [Updated] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169:
Attachment: (was: ZOOKEEPER-2169-6.patch)
[jira] [Issue Comment Deleted] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169:
Comment: was deleted (was: Fixed misnamed CreateTTLTest.java)
[jira] [Updated] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169:
Attachment: ZOOKEEPER-2169-6.patch
Fixed misnamed CreateTTLTest.java
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15444142#comment-15444142 ] Jordan Zimmerman commented on ZOOKEEPER-2169:
-
I don't think it's necessary. ContainerManager#checkContainers() either works or it doesn't. What would you like tested?
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15421127#comment-15421127 ] Jordan Zimmerman commented on ZOOKEEPER-2169:
-
*ping*
[jira] [Comment Edited] (ZOOKEEPER-2476) Not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380395#comment-15380395 ] Jordan Zimmerman edited comment on ZOOKEEPER-2476 at 7/16/16 1:04 AM:
--
But Observers _can_ be upgraded to Participants, right? What's the difference here? Why would an Observer ever reject the proposal?

was (Author: randgalt):
But Observers _can_ be upgraded to the Participants right? What's the difference here. Why would an Observer ever reject the proposal?

> Not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster
> --------------------------------------------------------------------------------------------------------
>
> Key: ZOOKEEPER-2476
> URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2476
> Project: ZooKeeper
> Issue Type: Bug
> Components: quorum, server
> Affects Versions: 3.5.1
> Reporter: Jordan Zimmerman
> Assignee: Alexander Shraer
> Priority: Critical
> Attachments: ZOOKEEPER-2476.patch
>
> Contrary to the documentation, it is not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster. KeeperException.NewConfigNoQuorum is thrown instead.
> PrepRequestProcessor should recognize this special case and let it pass. A test will be enclosed shortly. I'll work on a fix as well, but I imagine that [~shralex] will want to look at it.
[jira] [Commented] (ZOOKEEPER-2476) Not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380395#comment-15380395 ] Jordan Zimmerman commented on ZOOKEEPER-2476:
-
But Observers _can_ be upgraded to the Participants right? What's the difference here. Why would an Observer ever reject the proposal?
[jira] [Commented] (ZOOKEEPER-2476) Not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380381#comment-15380381 ] Jordan Zimmerman commented on ZOOKEEPER-2476:
-
Seems harsh to force clients to do this. Why can't ZK do this for me? Also, what's the difference between a "non-voting follower" and an Observer? I thought that was what an Observer was.
[jira] [Commented] (ZOOKEEPER-2476) Not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15380371#comment-15380371 ] Jordan Zimmerman commented on ZOOKEEPER-2476:
-
In this case we have 1 participant and 1 observer. I can't think of a reason why the upgrade shouldn't be allowed. The single existing participant can manage the entire "quorum" on its own, and the observer being upgraded cannot possibly be out of sync.
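The quorum arithmetic behind this comment — with a single voting participant, a majority is one vote, so the lone participant can commit the reconfig by itself, and observers never count toward the majority — can be checked with a tiny helper. This is a sketch of majority-quorum math only, not ZooKeeper's actual QuorumMaj implementation:

```python
def majority_quorum(n_participants):
    """Smallest number of votes that forms a majority among voting members.
    Observers are excluded: they do not vote, so they never appear in n."""
    return n_participants // 2 + 1

def can_commit(acks, n_participants):
    """True if the acked votes reach a majority of the voting members."""
    return acks >= majority_quorum(n_participants)
```

In the 1-participant + 1-observer case, `majority_quorum(1)` is 1, so the single participant's own ack suffices to commit the new configuration.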
[jira] [Updated] (ZOOKEEPER-2476) Not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2476: Attachment: ZOOKEEPER-2476.patch Here is a test that shows the problem > Not possible to upgrade via reconfig a Participant+Observer cluster to a > Participant+Participant cluster > > > Key: ZOOKEEPER-2476 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2476 > Project: ZooKeeper > Issue Type: Bug > Components: quorum, server >Affects Versions: 3.5.1 >Reporter: Jordan Zimmerman >Assignee: Alexander Shraer >Priority: Critical > Attachments: ZOOKEEPER-2476.patch > > > Contrary to the documentation, it is not possible to upgrade via reconfig a > Participant+Observer cluster to a Participant+Participant cluster. > KeeperException.NewConfigNoQuorum is thrown instead. > PrepRequestProcessor should recognize this special case and let it pass. Test > will be enclosed shortly. I'll work on a fix as well, but I imagine that > [~shralex] will want to look at it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (ZOOKEEPER-2476) Not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster
Jordan Zimmerman created ZOOKEEPER-2476: --- Summary: Not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster Key: ZOOKEEPER-2476 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2476 Project: ZooKeeper Issue Type: Bug Components: quorum, server Affects Versions: 3.5.1 Reporter: Jordan Zimmerman Assignee: Alexander Shraer Priority: Critical Contrary to the documentation, it is not possible to upgrade via reconfig a Participant+Observer cluster to a Participant+Participant cluster. KeeperException.NewConfigNoQuorum is thrown instead. PrepRequestProcessor should recognize this special case and let it pass. Test will be enclosed shortly. I'll work on a fix as well, but I imagine that [~shralex] will want to look at it. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (ZOOKEEPER-2368) Client watches are not disconnected on close
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373274#comment-15373274 ] Jordan Zimmerman edited comment on ZOOKEEPER-2368 at 7/12/16 5:13 PM: -- When I get a chance I can run Curator's tests on this. Or maybe Timothy can do that. For Curator, it already handles shutdown internally for all of its recipes (assuming correct usage). My only concern is that the Disconnect event would occur out-of-band from the ZooKeeper closure (i.e. a different thread at a different point in time). was (Author: randgalt): When I get a change I can run Curator's tests on this. Or maybe Timothy can do that. For Curator, it already handles shutdown internally for all of its recipes (assuming correct usage). My only concern is that the Disconnect event would occur out-of-band from the ZooKeeper closure (i.e. a different thread at a different point in time). > Client watches are not disconnected on close > > > Key: ZOOKEEPER-2368 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2368 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.4.0, 3.5.0 >Reporter: Timothy Ward > Fix For: 3.5.2 > > Attachments: ZOOKEEPER-2368.patch > > > If I have a ZooKeeper client connected to an ensemble then obviously I can > register watches. > If the client is disconnected (for example by a failing ensemble member) then > I get a disconnection event for all of my watches. If, on the other hand, my > client is closed then I *do not* get a disconnection event. This asymmetry > makes it really hard to clear up properly when using the asynchronous API, as > there is no way to "fail" data reads/updates when the client is closed. > I believe that the correct behaviour should be for all watchers to receive a > disconnection event when the client is closed. 
The watchers can then respond > as appropriate, and can differentiate between a "server disconnect" and a > "client disconnect" by checking the ZooKeeper#getState() method. > This would not be a breaking behaviour change as Watchers are already > required to handle disconnection events. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2368) Client watches are not disconnected on close
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15373274#comment-15373274 ] Jordan Zimmerman commented on ZOOKEEPER-2368: - When I get a chance I can run Curator's tests on this. Or maybe Timothy can do that. For Curator, it already handles shutdown internally for all of its recipes (assuming correct usage). My only concern is that the Disconnect event would occur out-of-band from the ZooKeeper closure (i.e. a different thread at a different point in time). > Client watches are not disconnected on close > > > Key: ZOOKEEPER-2368 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2368 > Project: ZooKeeper > Issue Type: Improvement >Affects Versions: 3.4.0, 3.5.0 >Reporter: Timothy Ward > Fix For: 3.5.2 > > Attachments: ZOOKEEPER-2368.patch > > > If I have a ZooKeeper client connected to an ensemble then obviously I can > register watches. > If the client is disconnected (for example by a failing ensemble member) then > I get a disconnection event for all of my watches. If, on the other hand, my > client is closed then I *do not* get a disconnection event. This asymmetry > makes it really hard to clear up properly when using the asynchronous API, as > there is no way to "fail" data reads/updates when the client is closed. > I believe that the correct behaviour should be for all watchers to receive a > disconnection event when the client is closed. The watchers can then respond > as appropriate, and can differentiate between a "server disconnect" and a > "client disconnect" by checking the ZooKeeper#getState() method. > This would not be a breaking behaviour change as Watchers are already > required to handle disconnection events. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
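The issue description proposes that watchers distinguish a server disconnect from a client close by consulting ZooKeeper#getState(). A sketch of that dispatch logic follows; the enums below are local stand-ins that mirror (but are not) the real org.apache.zookeeper types, so the sketch runs standalone:

```java
// Sketch of how a Watcher could tell a server-side drop from a client-side
// close, per the ZOOKEEPER-2368 proposal. Both cases deliver the same
// Disconnected event; the client handle's state disambiguates the cause.
// KeeperState and ZkState here are simplified stand-ins for the real
// org.apache.zookeeper.Watcher.Event.KeeperState and ZooKeeper.States.
public class WatcherCloseSketch {
    enum KeeperState { SyncConnected, Disconnected }
    enum ZkState { CONNECTED, CONNECTING, CLOSED } // CLOSED only after close()

    static String classify(KeeperState event, ZkState clientState) {
        if (event != KeeperState.Disconnected) {
            return "not a disconnect";
        }
        // Mirrors "check ZooKeeper#getState()" from the proposal above.
        return clientState == ZkState.CLOSED ? "client disconnect"
                                             : "server disconnect";
    }

    public static void main(String[] args) {
        System.out.println(classify(KeeperState.Disconnected, ZkState.CLOSED));
        System.out.println(classify(KeeperState.Disconnected, ZkState.CONNECTING));
    }
}
```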
[jira] [Commented] (ZOOKEEPER-2464) NullPointerException on ContainerManager
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15361836#comment-15361836 ] Jordan Zimmerman commented on ZOOKEEPER-2464: - 1 line change and no test case was provided - so I didn't add a test > NullPointerException on ContainerManager > > > Key: ZOOKEEPER-2464 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2464 > Project: ZooKeeper > Issue Type: Bug > Components: server >Affects Versions: 3.5.1 >Reporter: Stefano Salmaso >Assignee: Jordan Zimmerman > Attachments: ZOOKEEPER-2464.patch > > > I would like to expose you to a problem that we are experiencing. > We are using a cluster of 7 zookeeper and we use them to implement a > distributed lock using Curator > (http://curator.apache.org/curator-recipes/shared-reentrant-lock.html) > So .. we tried to play with the servers to see if everything worked properly > and we stopped and start servers to see that the system worked well > (like stop 03, stop 05, stop 06, start 05, start 06, start 03) > We saw a strange behavior. > The number of znodes grew up without stopping (normally we had 4000 or 5000, > we got to 60,000 and then we stopped our application) > In zookeeeper logs I saw this (on leader only, one every minute) > 2016-07-04 14:53:50,302 [myid:7] - ERROR > [ContainerManagerTask:ContainerManager$1@84] - Error checking containers > java.lang.NullPointerException >at > org.apache.zookeeper.server.ContainerManager.getCandidates(ContainerManager.java:151) >at > org.apache.zookeeper.server.ContainerManager.checkContainers(ContainerManager.java:111) >at > org.apache.zookeeper.server.ContainerManager$1.run(ContainerManager.java:78) >at java.util.TimerThread.mainLoop(Timer.java:555) >at java.util.TimerThread.run(Timer.java:505) > We have not yet deleted the data ... so the problem can be reproduced on our > servers -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2464) NullPointerException on ContainerManager
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2464: Attachment: ZOOKEEPER-2464.patch node.getChildren() can legally return null > NullPointerException on ContainerManager > > > Key: ZOOKEEPER-2464 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2464 > Project: ZooKeeper > Issue Type: Bug > Components: server >Affects Versions: 3.5.1 >Reporter: Stefano Salmaso >Assignee: Jordan Zimmerman > Attachments: ZOOKEEPER-2464.patch > > > I would like to expose you to a problem that we are experiencing. > We are using a cluster of 7 zookeeper and we use them to implement a > distributed lock using Curator > (http://curator.apache.org/curator-recipes/shared-reentrant-lock.html) > So .. we tried to play with the servers to see if everything worked properly > and we stopped and start servers to see that the system worked well > (like stop 03, stop 05, stop 06, start 05, start 06, start 03) > We saw a strange behavior. > The number of znodes grew up without stopping (normally we had 4000 or 5000, > we got to 60,000 and then we stopped our application) > In zookeeeper logs I saw this (on leader only, one every minute) > 2016-07-04 14:53:50,302 [myid:7] - ERROR > [ContainerManagerTask:ContainerManager$1@84] - Error checking containers > java.lang.NullPointerException >at > org.apache.zookeeper.server.ContainerManager.getCandidates(ContainerManager.java:151) >at > org.apache.zookeeper.server.ContainerManager.checkContainers(ContainerManager.java:111) >at > org.apache.zookeeper.server.ContainerManager$1.run(ContainerManager.java:78) >at java.util.TimerThread.mainLoop(Timer.java:555) >at java.util.TimerThread.run(Timer.java:505) > We have not yet deleted the data ... so the problem can be reproduced on our > servers -- This message was sent by Atlassian JIRA (v6.3.4#6332)
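The patch note above says node.getChildren() can legally return null, which is what ContainerManager.getCandidates tripped over. A minimal illustration of the null guard behind that one-line fix (DataNodeStub is a hypothetical stand-in, not the server's DataNode class):

```java
// Illustrates the null-guard shape of the ZOOKEEPER-2464 fix: when a node's
// child set may legally be null, candidate gathering must treat null the same
// as an empty set instead of dereferencing it.
public class ContainerCandidates {
    // Hypothetical stand-in for the server-side node type.
    static class DataNodeStub {
        private final java.util.Set<String> children; // may legally be null
        DataNodeStub(java.util.Set<String> children) { this.children = children; }
        java.util.Set<String> getChildren() { return children; }
    }

    // A container node is a deletion candidate only when it has no children.
    static boolean isCandidate(DataNodeStub node) {
        java.util.Set<String> kids = node.getChildren();
        // An unguarded kids.isEmpty() here is exactly the reported NPE.
        return (kids == null) || kids.isEmpty();
    }
}
```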
[jira] [Assigned] (ZOOKEEPER-2464) NullPointerException on ContainerManager
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman reassigned ZOOKEEPER-2464: --- Assignee: Jordan Zimmerman > NullPointerException on ContainerManager > > > Key: ZOOKEEPER-2464 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2464 > Project: ZooKeeper > Issue Type: Bug > Components: server >Affects Versions: 3.5.1 >Reporter: Stefano Salmaso >Assignee: Jordan Zimmerman > > I would like to expose you to a problem that we are experiencing. > We are using a cluster of 7 zookeeper and we use them to implement a > distributed lock using Curator > (http://curator.apache.org/curator-recipes/shared-reentrant-lock.html) > So .. we tried to play with the servers to see if everything worked properly > and we stopped and start servers to see that the system worked well > (like stop 03, stop 05, stop 06, start 05, start 06, start 03) > We saw a strange behavior. > The number of znodes grew up without stopping (normally we had 4000 or 5000, > we got to 60,000 and then we stopped our application) > In zookeeeper logs I saw this (on leader only, one every minute) > 2016-07-04 14:53:50,302 [myid:7] - ERROR > [ContainerManagerTask:ContainerManager$1@84] - Error checking containers > java.lang.NullPointerException >at > org.apache.zookeeper.server.ContainerManager.getCandidates(ContainerManager.java:151) >at > org.apache.zookeeper.server.ContainerManager.checkContainers(ContainerManager.java:111) >at > org.apache.zookeeper.server.ContainerManager$1.run(ContainerManager.java:78) >at java.util.TimerThread.mainLoop(Timer.java:555) >at java.util.TimerThread.run(Timer.java:505) > We have not yet deleted the data ... so the problem can be reproduced on our > servers -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15353095#comment-15353095 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - Any updates on this PR? It would be a great feature for ZK. > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276597#comment-15276597 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - latest updates are now there > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270978#comment-15270978 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - https://reviews.apache.org/r/46983 > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169: Attachment: ZOOKEEPER-2169-5.patch Fixed findbugs issue > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169-5.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169: Attachment: ZOOKEEPER-2169-4.patch This is now a complete patch with docs, tests, etc. > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169-4.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169: Attachment: ZOOKEEPER-2169-3.patch Fixed test failure > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169-3.patch, > ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169: Attachment: ZOOKEEPER-2169-2.patch Forgot to include new files > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169-2.patch, ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270193#comment-15270193 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - I need to add ttl support to transaction OPs > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15270192#comment-15270192 ] Jordan Zimmerman commented on ZOOKEEPER-2169: - Also, docs are needed, etc. etc. probably other stuff. > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2169: Attachment: ZOOKEEPER-2169.patch This patch takes advantage of 3.5's container support. Most of the work needed to support TTLs is there already. * In order not to break on-disk and protocol compatibility the ephemeralOwner is yet-again overloaded to have special meaning. * New opcodes and transaction records had to be added in a similar manner to Containers * More tests are needed > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > Attachments: ZOOKEEPER-2169.patch > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
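The patch notes above describe overloading ephemeralOwner yet again so that TTL nodes need no on-disk or protocol changes. A sketch of what such an overloading scheme could look like; the marker and mask constants here are assumptions for illustration, not necessarily the values the committed patch uses:

```java
// Illustrates the ephemeralOwner-overloading idea from ZOOKEEPER-2169: a
// reserved high bit pattern marks a node as a TTL node, with the TTL value
// packed into the remaining low bits, so existing serialization is untouched.
// The specific constants below are assumptions made for this sketch.
public class TtlEncodingSketch {
    static final long TTL_MARKER = 0xFF00000000000000L; // assumed reserved prefix
    static final long TTL_MASK   = 0x00FFFFFFFFFFFFFFL; // low bits carry the TTL

    static long toEphemeralOwner(long ttlMillis) {
        if (ttlMillis <= 0 || ttlMillis > TTL_MASK) {
            throw new IllegalArgumentException("ttl out of range: " + ttlMillis);
        }
        return TTL_MARKER | ttlMillis;
    }

    static boolean isTtlNode(long ephemeralOwner) {
        return (ephemeralOwner & TTL_MARKER) == TTL_MARKER;
    }

    static long ttlOf(long ephemeralOwner) {
        return ephemeralOwner & TTL_MASK;
    }
}
```

A real session id never uses the full reserved prefix, which is what lets the server tell a TTL node from an ordinary ephemeral by inspecting the same field.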
[jira] [Assigned] (ZOOKEEPER-2169) Enable creation of nodes with TTLs
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman reassigned ZOOKEEPER-2169: --- Assignee: Jordan Zimmerman > Enable creation of nodes with TTLs > -- > > Key: ZOOKEEPER-2169 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2169 > Project: ZooKeeper > Issue Type: New Feature > Components: c client, java client, jute, server >Affects Versions: 3.6.0 >Reporter: Camille Fournier >Assignee: Jordan Zimmerman > Fix For: 3.6.0 > > > As a user, I would like to be able to create a node that is NOT tied to a > session but that WILL expire automatically if action is not taken by some > client within a time window. > I propose this to enable clients interacting with ZK via http or other "thin > clients" to create ephemeral-like nodes. > Some ideas for the design, up for discussion: > The node should support all normal ZK node operations including ACLs, > sequential key generation, etc, however, it should not support the ephemeral > flag. The node will be created with a TTL that is updated via a refresh > operation. > The ZK quorum will watch this node similarly to the way that it watches for > session liveness; if the node is not refreshed within the TTL, it will expire. > QUESTIONS: > 1) Should we let the refresh operation set the TTL to a different base value? > 2) If so, should the setting of the TTL to a new base value cause a watch to > fire? > 3) Do we want to allow these nodes to have children or prevent this similar > to ephemeral nodes? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2413) ContainerManager doesn't close the Timer it creates when stop() is called
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2413: Attachment: ZOOKEEPER-2413.patch > ContainerManager doesn't close the Timer it creates when stop() is called > - > > Key: ZOOKEEPER-2413 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2413 > Project: ZooKeeper > Issue Type: Bug > Components: server >Affects Versions: 3.5.1 >Reporter: Jordan Zimmerman >Assignee: Jordan Zimmerman > Attachments: ZOOKEEPER-2413.patch > > > ContainerManager creates a Timer object. Its stop() method cancels the > running task but doesn't close the Timer itself. This ends up leaking a > Thread (internal to the Timer). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (ZOOKEEPER-2413) ContainerManager doesn't close the Timer it creates when stop() is called
Jordan Zimmerman created ZOOKEEPER-2413: --- Summary: ContainerManager doesn't close the Timer it creates when stop() is called Key: ZOOKEEPER-2413 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2413 Project: ZooKeeper Issue Type: Bug Components: server Affects Versions: 3.5.1 Reporter: Jordan Zimmerman Assignee: Jordan Zimmerman ContainerManager creates a Timer object. Its stop() method cancels the running task but doesn't close the Timer itself. This ends up leaking a Thread (internal to the Timer). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
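The leak pattern is easy to reproduce standalone with `java.util.Timer`: cancelling only the `TimerTask` leaves the Timer's background thread alive, while `Timer.cancel()` shuts it down. The thread name below is illustrative, not ContainerManager's actual naming.

```java
import java.util.Timer;
import java.util.TimerTask;

// Demonstrates the bug's shape: TimerTask.cancel() stops the scheduled
// work, but only Timer.cancel() terminates the Timer's worker thread.
public class TimerLeakDemo {
    private static boolean timerThreadAlive(String name) {
        return Thread.getAllStackTraces().keySet().stream()
                .anyMatch(t -> t.getName().equals(name) && t.isAlive());
    }

    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer("container-sweeper");  // illustrative name
        TimerTask task = new TimerTask() { @Override public void run() {} };
        timer.schedule(task, 1_000, 1_000);

        task.cancel();  // what a stop() that only cancels the task does
        Thread.sleep(200);
        // The Timer's thread is still alive: this is the leak.
        System.out.println(timerThreadAlive("container-sweeper"));

        timer.cancel();  // the fix: close the Timer itself
        Thread.sleep(500);
        System.out.println(timerThreadAlive("container-sweeper"));
    }
}
```

`Timer.cancel()` drains the task queue and lets the worker thread exit; the short sleep afterwards just gives it time to do so before the check.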
[jira] [Commented] (ZOOKEEPER-2359) ZooKeeper client has unnecessary logs for watcher removal errors
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15132382#comment-15132382 ] Jordan Zimmerman commented on ZOOKEEPER-2359: - This shouldn't need any tests. It's just removing two lines of logging code. > ZooKeeper client has unnecessary logs for watcher removal errors > > > Key: ZOOKEEPER-2359 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2359 > Project: ZooKeeper > Issue Type: Improvement > Components: java client >Affects Versions: 3.5.1 >Reporter: Jordan Zimmerman >Assignee: Jordan Zimmerman > Attachments: ZOOKEEPER-2359.patch > > > ClientCnxn.java logs errors during watcher removal: > LOG.error("Failed to find watcher!", nwe); > LOG.error("Exception when removing watcher", ke); > An error code/exception is generated so the logs are noisy and unnecessary. > If the client handles the error there's still a log message. This is > different than other APIs. These logs should be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (ZOOKEEPER-2359) ZooKeeper client has unnecessary logs for watcher errors
Jordan Zimmerman created ZOOKEEPER-2359: --- Summary: ZooKeeper client has unnecessary logs for watcher errors Key: ZOOKEEPER-2359 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2359 Project: ZooKeeper Issue Type: Improvement Components: java client Affects Versions: 3.5.1 Reporter: Jordan Zimmerman ClientCnxn.java logs errors during watcher removal: LOG.error("Failed to find watcher!", nwe); LOG.error("Exception when removing watcher", ke); An error code/exception is generated so the logs are noisy and unnecessary. If the client handles the error there's still a log message. This is different than other APIs. These logs should be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2359) ZooKeeper client has unnecessary logs for watcher removal errors
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2359: Summary: ZooKeeper client has unnecessary logs for watcher removal errors (was: ZooKeeper client has unnecessary logs for watcher errors) > ZooKeeper client has unnecessary logs for watcher removal errors > > > Key: ZOOKEEPER-2359 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2359 > Project: ZooKeeper > Issue Type: Improvement > Components: java client >Affects Versions: 3.5.1 >Reporter: Jordan Zimmerman > > ClientCnxn.java logs errors during watcher removal: > LOG.error("Failed to find watcher!", nwe); > LOG.error("Exception when removing watcher", ke); > An error code/exception is generated so the logs are noisy and unnecessary. > If the client handles the error there's still a log message. This is > different than other APIs. These logs should be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2359) ZooKeeper client has unnecessary logs for watcher removal errors
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2359: Attachment: ZOOKEEPER-2359.patch > ZooKeeper client has unnecessary logs for watcher removal errors > > > Key: ZOOKEEPER-2359 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2359 > Project: ZooKeeper > Issue Type: Improvement > Components: java client >Affects Versions: 3.5.1 >Reporter: Jordan Zimmerman > Attachments: ZOOKEEPER-2359.patch > > > ClientCnxn.java logs errors during watcher removal: > LOG.error("Failed to find watcher!", nwe); > LOG.error("Exception when removing watcher", ke); > An error code/exception is generated so the logs are noisy and unnecessary. > If the client handles the error there's still a log message. This is > different than other APIs. These logs should be removed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2288) During shutdown, server may fail to ack completed transactions to clients.
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14952089#comment-14952089 ] Jordan Zimmerman commented on ZOOKEEPER-2288: - FYI - Curator now has workaround methods. You can delete "quietly" (Curator hides the NoNode exception). You can also now do a create-or-set-data whereby Curator will set the data if the node already exists. > During shutdown, server may fail to ack completed transactions to clients. > -- > > Key: ZOOKEEPER-2288 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2288 > Project: ZooKeeper > Issue Type: Bug > Components: server >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: ZOOKEEPER-2288.001.patch > > > During shutdown, requests may still be in flight in the request processing > pipeline. Some of these requests have reached a state where the transaction > has executed and committed, but has not yet been acknowledged back to the > client. It's possible that these transactions will not ack to the client > before the shutdown sequence completes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (ZOOKEEPER-2274) ZooKeeperServerMain is difficult to subclass for unit testing
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2274: Attachment: ZOOKEEPER-2274.2.patch > ZooKeeperServerMain is difficult to subclass for unit testing > - > > Key: ZOOKEEPER-2274 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2274 > Project: ZooKeeper > Issue Type: Improvement > Components: server, tests >Affects Versions: 3.5.1 >Reporter: Jordan Zimmerman > Attachments: ZOOKEEPER-2274.2.patch, ZOOKEEPER-2274.patch > > > Apache Curator needs a testable version of ZooKeeperServerMain. In the past, > Curator has used javassist, reflection, etc. but this is all clumsy. With a > few trivial changes, Curator could use ZooKeeperServerMain directly by > subclassing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (ZOOKEEPER-2260) Paginated getChildren call
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741691#comment-14741691 ] Jordan Zimmerman commented on ZOOKEEPER-2260: - Are there async versions of the methods? > Paginated getChildren call > -- > > Key: ZOOKEEPER-2260 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2260 > Project: ZooKeeper > Issue Type: New Feature >Affects Versions: 3.4.5, 3.4.6, 3.5.0, 4.0.0 >Reporter: Marco P. >Priority: Minor > Labels: api, features > Fix For: 4.0.0 > > Attachments: ZOOKEEPER-2260.patch > > > Add pagination support to the getChildren() call, allowing clients to iterate > over children N at a time. > Motivations for this include: > - Getting out of a situation where so many children were created that > listing them exceeded the network buffer sizes (making it impossible to > recover by deleting)[1] > - More efficient traversal of nodes with large number of children [2] > I do have a patch (for 3.4.6) we've been using successfully for a while, but > I suspect much more work is needed for this to be accepted. > [1] https://issues.apache.org/jira/browse/ZOOKEEPER-272 > [2] https://issues.apache.org/jira/browse/ZOOKEEPER-282 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
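The "N at a time" iteration the feature proposes can be sketched client-side as cursor-based paging over a sorted child list. This is an assumption about the shape of the API, not the attached patch's actual interface; a real implementation would page on the server so no single response exceeds the network buffer.

```java
import java.util.ArrayList;
import java.util.List;

// Cursor-style pagination sketch: each call returns up to 'max' children
// strictly after 'cursor', and the caller resumes from the last child seen.
public class PagedChildrenSketch {
    static List<String> getChildrenPage(List<String> sortedChildren,
                                        String cursor, int max) {
        List<String> page = new ArrayList<>();
        for (String child : sortedChildren) {
            if (child.compareTo(cursor) > 0) {
                page.add(child);
                if (page.size() == max) break;
            }
        }
        return page;
    }

    public static void main(String[] args) {
        List<String> children = List.of("c0001", "c0002", "c0003", "c0004", "c0005");
        String cursor = "";  // empty cursor = start from the first child
        int pages = 0;
        List<String> page;
        while (!(page = getChildrenPage(children, cursor, 2)).isEmpty()) {
            System.out.println(page);
            cursor = page.get(page.size() - 1);  // resume after the last child seen
            pages++;
        }
        System.out.println(pages);
    }
}
```

Keying the cursor on the child name (rather than an offset) makes the iteration robust to children being created or deleted between pages, which matters for the recovery-by-deletion use case in [1].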
[jira] [Updated] (ZOOKEEPER-2274) ZooKeeperServerMain is difficult to subclass for unit testing
[ https://issues.apache.org/jira/browse/ZOOKEEPER-2274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jordan Zimmerman updated ZOOKEEPER-2274: Attachment: ZOOKEEPER-2274.patch > ZooKeeperServerMain is difficult to subclass for unit testing > - > > Key: ZOOKEEPER-2274 > URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2274 > Project: ZooKeeper > Issue Type: Improvement > Components: server, tests >Affects Versions: 3.5.1 >Reporter: Jordan Zimmerman > Attachments: ZOOKEEPER-2274.patch > > > Apache Curator needs a testable version of ZooKeeperServerMain. In the past, > Curator has used javassist, reflection, etc. but this is all clumsy. With a > few trivial changes, Curator could use ZooKeeperServerMain directly by > subclassing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (ZOOKEEPER-2274) ZooKeeperServerMain is difficult to subclass for unit testing
Jordan Zimmerman created ZOOKEEPER-2274: --- Summary: ZooKeeperServerMain is difficult to subclass for unit testing Key: ZOOKEEPER-2274 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-2274 Project: ZooKeeper Issue Type: Improvement Components: server, tests Affects Versions: 3.5.1 Reporter: Jordan Zimmerman Apache Curator needs a testable version of ZooKeeperServerMain. In the past, Curator has used javassist, reflection, etc. but this is all clumsy. With a few trivial changes, Curator could use ZooKeeperServerMain directly by subclassing. -- This message was sent by Atlassian JIRA (v6.3.4#6332)