Curator does nothing additional with sync. Sync is a feature of ZooKeeper, not Curator; Curator merely exposes an API for it.
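For illustration only (this is a sketch, not code from the thread; the connection string, the example path, and the CountDownLatch wait are assumptions): since Curator's sync() always runs in the background, the usual pattern is to wait for the background callback before issuing the read on the same session, e.g.:

    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.CountDownLatch;

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class SyncThenRead {
        public static void main(String[] args) throws Exception {
            // Connection string and path are placeholders for this example.
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "localhost:2181", new ExponentialBackoffRetry(1000, 3));
            client.start();
            try {
                String path = "/example/node";
                CountDownLatch synced = new CountDownLatch(1);

                // Curator's sync() executes in the background; the callback fires
                // once the server this session is connected to has caught up with
                // the leader for the requested path.
                client.sync()
                      .inBackground((c, event) -> synced.countDown())
                      .forPath(path);

                // Wait for the callback before reading, so the read is issued on
                // this session only after the sync has completed.
                synced.await();

                byte[] data = client.getData().forPath(path);
                System.out.println(new String(data, StandardCharsets.UTF_8));
            } finally {
                client.close();
            }
        }
    }

The latch is just one way to defer the read; any mechanism that waits for the background callback before the subsequent getData() would serve the same purpose.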
-JZ

> On Mar 14, 2019, at 9:35 AM, Robin Wolters <[email protected]> wrote:
>
> That is indeed an option, thanks.
>
> But for my own curiosity, how does the sync operation behave for Curator?
> 1) Does it also sync the child nodes of the specified path?
> 2) Does it sync (transfer data for) a node even if it was up to date?
> 3) In Curator, would I have to wait for the callback of sync, or can I
> just use sync and go ahead, knowing the next operation is queued?
>
> Regards,
> Robin
>
> On Wed, 13 Mar 2019 at 17:07, Jordan Zimmerman <[email protected]> wrote:
>>
>> It sounds like you're describing one of the Barrier recipes. Curator has
>> several. I'd look to those as a possible solution.
>>
>> ====================
>> Jordan Zimmerman
>>
>>> On Mar 13, 2019, at 9:56 AM, Robin Wolters <[email protected]> wrote:
>>>
>>> Thanks for the reply. I understand that this is not possible in general.
>>>
>>> In my case the read and write are started from the same overarching
>>> application (but over different ZooKeeper connections and hence possibly
>>> different nodes).
>>> I start the read only after I know the write has succeeded, but I
>>> don't know whether it has reached all nodes yet.
>>> So I expected that a sync gives me the guarantee that the next read
>>> reflects at least this specific write.
>>> It's okay if possible further writes are not in yet.
>>>
>>> Is this "selective" consistency not possible with my approach?
>>>
>>> Best regards,
>>> Robin
>>>
>>> On Wed, 13 Mar 2019 at 15:47, Jordan Zimmerman <[email protected]> wrote:
>>>>
>>>> ZooKeeper is an eventually consistent system. Reads are always consistent
>>>> in that they reflect previous writes; however, it is not possible to do
>>>> what you describe. Reads are fulfilled by the node your client is
>>>> connected to. Writes always go through the leader node. In a dynamic
>>>> ensemble with lots of concurrent reads/writes there is no such thing as a
>>>> read reflecting all active writes.
>>>>
>>>> You should consider an RDBMS like MySQL instead of something like ZooKeeper.
>>>>
>>>> ====================
>>>> Jordan Zimmerman
>>>>
>>>>> On Mar 13, 2019, at 6:37 AM, Robin Wolters <[email protected]> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> I use ZooKeeper in a cluster setup, and some of my read operations need
>>>>> to be consistent, meaning I have to make sure that a read always
>>>>> reflects all previous writes (which might have been performed on another
>>>>> ZooKeeper server and may not have reached all other instances yet).
>>>>> The idea is to force a sync before those reads to make them
>>>>> "consistent" reads with:
>>>>> client.sync().forPath(path)
>>>>>
>>>>> For this, I have these questions left:
>>>>> 1. Do you need to manually await the callback of sync before reading,
>>>>> or is the next read operation queued until the sync is complete?
>>>>> 2. How much data is transferred between the nodes in this kind
>>>>> of manual sync?
>>>>> a) Does it always transfer and process data from the master server,
>>>>> even if the syncing node is up to date on this path, or only for
>>>>> those nodes that are really out of sync (i.e. sync only the possible
>>>>> deltas)?
>>>>> b) Does a sync on the path also force the parent nodes to sync?
>>>>> c) Does a sync on the path also force all child nodes to sync?
>>>>> d) How would one manually sync the complete data of a node (as the
>>>>> regular sync does)? Is client.sync().forPath("/") the way to do this?
>>>>>
>>>>> Does anyone have experience with this?
>>>>>
>>>>> Best regards,
>>>>> Robin
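As a footnote to the Barrier suggestion quoted above, a rough sketch of how Curator's DistributedBarrier recipe could be applied here; the barrier path, data path, and the writer/reader split are assumptions made up for this example, not something prescribed in the thread:

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.recipes.barriers.DistributedBarrier;

    public class BarrierSketch {
        // Hypothetical barrier path, chosen only for this example.
        private static final String BARRIER_PATH = "/example/write-barrier";

        // Writer side: raise the barrier, perform the write, then remove the barrier.
        static void writeSide(CuratorFramework client, String dataPath, byte[] payload) throws Exception {
            DistributedBarrier barrier = new DistributedBarrier(client, BARRIER_PATH);
            barrier.setBarrier();
            try {
                client.setData().forPath(dataPath, payload);   // assumes dataPath already exists
            } finally {
                barrier.removeBarrier();   // readers blocked on the barrier are released here
            }
        }

        // Reader side: block until the writer has removed the barrier, then read.
        static byte[] readSide(CuratorFramework client, String dataPath) throws Exception {
            DistributedBarrier barrier = new DistributedBarrier(client, BARRIER_PATH);
            barrier.waitOnBarrier();
            return client.getData().forPath(dataPath);
        }
    }

This pattern assumes the barrier is set before the reader starts waiting, which fits the scenario described in the thread where one overarching application coordinates both the write and the read.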
