On Thursday 05 Sep 2013 at 11:24:40 (+0200), Stefan Hajnoczi wrote:
> On Wed, Sep 04, 2013 at 11:55:23AM +0200, Benoît Canet wrote:
> > > > I'm not sure if multiple journals will work in practice. Doesn't
> > > > this re-introduce the need to order update steps and flush between
> > > > them?
> > >
> > > This is a question for Benoît, who made this requirement. I asked him
> > > the same a while ago and apparently his explanation made some sense to
> > > me, or I would have remembered that I don't want it. ;-)
> >
> > The reason behind the multiple-journal requirement is that if a block
> > gets created and deleted cyclically, it can generate cyclic
> > insertion/deletion journal entries.
> > The journal could easily be filled if this pathological corner case
> > happens. When it happens, the dedup code repacks the journal by writing
> > only the non-redundant information into a new journal and then uses the
> > new one.
> > This would not be easy to do if non-dedup journal entries were present
> > in the journal, hence the multiple-journal requirement.
> >
> > The deduplication also needs two journals because when the first one is
> > frozen it takes some time to write the hash table to disk, and new
> > entries must be stored somewhere in the meantime. The code cannot block.
> >
> > > It might have something to do with the fact that deduplication uses
> > > the journal more as a kind of cache for hash values that can be
> > > dropped and rebuilt after a crash.
> >
> > For dedup the journal is more a "resume after exit" tool.
>
> I'm not sure anymore if dedup needs the same kind of "journal" as a
> metadata journal for qcow2.
>
> Since you have a dirty flag to discard the "journal" on crash, the
> journal is not used for data integrity.
>
> That makes me wonder if the metadata journal is the right structure for
> dedup? Maybe your original proposal was fine for dedup and we just
> misinterpreted it because we thought this needed to be a safe journal.
Kevin, what do you think of this? I could strip down the dedup journal
code to specialize it.

Best regards

Benoît