On 19/10/12 00:09, Zooko Wilcox-O'Hearn wrote:
> Okay, great! I was very confused to think that querying and changing
> leases would be a long expensive process. The whole *point* of
> leasedb is to make that fast and reliable. Duh.
> 
> I agree that the protocol you sketched out in your email would work,
> including that the client can wait for confirmation that it worked and
> retry it if it didn't (such as if the server failed at any point).
> 
> It sounds like, with this current revision of the scheme, you don't
> really need to crawl over all the shares for the purpose of garbage
> collection.
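
To make the confirm-and-retry idea from your earlier paragraph concrete,
here's a minimal sketch in Python. The add_or_renew_lease RPC and
ServerFailure exception are hypothetical names, not the real storage
protocol; the point is just that an idempotent lease operation can safely
be repeated until the server confirms it has committed:

    import time

    class ServerFailure(Exception):
        """Raised when the server fails before confirming the operation."""

    def renew_lease_with_retry(server, storage_index, renewal_secret,
                               max_attempts=5, delay=1.0):
        """Repeat an idempotent lease renewal until the server confirms it.

        Because the operation is idempotent, re-sending it after an
        unconfirmed attempt (e.g. a server crash mid-operation) is safe.
        """
        for attempt in range(max_attempts):
            try:
                # Hypothetical RPC; assumed to return only after the
                # leasedb write has been committed on the server.
                server.add_or_renew_lease(storage_index, renewal_secret)
                return True   # confirmed
            except ServerFailure:
                time.sleep(delay * (2 ** attempt))  # back off, then retry
        return False  # never confirmed; the caller can escalate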

Yes, that's true: no crawl is needed for garbage collection. I hadn't
thought about that very hard before this discussion; I was trying to limit
design changes to only those strictly needed to support leasedb. But yes,
maintaining a queue of shares-to-delete would be more efficient, in the
sense of deleting garbage more promptly.
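
Something along these lines would work. It's only a sketch, assuming an
SQLite-backed leasedb; the schema and the delete_share callback are
made-up names for illustration, not the actual leasedb schema:

    import sqlite3

    def create_tables(conn):
        # Hypothetical schema: one row per share, plus a queue of
        # shares whose last lease has gone away.
        conn.executescript("""
            CREATE TABLE IF NOT EXISTS shares (
                storage_index TEXT, shnum INTEGER, lease_count INTEGER,
                PRIMARY KEY (storage_index, shnum));
            CREATE TABLE IF NOT EXISTS deletion_queue (
                storage_index TEXT, shnum INTEGER,
                PRIMARY KEY (storage_index, shnum));
        """)

    def expire_lease(conn, storage_index, shnum):
        """Decrement the lease count; enqueue the share once it hits zero."""
        with conn:  # one transaction: decrement and enqueue stay consistent
            conn.execute(
                "UPDATE shares SET lease_count = lease_count - 1"
                " WHERE storage_index = ? AND shnum = ?",
                (storage_index, shnum))
            conn.execute(
                "INSERT OR IGNORE INTO deletion_queue"
                " SELECT storage_index, shnum FROM shares"
                " WHERE storage_index = ? AND shnum = ? AND lease_count <= 0",
                (storage_index, shnum))

    def drain_deletion_queue(conn, delete_share):
        """Delete queued shares; no crawl over the whole store is involved."""
        rows = conn.execute(
            "SELECT storage_index, shnum FROM deletion_queue").fetchall()
        for storage_index, shnum in rows:
            delete_share(storage_index, shnum)  # remove the file on disk
            with conn:
                conn.execute(
                    "DELETE FROM deletion_queue"
                    " WHERE storage_index = ? AND shnum = ?",
                    (storage_index, shnum))
                conn.execute(
                    "DELETE FROM shares"
                    " WHERE storage_index = ? AND shnum = ?",
                    (storage_index, shnum))

Deleting the file before removing the db rows means a crash between the
two steps leaves the queue entry behind, so the next drain pass simply
retries it (delete_share would need to tolerate an already-missing file).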

> You still need to crawl in order to build an index
> of shares for the first time (i.e., when you first turn on leasedb, or
> if your leasedb gets destroyed or corrupted). Even if you have a
> leasedb, you might still want to crawl in order to discover
> externally-imported shares, and I *guess* it might be useful to crawl
> in order to discover that shares have been externally removed. But
> there's no need to crawl in order to expire or garbage-collect shares,
> is there?

No, there's no need. Crawling for expiry is the minimal change from the
existing code, though. I don't think there's any overall simplification
available unless we could remove crawlers *entirely*, which I don't think
we can do. We can always optimize garbage collection later.
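
The crawls we do keep then reduce to reconciling the store with the db.
A rough sketch, again with made-up names and a simplified directory
layout (storedir/<storage_index>/<shnum>) rather than the real share
layout:

    import os

    def reconcile(conn, storedir):
        """Walk the share store and reconcile it with the leasedb.

        This handles first startup, recovery from a lost or corrupted
        leasedb, and discovery of externally-imported shares; shares in
        the db but gone from disk were externally removed.
        """
        found = set()
        for storage_index in os.listdir(storedir):
            si_dir = os.path.join(storedir, storage_index)
            if not os.path.isdir(si_dir):
                continue
            for name in os.listdir(si_dir):
                if name.isdigit():
                    found.add((storage_index, int(name)))
        with conn:
            for storage_index, shnum in found:
                # A real implementation would grant newly-discovered
                # shares a starter lease rather than leaving lease_count
                # at zero, so they don't become GC candidates at once.
                conn.execute(
                    "INSERT OR IGNORE INTO shares"
                    " (storage_index, shnum, lease_count) VALUES (?, ?, 0)",
                    (storage_index, shnum))
            for storage_index, shnum in conn.execute(
                    "SELECT storage_index, shnum FROM shares").fetchall():
                if (storage_index, shnum) not in found:
                    conn.execute(
                        "DELETE FROM shares"
                        " WHERE storage_index = ? AND shnum = ?",
                        (storage_index, shnum))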

-- 
David-Sarah Hopwood ⚥
