On 2012-01-01 21:31, Kjell Rilbe wrote:
> 3. Stay at 32-bit ids but somehow "compact" all ids older than OIT(?)
> into OIT-1. Allow ids to wrap around.
> Pros:
> - No extra disk space.
> - Probably rather simple code changes for id usage.
> Cons:
> - May be difficult to find an effective way to compact old ids.
> - Even if an effective way to compact old ids is found, it may be
> complicated to implement.
> - May be difficult or impossible to perform compacting without large
> amounts of write locks.
> - Potential for problems if OIT isn't incremented (but such OIT problems
> should be solved anyway).

I'm thinking about this solution, in case a solution is actually needed (see the recent subthread).
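Once ids are allowed to wrap around, a plain `<` comparison no longer answers "is transaction a newer than b". A common way to handle this (serial-number arithmetic in the style of RFC 1982, not anything taken from the Firebird source; `isNewer` is an invented name) is to compare via the signed 32-bit difference, which stays correct as long as the live id range spans less than half the 32-bit space:

```cpp
#include <cstdint>

// Hypothetical sketch: wrap-around-safe "newer than" comparison.
// Correct only while the distance between any two live ids is
// below 2^31 - which is exactly what the OIT-1 compaction is
// meant to guarantee.
inline bool isNewer(uint32_t a, uint32_t b) {
    // Unsigned subtraction wraps modulo 2^32; reinterpreting the
    // difference as signed makes ids just past a wrap compare as
    // newer than ids just before it.
    return static_cast<int32_t>(a - b) > 0;
}
```

This is only a sketch of why compaction makes wrap-around workable: keeping all old versions collapsed into OIT-1 is what keeps the live range narrow enough for such a comparison to hold.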
I assume the sweep only looks at record versions that are deleted, and marks them for garbage collection if their transaction id is less than OIT. Correct?

That is not sufficient for the "consolidation" of old transaction ids. What's needed is, in principle, a task that reads through ALL record versions and, for each one with transaction id < OIT, changes it to OIT - 1. Once it has done that for the entire database, it can move the maximum usable transaction id to OIT - 2. Then it can wait until the database starts to exhaust the transaction id space again before repeating the cycle.

To make this a bit less work-intensive, would it be possible, and a good idea, to mark each database page with the lowest transaction id in use on that page? In that case the task could skip every page where this value is >= OIT - 1. But would it require a lot of overhead to keep this info up to date? I don't know how a page is used to access a record on it... But doesn't cooperative garbage collection mean that on each page access, all deleted record versions on that page are marked for garbage collection? In that case I assume it reads the transaction id and deletion state of all record versions on the page anyway, and that's all that's needed to keep the page's lowest transaction id up to date. Or am I missing something (likely...)?

I assume the lowest transaction id on a page can never decrease, and a new page will always have a "near-the-tip" lowest transaction id. So the consolidation task would not have to re-check a page that is updated after the task checks it but before the task completes its cycle. But how can it make sure it checks all pages? Is there any well-defined order in which it could visit all pages, without the risk of missing some of them, even while the database is "live"?
Kjell

--
--------------------------------------
Kjell Rilbe
DataDIA AB
E-mail: kj...@datadia.se
Phone: 08-761 06 55
Mobile: 0733-44 24 64

Firebird-Devel mailing list, web interface at https://lists.sourceforge.net/lists/listinfo/firebird-devel