If the update makes the records longer (after run-length encoding) it's even more fun: you can get fragmentation of records across pages, and access times can then increase by a very large factor, even to the point of completely crippling the performance of an entire application.
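A purely hypothetical example of that kind of update (table, column and file names are invented, credentials are the usual placeholders):

    # Appending text to a VARCHAR column makes every new record version longer
    # (even after run-length encoding) than the version it replaces, so the new
    # versions may no longer fit on their original data pages.
    isql -user SYSDBA -password masterkey /data/bigdb.fdb -i lengthen.sql
    # where lengthen.sql contains:
    #   UPDATE readings SET comment = comment || ' - reviewed 2016-04-05';
    #   COMMIT;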

On 05/04/2016 09:21, liviuslivius liviusliv...@poczta.onet.pl [firebird-support] wrote:
Hi,
I must update a big table (100 GB),
and as we know, when we do an update a new record version is created.
Scenario 1:
1. Table size 100 GB (db size 200 GB)
2. Updating a field in all records generates 100 GB of new record versions
3. Table size afterwards is 200 GB (db size 300 GB)
4. Sweep removes the 100 GB of old versions and marks the pages as free (table size 100 GB, but the database is still 300 GB)
5. Backup and restore bring the db back to its previous size (db size 200 GB)
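(For reference, that sweep plus backup/restore cycle corresponds roughly to the commands below; the database path, backup file name and credentials are invented.)

    # Steps 4-5: sweep removes the old record versions but only marks their
    # pages as free inside the database file, so the file stays at ~300 GB.
    gfix -sweep -user SYSDBA -password masterkey /data/bigdb.fdb

    # Only restoring into a fresh file actually gives the space back to the
    # filesystem and brings the database down to ~200 GB again.
    gbak -b -user SYSDBA -password masterkey /data/bigdb.fdb /backup/bigdb.fbk
    gbak -c -user SYSDBA -password masterkey /backup/bigdb.fbk /data/bigdb_new.fdb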
But what happens when I do this instead?
Scenario 2:
1. Table size 100 GB (db size 200 GB)
2. I lock the database with nbackup -L
3. Updating a field in all records generates a 100 GB delta file (db size 200 GB)
4. Table size in the db is 100 GB and the delta is 100 GB (db size 200 GB)
5. I unlock the database with nbackup -U
A. Will the table be 100 GB with no free pages? (db size will be 200 GB and there is no need for a backup and restore)
B. Will the table be 100 GB with 100 GB of free pages in the db? (db size will be 300 GB and I need to backup and restore?)
Which is the answer, A or B?
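(Again for reference, scenario 2 corresponds roughly to the commands below. The database path is invented and the switches are written as in the question; check nbackup's help for your version, as the unlock switch is documented as -N in the 2.x manual.)

    # Step 2: lock the main database file; from now on changed pages are
    # written to the .delta file and the main file is left untouched.
    nbackup -L /data/bigdb.fdb

    # Steps 3-4: run the big UPDATE while the database is locked; the delta
    # file grows by roughly the size of the new record versions.

    # Step 5: unlock, which merges the delta back into the main database
    # file and then removes it.
    nbackup -U /data/bigdb.fdb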
regards,
Karol Bieniaszewski



--
Tim Ward
