ODP: ODP: [firebird-support] Re: Strange behavior on very large table
I see that you have only 1024 page buffers. Do you use SuperServer or Classic? If SuperServer, then increase it.

Regards,
Karol Bieniaszewski
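For illustration, the per-database page buffer count Karol is referring to (the "Page buffers" line in the gstat header shown later in this thread) can be raised with gfix. The path, credentials and the value 8192 below are placeholders, not a tuning recommendation for this system; the server-wide default can also be set via DefaultDbCachePages in firebird.conf.

    gfix -user SYSDBA -password masterkey -buffers 8192 /data/claims.fdb
    gstat -h /data/claims.fdb   # the "Page buffers" line should now report the new value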
Re: ODP: [firebird-support] Re: Strange behavior on very large table
02.03.2019 10:50, Karol Bieniaszewski liviusliv...@poczta.onet.pl [firebird-support] wrote:
> You have quite a big fill, 94%, so there is a chance that a new page must be
> allocated – but here you still have free slots.

It may be a sign of record fragmentation, which is bad from a performance POV. The average record length is not big enough to fill the data pages completely, so I would guess that his application may do something strange: insert an "empty" record and then update it with data. That would be bad.

--
WBR, SD.
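A minimal sketch of the insert-then-update pattern SD is guessing at, compared with inserting the complete row in one statement. The column names here are made up for illustration; they are not taken from the real CLAIM_LOAD schema.

    /* suspected pattern: the stub row is stored short, and the later UPDATE
       creates a longer version, which can leave records fragmented */
    INSERT INTO CLAIM_LOAD (CLAIM_ID) VALUES (:claim_id);
    UPDATE CLAIM_LOAD
       SET CLAIM_DATA = :claim_data, CLAIM_AMOUNT = :claim_amount
     WHERE CLAIM_ID = :claim_id;

    /* preferable: store the full record with a single INSERT */
    INSERT INTO CLAIM_LOAD (CLAIM_ID, CLAIM_DATA, CLAIM_AMOUNT)
    VALUES (:claim_id, :claim_data, :claim_amount);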
ODP: [firebird-support] Re: Strange behavior on very large table
You have 91345 versions but max versions is 1, so I do not think the performance problem is here. You have quite a big fill, 94%, so there is a chance that a new page must be allocated – but here you still have free slots. You have a gap between oldest active and next transaction. It is not big, but you should check whether it causes a problem somewhere else, which would affect overall performance. No more to say: I do not know anything about the indexes, references and queries involved in this system.

Regards,
Karol Bieniaszewski
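For reference, the gap Karol mentions can be read straight off the gstat header Myles posted further down in this thread:

    Next transaction   67864
    Oldest active      67799
    gap                   65   (67864 - 67799; small, as Karol notes)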
[firebird-support] Re: Strange behavior on very large table
Hi Karol, here's the gstat information I got once I was able to get the database to effectively go into a tailspin when I finished updating that table:

The database:

Database header page information:
    Flags                   0
    Checksum                12345
    Generation              67880
    Page size               16384
    ODS version             10.1
    Oldest transaction      65630
    Oldest active           67799
    Oldest snapshot         67799
    Next transaction        67864
    Bumped transaction      1
    Sequence number         0
    Next attachment ID      0
    Implementation ID       19
    Shadow count            0
    Page buffers            1024
    Next header page        0
    Database dialect        3
    Creation date           Feb 18, 2019 12:56:24
    Attributes              force write

    Variable header data:
        Sweep interval:     2
        *END*

The table in question:

CLAIM_LOAD (283)
    Primary pointer page: 282, Index root page: 328
    Average record length: 1272.01, total records: 990032
    Average version length: 1402.69, total versions: 91345, max versions: 1
    Data pages: 91470, data page slots: 92296, average fill: 94%
    Fill distribution:
         0 - 19% = 0
        20 - 39% = 0
        40 - 59% = 1
        60 - 79% = 0
        80 - 99% = 91469

Does this give you anything of interest that I should address?

Myles
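For anyone wanting to gather the same numbers, output like the above is typically produced with gstat along these lines (the database path is a placeholder; the per-table version statistics need the -r switch):

    gstat -h /data/claims.fdb                   # header page only
    gstat -r -t CLAIM_LOAD /data/claims.fdb     # record and version statistics for one table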
[firebird-support] Re: Strange behavior on very large table
Hi Karol, I'm going to run some tests on the database this weekend as I think I know how to trigger this behavior. Once I get it to do that, I'll get the results and post them. Thanks Myles
[firebird-support] Re: Strange behavior on very large table
Hi Lester, I have definitely asked similar questions to the client. But they have started to use this historical database for their own reporting, etc. and that isn't its original purpose. Having said that, there is not an easy way to identify changes, but I may talk to them further about whether they can ask the provider to only give them the delta of changes from previous data cuts. It would certainly be much easier if I only had to deal with a smaller amount of data to add/update to the existing content. Myles
[firebird-support] Re: Strange behavior on very large table
Thanks Mark. The nightly load process is triggered by the availability of the source CSV file that is provided by a 3rd party, and we don't have that much control over when this is produced. It is one of those things where, the second we get access to it, the database has to begin the kill & fill load process. By the time it is finished, there isn't enough of a time window available afterwards to do a backup before the users begin their work each day.

However, your last point might be the most viable. There are some stored procedures that depend on the table, but they could be dropped first, then the table dropped, and then we recreate the stored procedures afterwards. That would be a viable solution in my mind. If this means it won't force garbage collection, it may be the best option. I'll see if I can create some scripts to do that and test it.

Thanks for the input.

Myles
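A rough sketch of the script Myles describes, in dependency order. Every object name and column definition below is made up for illustration; the real DDL would be taken from the existing database before dropping anything.

    /* 1. drop the stored procedures that depend on the table */
    DROP PROCEDURE REPORT_CLAIMS;

    /* 2. drop and recreate the table instead of deleting 900,000 rows */
    DROP TABLE CLAIM_LOAD;
    CREATE TABLE CLAIM_LOAD (
        CLAIM_ID     INTEGER NOT NULL PRIMARY KEY,
        CLAIM_DATA   VARCHAR(1000),
        CLAIM_AMOUNT NUMERIC(15,2)
    );

    /* 3. recreate the dependent procedures from their saved sources,
       then run the external CSV load as usual */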
Re: [firebird-support] Re: Strange behavior on very large table
On 22-2-2019 05:33, my...@techsol.org [firebird-support] wrote:
> Thanks Helen. Just a few clarifications, that might help whittle this
> down a bit...
>
> >Given that this table is "temporary" storage, one supposes that you
> are deleting rows from it regularly. Do you happen to be deleting
> 900,000 rows each day before you load up the latest batch of 900,000?
>
> Yes, that is correct. Each morning it does a "kill & fill" on this
> table. All 900,000 rows are deleted, then a new set of that data is
> loaded from a CSV file via an external application. This happens about
> 2 hours after the system has completed its backup procedures.

It could possibly behave better if you do that before the backup (assuming that is suitable for your business needs). Assuming you don't disable sweep as part of the backup.

> >If you're not doing any particular housework on it (restoring from
> backup and/or resetting the indexes periodically), then it would be
> normal to expect degrading performance until the next time that
> housekeeping is done.
>
> For about the last 6 months we have put the database through a regular
> gfix mend process once a week, followed by a backup of the database
> and then a restore. This doesn't seem to have changed the behavior,
> however.

Why are you regularly performing gfix -mend? That in itself is a pretty bad sign. Using -mend can be destructive. From the gfix documentation:

"""
The option required to fix a corrupted database is the gfix -m[end] command. However, it cannot fix all problems and may result in a loss of data.
"""

and

"""
gfix -m[end] database_name

This causes the corruptions in data records to be ignored. While this sounds like a good thing, it is not. Subsequent database actions (such as taking a backup) will not include the corrupted records, leading to data loss.
"""

and

"""
Using the -mend option can lead to silent deletions of data because gfix doesn't care about internal database constraints like foreign keys etc, the -mend option simply says to gfix "go ahead and clean out anything you don't like".
"""

If you are doing anything regularly, it should be gfix -sweep (consider trying that after loading the data). Alternatively, consider dropping and recreating the table before populating. This will avoid any possible overhead associated with garbage collection. This is of course only a viable course of action if there are no dependencies on the table.

--
Mark Rotteveel
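A hedged example of the sweep Mark suggests, run right after the nightly load has finished; credentials and path are placeholders. The sweep cleans up the back versions left behind by the mass DELETE before users start hitting the table.

    gfix -sweep -user SYSDBA -password masterkey /data/claims.fdb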
Re: [firebird-support] Re: Strange behavior on very large table
On 22/02/2019 04:33, my...@techsol.org [firebird-support] wrote:
> >Given that this table is "temporary" storage, one supposes that you
> are deleting rows from it regularly. Do you happen to be deleting
> 900,000 rows each day before you load up the latest batch of 900,000?

How much information is static between cycles? Does all the data NEED to be in the one table? Can you re-configure so that this action becomes 'add new table' ... 'drop old table' ... which will be a lot tidier anyway.

I had a system loading the previous day's actions which initially just kept adding and keeping history, and with a more modern processor and SSD disk we would probably get away with it today ;) But back then it got to a point where the table was too big, and we dropped to just maintaining a month's worth live, with every day in its own table and only the summary tables maintaining the historic counts. All the historic data is still available, and if there was a need it might be interesting to put it all back together into one table ... if I could get past the 'data protection' restrictions :( I am actually expecting a request to make sure it is all destroyed, but it's not directly identifying people from 20 years ago.

--
Lester Caine - G8HFL - Contact - https://lsces.co.uk/wiki/?page=contact
L.S.Caine Electronic Services - https://lsces.co.uk
EnquirySolve - https://enquirysolve.com/
Model Engineers Digital Workshop - https://medw.co.uk
Rainbow Digital Media - https://rainbowdigitalmedia.co.uk
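A very rough sketch of the 'add new table' ... 'drop old table' rotation Lester describes. The table names and columns are invented for illustration, and reporting would have to be pointed at the current day's table (or at summary tables, as in his setup).

    /* each morning: create and load today's table */
    CREATE TABLE CLAIM_LOAD_20190302 (
        CLAIM_ID     INTEGER NOT NULL PRIMARY KEY,
        CLAIM_DATA   VARCHAR(1000),
        CLAIM_AMOUNT NUMERIC(15,2)
    );
    /* ... bulk load the day's CSV into CLAIM_LOAD_20190302 via the external loader ... */

    /* then drop whichever day has aged out of the retention window */
    DROP TABLE CLAIM_LOAD_20190201;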
Re: [firebird-support] Re: Strange behavior on very large table
16 KB page size is ok and is the maximum for FB1.5 to FB3. During the problem time, run gfix -h and show us the results, and also the gstat results for that table and the indexes on it.

P.S. To recreate an index you do not need to set it inactive. You can activate an already active index. But this operation can be done (safely) only when no one else is working on that database. Also, running -mend without looking for errors on the database is not a good option, as you can lose some data.

Regards,
Karol Bieniaszewski
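For example, rebuilding an index in place, as Karol describes, is a single statement (the index name here is hypothetical):

    ALTER INDEX IDX_CLAIM_LOAD_DATE ACTIVE;

As he notes, this is only safe to run when no one else is working in the database.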
[firebird-support] Re: Strange behavior on very large table
Thanks Helen. Just a few clarifications, that might help whittle this down a bit...

>Given that this table is "temporary" storage, one supposes that you are deleting rows from it regularly. Do you happen to be deleting 900,000 rows each day before you load up the latest batch of 900,000?

Yes, that is correct. Each morning it does a "kill & fill" on this table. All 900,000 rows are deleted, then a new set of that data is loaded from a CSV file via an external application. This happens about 2 hours after the system has completed its backup procedures.

>If you're not doing any particular housework on it (restoring from backup and/or resetting the indexes periodically), then it would be normal to expect degrading performance until the next time that housekeeping is done.

For about the last 6 months we have put the database through a regular gfix mend process once a week, followed by a backup of the database and then a restore. This doesn't seem to have changed the behavior, however.

The behavior is quite unpredictable. It seems that any attempt to access the table after the kill & fill has been done, or within a few hours of it, sends the server off into a 100% CPU processor load for hours. But if this is done 6 hours after the kill & fill, then it seems to work just fine. I have also seen the same issues if I attempt to access that table after the database server is rebooted. Not sure if that makes any difference here at all, or signifies anything else that could be going on?

Myles
[firebird-support] Re: Strange behavior on very large table
Thank you for your comment. Very helpful. I ran gstat on the database, and on the table in question. It is the default 16K page size. Are you saying that a larger page size would be more appropriate? If so, how large would you suggest? And is it correct that the only way to change the page size would be to backup the database and then restore it with a page option to change out the size?
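For reference, the usual way to change the page size is indeed a gbak backup followed by a restore with the -p switch. Paths, credentials and the size value below are placeholders; and per Karol's comment elsewhere in the thread, 16 KB is already the maximum for this Firebird version.

    gbak -b -user SYSDBA -password masterkey /data/claims.fdb /backup/claims.fbk
    gbak -c -p 16384 -user SYSDBA -password masterkey /backup/claims.fbk /data/claims_new.fdb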