On 2014-02-28 00:36, Dmitry Kuzmenko wrote:
>
> Hello, Kjell!
>
> Friday, February 28, 2014, 3:23:34 AM, you wrote:
>
> KR> I'm going to do a gbak backup/restore on my 100 GB database this
> KR> weekend, and was wondering if perhaps I should bump up the page size
> KR> from 8 KB to 16 KB.
>
> KR> What should I look for?
>
> KR> I'm running gstat -a -i right now. Are these the appropriate options
> KR> to get the relevant figures?
>
> No. The best way is always gstat -h at minimum, or gstat -r at maximum.
> Only gstat -r shows record size, versions, etc.
> You should look at indices that have a depth greater than 3 (equal to 4
> or more).
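Dmitry's "depth greater than 3" check is easy to automate over `gstat -r` output. A minimal filter, assuming the line formats shown in this thread (exact gstat formatting can vary between Firebird versions, so treat this as a sketch):

```shell
# Reads `gstat -r` output on stdin and prints every index whose B-tree
# depth is 4 or more. Typical use (placeholder database path):
#   gstat -r /data/mydb.fdb | awk '...'
awk '$1 == "Index"  { name = $2 }
     $1 == "Depth:" { d = $2; gsub(",", "", d)
                      if (d + 0 >= 4) print name, d }'
```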
OK, I ran gstat -r and found a few indices with depth = 4, as follows:

-----------------------------------------------
Företag (146)
    Primary pointer page: 231, Index root page: 232
    Average record length: 292.38, total records: 2462168
    Average version length: 0.00, total versions: 0, max versions: 0
    Data pages: 101698, data page slots: 101698, average fill: 92%
    Fill distribution:
         0 - 19% = 0
        20 - 39% = 0
        40 - 59% = 1
        60 - 79% = 3
        80 - 99% = 101694

-- Not searched very often. Inserts are not time critical.
-- Probably not an issue.

    Index IX_Företag_NamnCI (22)
        Depth: 4, leaf buckets: 22405, nodes: 2462168
        Average data length: 34.17, total dup: 345654, max dup: 968
        Fill distribution:
             0 - 19% = 58
            20 - 39% = 15
            40 - 59% = 20446
            60 - 79% = 28
            80 - 99% = 1858

-----------------------------------------------
Uppgift (172)
    Primary pointer page: 284, Index root page: 285
    Average record length: 77.90, total records: 208909973
    Average version length: 0.00, total versions: 0, max versions: 0
    Data pages: 3049518, data page slots: 3049518, average fill: 80%
    Fill distribution:
         0 - 19% = 0
        20 - 39% = 0
        40 - 59% = 0
        60 - 79% = 2001636
        80 - 99% = 1047882

-- Heavily used for both searches and inserts.
-- A performance issue here would have an impact. Most critical for reading.

    Index IX_PK_Uppgift (0)
        Depth: 4, leaf buckets: 314034, nodes: 208909973
        Average data length: 5.01, total dup: 0, max dup: 0
        Fill distribution:
             0 - 19% = 47
            20 - 39% = 0
            40 - 59% = 5952
            60 - 79% = 2785
            80 - 99% = 305250

-- Heavily used for both searches and inserts.
-- A performance issue here would have an impact. Most critical for reading.
    Index IX_Uppgift_Hållare (2)
        Depth: 4, leaf buckets: 434584, nodes: 208909973
        Average data length: 3.95, total dup: 45804696, max dup: 998003
        Fill distribution:
             0 - 19% = 555
            20 - 39% = 0
            40 - 59% = 316053
            60 - 79% = 1476
            80 - 99% = 116500

-----------------------------------------------
Uppgiftshållare (175)
    Primary pointer page: 290, Index root page: 291
    Average record length: 42.23, total records: 166793365
    Average version length: 0.00, total versions: 0, max versions: 0
    Data pages: 1689968, data page slots: 1689968, average fill: 72%
    Fill distribution:
         0 - 19% = 1
        20 - 39% = 0
        40 - 59% = 0
        60 - 79% = 1689967
        80 - 99% = 0

-- Heavily used for both searches and inserts.
-- A performance issue here would have an impact. Most critical for reading.

    Index IX_PK_Uppgiftshållare (0)
        Depth: 4, leaf buckets: 247620, nodes: 166793365
        Average data length: 5.05, total dup: 0, max dup: 0
        Fill distribution:
             0 - 19% = 1
            20 - 39% = 0
            40 - 59% = 141
            60 - 79% = 0
            80 - 99% = 247478

-- Heavily used for both searches and inserts.
-- A performance issue here would have an impact. Most critical for reading.

    Index IX_Uppgiftshållare_HanDS2 (1)
        Depth: 4, leaf buckets: 155606, nodes: 166793365
        Average data length: 0.48, total dup: 151362720, max dup: 46
        Fill distribution:
             0 - 19% = 144
            20 - 39% = 1
            40 - 59% = 40447
            60 - 79% = 619
            80 - 99% = 114395

-- Heavily used for both searches and inserts.
-- A performance issue here would have an impact. Most critical for reading.

    Index UppgiftshållareUnika (3)
        Depth: 4, leaf buckets: 424641, nodes: 166793365
        Average data length: 10.97, total dup: 0, max dup: 0
        Fill distribution:
             0 - 19% = 1669
            20 - 39% = 0
            40 - 59% = 99013
            60 - 79% = 2327
            80 - 99% = 321632
-----------------------------------------------

So, based on that gstat info and my comments for each, would you
recommend trying a 16 KB page size?

> Anyway, you can always compare performance. 100 GB is not such a big
> database that it could not be set back to 8 KB at an appropriate time
> if you find that something becomes slower at 16 KB.
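As a rough sanity check on the largest index above (only an estimate; real depth also depends on key size and page fill): IX_PK_Uppgift averages about 665 nodes per leaf bucket at the current 8 KB page size, and a 16 KB page roughly doubles the per-page fan-out, which is the kind of change that can pull a depth-4 tree back to depth 3:

```shell
# nodes and leaf buckets for IX_PK_Uppgift, taken from the gstat -r
# output above; "fan-out" here means nodes per leaf bucket
awk 'BEGIN { nodes = 208909973; buckets = 314034
             printf "%.0f nodes per leaf bucket at 8 KB pages\n", nodes / buckets
             printf "~%.0f with 16 KB pages (roughly double)\n", 2 * nodes / buckets }'
```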
I know it's not to be regarded as very large compared to many others, but
it's large enough to require careful choices to get adequate performance
on our rather low-budget server. :-)

Regards,
Kjell
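For reference, the page size can only be changed at restore time, so the switch (in either direction) is a full gbak cycle. A sketch with placeholder paths and credentials; gbak's -p switch sets the new page size in bytes on the restore step:

```shell
# Placeholder paths and credentials; adjust for the real server.
DB=/data/mydb.fdb
BK=/backup/mydb.fbk

# Back up the database (gbak reads a consistent snapshot, so the
# database can stay online during the backup).
gbak -b -v -user SYSDBA -password masterkey "$DB" "$BK"

# Restore into a new file with 16 KB pages; valid sizes include
# 4096, 8192 and 16384.
gbak -c -v -p 16384 -user SYSDBA -password masterkey "$BK" /data/mydb_16k.fdb
```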