Re: [PERFORM] PostgreSQL as a local in-memory cache
Tom Lane wrote:
> Dave Page writes:
> > On Wed, Jun 23, 2010 at 9:25 PM, Robert Haas wrote:
> >> I don't think we need a system-wide setting for that. I believe that
> >> the unlogged tables I'm working on will handle that case.
> >
> > Aren't they going to be truncated at startup? If the entire system is
> > running without WAL, we would only need to do that in case of an
> > unclean shutdown wouldn't we?
>
> The problem with a system-wide no-WAL setting is it means you can't
> trust the system catalogs after a crash. Which means you are forced to

True, and in fact any postmaster crash could lead to corruption.

> use initdb to recover from any crash, in return for not a lot of savings
> (for typical usages where there's not really much churn in the
> catalogs). I tend to agree with Robert that a way to not log content
> updates for individual user tables is likely to be much more useful in
> practice.

OK, TODO removed.

-- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com

+ None of us is going to be here forever. +

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] PostgreSQL as a local in-memory cache
Dave Page writes:
> On Wed, Jun 23, 2010 at 9:25 PM, Robert Haas wrote:
>> I don't think we need a system-wide setting for that. I believe that
>> the unlogged tables I'm working on will handle that case.

> Aren't they going to be truncated at startup? If the entire system is
> running without WAL, we would only need to do that in case of an
> unclean shutdown wouldn't we?

The problem with a system-wide no-WAL setting is it means you can't
trust the system catalogs after a crash. Which means you are forced to
use initdb to recover from any crash, in return for not a lot of savings
(for typical usages where there's not really much churn in the
catalogs). I tend to agree with Robert that a way to not log content
updates for individual user tables is likely to be much more useful in
practice.

regards, tom lane

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] PostgreSQL as a local in-memory cache
Robert Haas wrote:
> On Wed, Jun 23, 2010 at 3:37 PM, Bruce Momjian wrote:
> > Tom Lane wrote:
> >> Dimitri Fontaine writes:
> >> > Josh Berkus writes:
> >> >> a) Eliminate WAL logging entirely
> >
> > If we eliminate WAL logging, that means a reinstall is required for even
> > a postmaster crash, which is a new non-durable behavior.
> >
> > Also, we just added wal_level = minimal, which might end up being a poor
> > name choice if we want wal_level = off in PG 9.1. Perhaps we should
> > have used wal_level = crash_safe in 9.0.
> >
> > I have added the following TODO:
> >
> >     Consider a non-crash-safe wal_level that eliminates WAL activity
> >         * http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php
>
> I don't think we need a system-wide setting for that. I believe that
> the unlogged tables I'm working on will handle that case.

Uh, will we have some global unlogged setting, like for the system
tables and stuff? It seems like a heavy burden to tell people they have
to create every object as unlogged, and we would still generate log for
things like transaction commits.

-- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com

+ None of us is going to be here forever. +

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] PostgreSQL as a local in-memory cache
On Wed, Jun 23, 2010 at 9:25 PM, Robert Haas wrote:
> On Wed, Jun 23, 2010 at 3:37 PM, Bruce Momjian wrote:
>> Tom Lane wrote:
>>> Dimitri Fontaine writes:
>>> > Josh Berkus writes:
>>> >> a) Eliminate WAL logging entirely
>>
>> If we eliminate WAL logging, that means a reinstall is required for even
>> a postmaster crash, which is a new non-durable behavior.
>>
>> Also, we just added wal_level = minimal, which might end up being a poor
>> name choice if we want wal_level = off in PG 9.1. Perhaps we should
>> have used wal_level = crash_safe in 9.0.
>>
>> I have added the following TODO:
>>
>>     Consider a non-crash-safe wal_level that eliminates WAL activity
>>         * http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php
>
> I don't think we need a system-wide setting for that. I believe that
> the unlogged tables I'm working on will handle that case.

Aren't they going to be truncated at startup? If the entire system is
running without WAL, we would only need to do that in case of an
unclean shutdown wouldn't we?

-- Dave Page EnterpriseDB UK: http://www.enterprisedb.com The Enterprise Postgres Company

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] PostgreSQL as a local in-memory cache
On Wed, Jun 23, 2010 at 3:37 PM, Bruce Momjian wrote:
> Tom Lane wrote:
>> Dimitri Fontaine writes:
>> > Josh Berkus writes:
>> >> a) Eliminate WAL logging entirely
>
> If we eliminate WAL logging, that means a reinstall is required for even
> a postmaster crash, which is a new non-durable behavior.
>
> Also, we just added wal_level = minimal, which might end up being a poor
> name choice if we want wal_level = off in PG 9.1. Perhaps we should
> have used wal_level = crash_safe in 9.0.
>
> I have added the following TODO:
>
>     Consider a non-crash-safe wal_level that eliminates WAL activity
>         * http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php

I don't think we need a system-wide setting for that. I believe that
the unlogged tables I'm working on will handle that case.

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise Postgres Company

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] PostgreSQL as a local in-memory cache
Pavel Stehule wrote:
> 2010/6/23 Bruce Momjian :
> > Tom Lane wrote:
> >> Dimitri Fontaine writes:
> >> > Josh Berkus writes:
> >> >> a) Eliminate WAL logging entirely
> >
> > If we eliminate WAL logging, that means a reinstall is required for even
> > a postmaster crash, which is a new non-durable behavior.
> >
> > Also, we just added wal_level = minimal, which might end up being a poor
> > name choice if we want wal_level = off in PG 9.1. Perhaps we should
> > have used wal_level = crash_safe in 9.0.
> >
> > I have added the following TODO:
> >
> >     Consider a non-crash-safe wal_level that eliminates WAL activity
> >         * http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php
>
> isn't setting fsync to off enough?

Well, testing reported in the thread showed other settings also help,
though the checkpoint lengthening was not tested.

-- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com

+ None of us is going to be here forever. +

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] PostgreSQL as a local in-memory cache
2010/6/23 Bruce Momjian :
> Tom Lane wrote:
>> Dimitri Fontaine writes:
>> > Josh Berkus writes:
>> >> a) Eliminate WAL logging entirely
>
> If we eliminate WAL logging, that means a reinstall is required for even
> a postmaster crash, which is a new non-durable behavior.
>
> Also, we just added wal_level = minimal, which might end up being a poor
> name choice if we want wal_level = off in PG 9.1. Perhaps we should
> have used wal_level = crash_safe in 9.0.
>
> I have added the following TODO:
>
>     Consider a non-crash-safe wal_level that eliminates WAL activity
>         * http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php

isn't setting fsync to off enough?

Regards
Pavel

> Bruce Momjian http://momjian.us
> EnterpriseDB http://enterprisedb.com
>
> + None of us is going to be here forever. +

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] PostgreSQL as a local in-memory cache
Tom Lane wrote:
> Dimitri Fontaine writes:
> > Josh Berkus writes:
> >> a) Eliminate WAL logging entirely
> >> b) Eliminate checkpointing
> >> c) Turn off the background writer
> >> d) Have PostgreSQL refuse to restart after a crash and instead call an
> >> external script (for reprovisioning)
>
> > Well I guess I'd prefer a per-transaction setting, allowing to bypass
> > WAL logging and checkpointing.
>
> Not going to happen; this is all or nothing.
>
> > Forcing the backend to care itself for
> > writing the data I'm not sure is a good thing, but if you say so.
>
> Yeah, I think proposal (c) is likely to be a net loss.
>
> (a) and (d) are probably simple, if by "reprovisioning" you mean
> "rm -rf $PGDATA; initdb". Point (b) will be a bit trickier because
> there are various housekeeping activities tied into checkpoints.
> I think you can't actually remove checkpoints altogether, just
> skip the flush-dirty-pages part.

Based on this thread, I have developed the following documentation patch
that outlines the performance enhancements possible if durability is not
required. The patch also documents that synchronous_commit = false has
potential committed transaction loss from a database crash (as well as
an OS crash).

-- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com

+ None of us is going to be here forever. +

Index: doc/src/sgml/config.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/config.sgml,v
retrieving revision 1.282
diff -c -c -r1.282 config.sgml
*** doc/src/sgml/config.sgml 22 Jun 2010 02:57:49 - 1.282
--- doc/src/sgml/config.sgml 23 Jun 2010 18:53:26 -
***
*** 1463,1469
  really guaranteed to be safe against a server crash. (The maximum
  delay is three times wal_writer_delay.) Unlike fsync, setting this
! parameter to off does not create any risk of database inconsistency:
  a crash might result in some recent allegedly-committed transactions
  being lost, but the database state will be just the same as if those
  transactions had been aborted cleanly.
  So, turning synchronous_commit off

--- 1463,1470
  really guaranteed to be safe against a server crash. (The maximum
  delay is three times wal_writer_delay.) Unlike fsync, setting this
! parameter to off does not create any risk of database inconsistency:
! an operating system or database crash might result in some recent
  allegedly-committed transactions being lost, but the database state
  will be just the same as if those transactions had been aborted
  cleanly. So, turning synchronous_commit off

Index: doc/src/sgml/perform.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/perform.sgml,v
retrieving revision 1.80
diff -c -c -r1.80 perform.sgml
*** doc/src/sgml/perform.sgml 29 May 2010 21:08:04 - 1.80
--- doc/src/sgml/perform.sgml 23 Jun 2010 18:53:26 -
***
*** 1104,1107
--- 1104,1169

+ Non-Durable Settings
+
+ non-durable
+
+ Durability is a database feature that guarantees the recording of
+ committed transactions even if the server crashes or loses
+ power. However, durability adds significant database overhead,
+ so if your site does not require such a guarantee,
+ PostgreSQL can be configured to run
+ much faster. The following are configuration changes you can make
+ to improve performance in such cases; they do not invalidate
+ commit guarantees related to database crashes, only abrupt operating
+ system stoppage, except as mentioned below:
+
+ Place the database cluster's data directory in a memory-backed
+ file system (i.e. RAM disk). This eliminates all
+ database disk I/O, but limits data storage to the amount of
+ available memory (and perhaps swap).
+
+ Turn off fsync; there is no need to flush
+ data to disk.
+
+ Turn off full_page_writes; there is no need
+ to guard against partial page writes.
+
+ Increase checkpoint_segments and checkpoint_timeout; this reduces
+ the frequency of checkpoints, but increases the storage requirements
+ of /pg_xlog.
+
+ Turn off synchronous_commit; there might be no
+ need to write the WAL to disk on every
+ commit. This does affect database crash transaction durability.
-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
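For reference, the settings discussed in the patch above can be collected into a postgresql.conf fragment. This is only an illustrative sketch of a non-durable configuration (parameter names as in PostgreSQL 8.4/9.0; the numeric values are arbitrary examples, not recommendations):

```
# Non-durable configuration sketch -- data loss on ANY crash is accepted.
fsync = off                  # do not force data-file flushes to disk
synchronous_commit = off     # do not wait for WAL flush at commit
full_page_writes = off       # no torn-page protection needed
checkpoint_segments = 64     # checkpoint less often (uses more pg_xlog space)
checkpoint_timeout = 30min
```

As the patch text notes, only synchronous_commit = off preserves consistency after a database crash; the rest assume you can afford to re-initdb.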
Re: [PERFORM] PostgreSQL as a local in-memory cache
Tom Lane wrote:
> Dimitri Fontaine writes:
> > Josh Berkus writes:
> >> a) Eliminate WAL logging entirely

If we eliminate WAL logging, that means a reinstall is required for even
a postmaster crash, which is a new non-durable behavior.

Also, we just added wal_level = minimal, which might end up being a poor
name choice if we want wal_level = off in PG 9.1. Perhaps we should
have used wal_level = crash_safe in 9.0.

I have added the following TODO:

    Consider a non-crash-safe wal_level that eliminates WAL activity
        * http://archives.postgresql.org/pgsql-performance/2010-06/msg00300.php

-- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com

+ None of us is going to be here forever. +

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] WAL+Os on a single disk
On Wed, Jun 23, 2010 at 3:01 PM, Anj Adu wrote: > I have a situation where we are limited by the chassis on the box (and cost). > > We have a 12 x 600G hot swappable disk system (raid 10) > and 2 internal disk ( 2x 146G) > > We would like to maximize storage on the large disks . > > Does it make sense to put the WAL and OS on the internal disks and use > the 12 large disks only for data or should we put the WAL along with > data and leave the OS on the internal disks. > > On our current systems..everything is on a single RAID 10 volume (and > performance is good) > > We are just considering options now that we have the 2 extra disks to spare. I have 16 disks in a server, 2 hot spares, 2 for OS and WAL and 12 for RAID-10. The RAID-10 array hits 100% utilization long before the 2 in a RAID-1 for OS and WAL do. And we log all modifying SQL statements onto the same disk set. So for us, the WAL and OS and logging on the same data set works well. -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
[PERFORM] WAL+Os on a single disk
I have a situation where we are limited by the chassis on the box (and cost). We have a 12 x 600G hot swappable disk system (raid 10) and 2 internal disk ( 2x 146G) We would like to maximize storage on the large disks . Does it make sense to put the WAL and OS on the internal disks and use the 12 large disks only for data or should we put the WAL along with data and leave the OS on the internal disks. On our current systems..everything is on a single RAID 10 volume (and performance is good) We are just considering options now that we have the 2 extra disks to spare. -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] Aggressive autovacuuming ?
On Wed, Jun 23, 2010 at 2:20 PM, Scott Marlowe wrote:
> On Wed, Jun 23, 2010 at 1:58 PM, Robert Haas wrote:
>> On Sun, Jun 20, 2010 at 4:13 PM, Scott Marlowe wrote:
>>>> The largest consequence I can see at the moment is that when I get a
>>>> full vacuum (for preventing transaction-id wraparound) it would be
>>>
>>> I assume you mean the automatic database wide vacuum. I don't think
>>> 8.4 and above need that anymore. I think 8.3 does that too, but I'm
>>> not 100% sure.
>>
>> 8.4 (and 9.0) do still need to do vacuums to freeze tuples before
>> transaction ID wraparound occurs. This is not to be confused with
>> VACUUM FULL, which is something else altogether.
>
> My point was that modern pgsql doesn't need db wide vacuum to prevent
> wrap around anymore, but can vacuum individual relations to prevent
> wraparound.

Oh, I see. I didn't realize we used to do that. Looks like that change
was committed 11/5/2006.

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise Postgres Company

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] Aggressive autovacuuming ?
On Wed, Jun 23, 2010 at 1:58 PM, Robert Haas wrote:
> On Sun, Jun 20, 2010 at 4:13 PM, Scott Marlowe wrote:
>>> The largest consequence I can see at the moment is that when I get a
>>> full vacuum (for preventing transaction-id wraparound) it would be
>>
>> I assume you mean the automatic database wide vacuum. I don't think
>> 8.4 and above need that anymore. I think 8.3 does that too, but I'm
>> not 100% sure.
>
> 8.4 (and 9.0) do still need to do vacuums to freeze tuples before
> transaction ID wraparound occurs. This is not to be confused with
> VACUUM FULL, which is something else altogether.

My point was that modern pgsql doesn't need db wide vacuum to prevent
wrap around anymore, but can vacuum individual relations to prevent
wraparound.

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] Aggressive autovacuuming ?
On Sun, Jun 20, 2010 at 4:13 PM, Scott Marlowe wrote:
>> The largest consequence I can see at the moment is that when I get a
>> full vacuum (for preventing transaction-id wraparound) it would be
>
> I assume you mean the automatic database wide vacuum. I don't think
> 8.4 and above need that anymore. I think 8.3 does that too, but I'm
> not 100% sure.

8.4 (and 9.0) do still need to do vacuums to freeze tuples before
transaction ID wraparound occurs. This is not to be confused with
VACUUM FULL, which is something else altogether.

-- Robert Haas EnterpriseDB: http://www.enterprisedb.com The Enterprise Postgres Company

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
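On 8.4 and later you can see which individual tables are closest to a forced anti-wraparound vacuum by checking the age of each table's relfrozenxid in the catalogs, for example:

```sql
-- Tables closest to forced anti-wraparound (freeze) vacuum
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY xid_age DESC
LIMIT 10;
```

Autovacuum forces a freeze vacuum on a table once this age exceeds autovacuum_freeze_max_age, so these per-relation vacuums replace the old database-wide pass.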
Re: [PERFORM] cpu bound postgresql setup.
Your response somehow landed in the subject line, apparently truncated. I'll extract that to the message body and reply to what made it through. Rajesh Kumar Mallah wrote: > Firstly many thanks for responding. I am concerned because the > load averages have increased and users complaining of slowness. If performance has gotten worse, then something has changed. It would be helpful to know what. More users? New software? Database growth? Database bloat? (etc.) > I do not change settings frequenly. That doesn't mean your current settings can't be changed to make things better. > I was curious if there is any half dead component in th Have you reviewed what shows up if you run (as a database superuser)?: select * from pg_stat_activity; You might want to review this page: http://wiki.postgresql.org/wiki/SlowQueryQuestions -Kevin -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
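A slightly more targeted variant of that query shows which sessions have been running their current statement the longest (column names as in 8.4, e.g. procpid and current_query; later releases renamed them):

```sql
-- Active sessions ordered by how long their current query has run
SELECT procpid, usename, now() - query_start AS runtime, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY runtime DESC;
```

Run it as a superuser a few times while the load spike is happening; a pile-up of long runtimes on the same query text usually points at the culprit.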
[PERFORM] Re: cpu bound postgresql setup. Firstly many thanks for responding. I am concerned because the load averages have increased and users complaining of slowness. I do not change settings freq
On 6/23/10, Kevin Grittner wrote: > Rajesh Kumar Mallah wrote: >> PasteBin for the vmstat output >> http://pastebin.com/mpHCW9gt >> >> On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah >> wrote: >>> Dear List , >>> >>> I observe that my postgresql (ver 8.4.2) dedicated server has >>> turned cpu bound and there is a high load average in the server > >>> 50 usually. >>> The server has 2 Quad Core CPUs already and there are 6 or 8 >>> drives in raid 10 , there is negligable i/o wait. There is 32GB >>> ram and no swapping. >>> >>> When i strace processes at random i see lot of lseek >>> (XXX,0,SEEK_END) calls which i feel were not that frequent >>> before. can any pointers be got for investigating the high cpu >>> usage by postgresql processes. > > I'm not clear on what problem you are experiencing. Using a lot of > your hardware's capacity isn't a problem in itself -- are you > getting poor response time? Poor throughput? Some other problem? > Is it continuous, or only when certain queries run? > > One thing that is apparent is that you might want to use a > connection pool, or if you're already using one you might want to > configure it to reduce the maximum number of active queries. With > eight cores and eight drives, your best throughput is going to be at > somewhere around 24 active connections, and you appear to be going > to at least twice that. > > If you can provide a copy of your postgresql.conf settings (without > comments) and an EXPLAIN ANALYZE of a slow query, along with the > schema information for the tables used by the query, you'll probably > get useful advice on how to adjust your configuration, indexing, or > query code to improve performance. > > -Kevin > -- Sent from Gmail for mobile | mobile.google.com -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] slow index lookup
Anj Adu wrote: > The combination index works great. Would adding the combination > index guarantee that the optimizer will choose that index for > these kind of queries involving the columns in the combination. I > verified a couple of times and it picked the right index. Just > wanted to make sure it does that consistently. It's cost based -- as long as it thinks that approach will be faster, it will use it. -Kevin -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] slow index lookup
The combination index works great. Would adding the combination index guarantee that the optimizer will choose that index for these kind of queries involving the columns in the combination. I verified a couple of times and it picked the right index. Just wanted to make sure it does that consistently. On Tue, Jun 22, 2010 at 7:01 PM, Tom Lane wrote: > Alvaro Herrera writes: >> Excerpts from Anj Adu's message of mar jun 22 17:44:39 -0400 2010: >>> This query seems unreasonable slow on a well-indexed table (13 million >>> rows). Separate indexes are present on guardid_id , from_num and >>> targetprt columns. > >> Maybe you need to vacuum or reindex? > > Rethinking the set of indexes is probably a more appropriate suggestion. > Separate indexes aren't usefully combinable for a case like this --- in > principle the thing could do a BitmapAnd, but the startup time would be > pretty horrid, and the LIMIT 1 is discouraging it from trying that. > If this is an important case to optimize then you need a 3-column index. > > regards, tom lane > -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
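Tom's three-column suggestion would look something like the sketch below (the table name here is a placeholder, since the actual table isn't named in this message; put the most selective equality-tested columns first):

```sql
-- Hypothetical combined index covering all three WHERE predicates
CREATE INDEX some_table_guardid_from_targetprt_idx
    ON some_table (guardid_id, from_num, targetprt);
```

With all three predicates satisfiable by one index, the planner can do a single index scan instead of the BitmapAnd of separate indexes that Tom describes as too expensive to start up.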
Re: [PERFORM] cpu bound postgresql setup.
Rajesh Kumar Mallah wrote: > PasteBin for the vmstat output > http://pastebin.com/mpHCW9gt > > On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah > wrote: >> Dear List , >> >> I observe that my postgresql (ver 8.4.2) dedicated server has >> turned cpu bound and there is a high load average in the server > >> 50 usually. >> The server has 2 Quad Core CPUs already and there are 6 or 8 >> drives in raid 10 , there is negligable i/o wait. There is 32GB >> ram and no swapping. >> >> When i strace processes at random i see lot of lseek >> (XXX,0,SEEK_END) calls which i feel were not that frequent >> before. can any pointers be got for investigating the high cpu >> usage by postgresql processes. I'm not clear on what problem you are experiencing. Using a lot of your hardware's capacity isn't a problem in itself -- are you getting poor response time? Poor throughput? Some other problem? Is it continuous, or only when certain queries run? One thing that is apparent is that you might want to use a connection pool, or if you're already using one you might want to configure it to reduce the maximum number of active queries. With eight cores and eight drives, your best throughput is going to be at somewhere around 24 active connections, and you appear to be going to at least twice that. If you can provide a copy of your postgresql.conf settings (without comments) and an EXPLAIN ANALYZE of a slow query, along with the schema information for the tables used by the query, you'll probably get useful advice on how to adjust your configuration, indexing, or query code to improve performance. -Kevin -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
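The "around 24" figure follows a common rule of thumb (an approximation, not a hard limit): active connections ≈ (cores × 2) + effective spindles. For this box:

```sql
-- 8 cores, 8 drives => roughly 24 concurrently active queries
SELECT (8 * 2) + 8 AS suggested_active_connections;
```

A pooler such as pgbouncer can queue excess clients so the server stays near that sweet spot instead of thrashing among 50+ runnable backends.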
Re: [PERFORM] cpu bound postgresql setup.
PasteBin for the vmstat output http://pastebin.com/mpHCW9gt

On Wed, Jun 23, 2010 at 8:22 PM, Rajesh Kumar Mallah wrote:
> Dear List ,
>
> I observe that my postgresql (ver 8.4.2) dedicated server has turned cpu
> bound and there is a high load average in the server > 50 usually.
> The server has 2 Quad Core CPUs already and there are 6 or 8 drives in
> raid 10 , there is negligible i/o wait. There is 32GB ram and no swapping.
>
> When i strace processes at random i see lot of lseek (XXX,0,SEEK_END) calls
> which i feel were not that frequent before. can any pointers be got
> for investigating the high cpu usage by postgresql processes.
>
> attached is strace out in strace.txt file (sorry if that was not
> allowed, i am not sure)
>
> vmstat output
>
> # vmstat 10
>
> procs ---memory-- ---swap-- -io --system-- -cpu---
> r b swpd free buff cache si so bi bo in cs us sy id wa st
> 13 2 150876 2694612 4804 2491554010 443 2030 0 50 6 39 5 0
> 17 1 150868 3580472 4824 2493131210 1395 803 12951 15403 63 11 22 4 0
> 20 5 150868 3369892 4840 2493818000 1948 1827 12691 14542 79 13 6 2 0
> 8 0 150868 2771920 4856 2496801600 2680 1254 13890 14329 72 11 11 5 0
> 18 2 150864 2454008 4872 2499564000 2530 923 13968 15434 63 10 20 7 0
> 45 3 150860 2367760 4888 2501175600 1338 1327 13203 14580 71 11 16 3 0
> 5 6 150860 1949212 4904 2503305200 1727 1981 13960 15079 73 11 12 5 0
> 27 0 150860 1723104 4920 2504958800 1484 794 13199 13676 73 10 13 3 0
> 28 6 150860 1503888 4928 2506972400 1650 981 12625 14867 75 9 14 2 0
> 8 3 150860 1807744 4944 2508740400 1521 791 13110 15421 69 9 18 4 0
>
> Rajesh Kumar Mallah.
> Avid/Loyal-PostgreSQL user for (past 10 years)

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] raid10 write performance
On Wed, Jun 23, 2010 at 6:06 AM, Ivan Voras wrote: > On 06/22/10 16:40, Greg Smith wrote: >> Grzegorz Jaśkiewicz wrote: >>> raid: serveRAID M5014 SAS/SATA controller >>> >> >> Do the "performant servers" have a different RAID card? This one has >> terrible performance, and could alone be the source of your issue. The >> ServeRAID cards are slow in general, and certainly slow running RAID10. > > What are some good RAID10 cards nowadays? LSI, Areca, 3Ware (now LSI I believe) > On the other hand, RAID10 is simple enough that soft-RAID > implementations should be more than adequate - any ideas why a dedicated > card has it "slow"? This is mostly a problem with some older cards that focused on RAID-5 performance, and RAID-10 was an afterthought. On many of these cards (older PERCs for instance) it was faster to either use a bunch of RAID-1 pairs in hardware with RAID-0 in software on top, or put the thing into JBOD mode and do it all in software. -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] raid10 write performance
On Wed, Jun 23, 2010 at 8:25 AM, Ivan Voras wrote: > On 06/23/10 14:00, Florian Weimer wrote: >> * Ivan Voras: >> >>> On the other hand, RAID10 is simple enough that soft-RAID >>> implementations should be more than adequate - any ideas why a dedicated >>> card has it "slow"? >> >> Barrier support on RAID10 seems to require some smallish amount of >> non-volatile storage which supports a high number of write operations >> per second, so a software-only solution might not be available. > > If I understand you correctly, this can be said in general for all > spinning-disk usage and is not specific to RAID10. (And in the case of > high, constant TPS, no amount of NVRAM will help you). Not entirely true. Let's say you have enough battery backed cache to hold 10,000 transaction writes in memory at once. The RAID controller can now re-order those writes so that they go from one side of the disk to the other, instead of randomly all over the place. That will most certainly help improve your throughput. -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] raid10 write performance
On Wed, 23 Jun 2010, Ivan Voras wrote:
> On 06/23/10 14:00, Florian Weimer wrote:
>> Barrier support on RAID10 seems to require some smallish amount of
>> non-volatile storage which supports a high number of write operations
>> per second, so a software-only solution might not be available.
>
> If I understand you correctly, this can be said in general for all
> spinning-disk usage and is not specific to RAID10. (And in the case of
> high, constant TPS, no amount of NVRAM will help you).

No. Write barriers work fine with a single disc, assuming it is set up
correctly. The barrier is a command telling the disc to make sure that
one piece of data is safe before starting to write another piece of
data. However, as soon as you have multiple discs, the individual discs
do not have a way of communicating with each other to make sure that the
first piece of data is written before the other. That's why you need a
little bit of non-volatile storage to mediate that to properly support
barriers.

Of course, from a performance point of view, yes, you need some NVRAM on
any kind of spinning storage to maintain high commit rates.

Matthew

-- I wouldn't be so paranoid if you weren't all out to get me!!

-- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] raid10 write performance
On 06/23/10 14:00, Florian Weimer wrote: > * Ivan Voras: > >> On the other hand, RAID10 is simple enough that soft-RAID >> implementations should be more than adequate - any ideas why a dedicated >> card has it "slow"? > > Barrier support on RAID10 seems to require some smallish amount of > non-volatile storage which supports a high number of write operations > per second, so a software-only solution might not be available. If I understand you correctly, this can be said in general for all spinning-disk usage and is not specific to RAID10. (And in the case of high, constant TPS, no amount of NVRAM will help you). -- Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance
Re: [PERFORM] raid10 write performance
* Ivan Voras: > On the other hand, RAID10 is simple enough that soft-RAID > implementations should be more than adequate - any ideas why a dedicated > card has it "slow"? Barrier support on RAID10 seems to require some smallish amount of non-volatile storage which supports a high number of write operations per second, so a software-only solution might not be available. -- Florian Weimer BFK edv-consulting GmbH http://www.bfk.de/ Kriegsstraße 100 tel: +49-721-96201-1 D-76133 Karlsruhe fax: +49-721-96201-99
Re: [PERFORM] Low performance SUM and Group by large database
Craig, Russel, I appreciate your help. Thanks. 2010/6/22 Russell Smith > On 22/06/10 00:42, Sergio Charpinel Jr. wrote: > > Hi, > > > [snip] > > > > => explain analyze SELECT ip_src, port_src, ip_dst, port_dst, > > tcp_flags, ip_proto,SUM("bytes"),SUM("packets"),SUM("flows") FROM > > "acct_2010_25" WHERE "stamp_inserted">='2010-06-20 10:10' AND > > "stamp_inserted"<'2010-06-21 10:10' GROUP BY ip_src, port_src, ip_dst, > > port_dst, tcp_flags, ip_proto order by SUM(bytes) desc LIMIT 50 OFFSET 0; > > > >QUERY PLAN > > > > > -- > > Limit (cost=3998662.81..3998662.94 rows=50 width=50) (actual > > time=276981.107..276981.133 rows=50 loops=1) > >-> Sort (cost=3998662.81..4001046.07 rows=953305 width=50) > > (actual time=276981.105..276981.107 rows=50 loops=1) > > Sort Key: sum(bytes) > > -> GroupAggregate (cost=3499863.27..3754872.33 rows=953305 > > width=50) (actual time=165468.257..182677.580 rows=8182616 loops=1) > >-> Sort (cost=3499863.27..3523695.89 rows=9533049 > > width=50) (actual time=165468.022..168908.828 rows=9494165 loops=1) > > Sort Key: ip_src, port_src, ip_dst, port_dst, > > tcp_flags, ip_proto > > You are having to sort and aggregate a large number of rows before you > can get the top 50. That's 9 million rows in this case, width 50 = > 400MB+ sort. That's going to be slow as you are going to have to sort > it on disk unless you bump up sort mem to 500Mb (bad idea). So unless > you have really fast storage for temporary tables it's going to take a > while. About 2.5 minutes you are experiencing at the moment is probably > not too bad. > > I'm sure improvements have been made in the area since 8.1 and if you > are able to upgrade to 8.4 which is also offered by Centos5 now, you > might get benefit there. I can't remember the specific benefits, but I > believe sorting speed has improved, your explain analyze will also give > you more information about what's going on with disk/memory sorting. 
> > > -> Seq Scan on acct_2010_25 > > (cost=0.00..352648.10 rows=9533049 width=50) (actual > > time=0.038..50860.391 rows=9494165 loops=1) > >Filter: ((stamp_inserted >= '2010-06-20 > > 10:10:00'::timestamp without time zone) AND (stamp_inserted < > > '2010-06-21 10:10:00'::timestamp without time zone)) > > Total runtime: 278791.661 ms > > (9 registros) > > > > Another one just summing bytes (still low): > > > > => explain analyze SELECT ip_src, port_src, ip_dst, port_dst, > > tcp_flags, ip_proto,SUM("bytes") FROM "acct_2010_25" WHERE > > "stamp_inserted">='2010-06-20 10:10' AND "stamp_inserted"<'2010-06-21 > > 10:10' GROUP BY ip_src, port_src, ip_dst, port_dst, tcp_flags, > > ip_proto LIMIT 50 OFFSET 0; > > > > QUERY PLAN > > > > > > > Limit (cost=3395202.50..3395213.12 rows=50 width=42) (actual > > time=106261.359..106261.451 rows=50 loops=1) > >-> GroupAggregate (cost=3395202.50..3602225.48 rows=974226 > > width=42) (actual time=106261.357..106261.435 rows=50 loops=1) > > -> Sort (cost=3395202.50..3419558.14 rows=9742258 width=42) > > (actual time=106261.107..106261.169 rows=176 loops=1) > >Sort Key: ip_src, port_src, ip_dst, port_dst, > > tcp_flags, ip_proto > >-> Seq Scan on acct_2010_25 (cost=0.00..367529.72 > > rows=9742258 width=42) (actual time=0.073..8058.598 rows=9494165 loops=1) > > Filter: ((stamp_inserted >= '2010-06-20 > > 10:10:00'::timestamp without time zone) AND (stamp_inserted < > > '2010-06-21 10:10:00'::timestamp without time zone)) > > Total runtime: 109911.882 ms > > (7 registros) > > > > > > The server has 2 Intel(R) Xeon(R) CPU E5430 @ 2.66GHz and 16GB RAM. > > I'm using PostgreSQL 8.1.18 default config from Centos 5.5 (just > > increased checkpoint_segments to 50). > > Checkpoint segments won't help you as the number of segments is about > writing to the database and how fast that can happen. > > > > > What can I change to increase performance? 
> > Increasing sort-memory (work_mem) will give you speed benefits even > though you are going to disk. I don't know how much spare memory you > have, but trying other values between 8MB and 128MB may be useful just > for the specific query runs. If you can afford 512Mb for each of the > two sorts, go for that, but it's dangerous as mentioned due to the risk > of using more RAM than you have. work_mem allocates that amount of > memory per sort. > > If you are running these queries all the time, a summary table that > produces these reports on a regular basis, m
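Russell's summary-table suggestion can be sketched as follows. This uses SQLite purely for illustration, with a hypothetical schema loosely modeled on acct_2010_25; on PostgreSQL the same CREATE TABLE ... AS SELECT (refreshed from cron, or maintained by triggers) would let the report hit a small pre-aggregated table instead of sorting 9 million rows each time:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Raw flow table, modeled loosely on acct_2010_25 (hypothetical schema).
cur.execute("""CREATE TABLE acct (
    ip_src TEXT, ip_dst TEXT, bytes INTEGER, stamp_inserted TEXT)""")
cur.executemany("INSERT INTO acct VALUES (?, ?, ?, ?)", [
    ("10.0.0.1", "10.0.0.2", 100, "2010-06-20 10:15"),
    ("10.0.0.1", "10.0.0.2", 250, "2010-06-20 11:00"),
    ("10.0.0.3", "10.0.0.2",  50, "2010-06-20 12:30"),
])

# Summary table refreshed on a schedule (e.g. hourly via cron):
# the expensive GROUP BY runs once per refresh, not per report query.
cur.execute("""CREATE TABLE acct_summary AS
    SELECT ip_src, ip_dst, SUM(bytes) AS bytes
    FROM acct GROUP BY ip_src, ip_dst""")

# The "top 50 by traffic" report now scans the much smaller summary.
top = cur.execute(
    "SELECT ip_src, ip_dst, bytes FROM acct_summary"
    " ORDER BY bytes DESC LIMIT 50"
).fetchall()
print(top[0])  # -> ('10.0.0.1', '10.0.0.2', 350)
```

The trade-off is staleness: the report is only as fresh as the last refresh, which is usually acceptable for traffic accounting.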
Re: [PERFORM] raid10 write performance
On 06/22/10 16:40, Greg Smith wrote: > Grzegorz Jaśkiewicz wrote: >> raid: serveRAID M5014 SAS/SATA controller >> > > Do the "performant servers" have a different RAID card? This one has > terrible performance, and could alone be the source of your issue. The > ServeRAID cards are slow in general, and certainly slow running RAID10. What are some good RAID10 cards nowadays? On the other hand, RAID10 is simple enough that soft-RAID implementations should be more than adequate - any ideas why a dedicated card has it "slow"?