Re: [PERFORM] Low Performance for big hospital server ..
Greg Stark wrote:
> Frank Wiles <[EMAIL PROTECTED]> writes:
>> Now that's rich. I don't think I've ever seen a database perform
>> worse after it was normalized. In fact, I can't even think of a
>> situation where it could!
>
> Just remember. All generalisations are false.

In general, I would agree.

Sincerely,

Joshua D. Drake

--
Command Prompt, Inc., home of PostgreSQL Replication, and plPHP.
PostgreSQL support, programming, shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
Mammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
Re: [PERFORM] Low Performance for big hospital server ..
Frank Wiles <[EMAIL PROTECTED]> writes:
> Now that's rich. I don't think I've ever seen a database perform
> worse after it was normalized. In fact, I can't even think of a
> situation where it could!

Just remember. All generalisations are false.

--
greg
Re: [PERFORM] Low Performance for big hospital server ..
Hi

On Thu, Jan 06, 2005 at 12:51:14PM -0500, Rod Taylor wrote:
> On Thu, 2005-01-06 at 12:35 -0500, Dave Cramer wrote:
>> Reading can be worse for a normalized db, which is likely what the
>> developers were concerned about.
>
> To a point. Once you have enough data that you start running out of
> space in memory then normalization starts to rapidly gain ground again
> because it's often smaller in size and won't hit the disk as much.

Well, in data warehousing applications you'll often denormalize your
entities because most of the time the access method is a (more or less)
simple select.

Regards,
Yann
Re: [PERFORM] Low Performance for big hospital server ..
In my younger days I denormalized a database for performance reasons and
have paid dearly for it with increased maintenance costs. Adding enhanced
capabilities and new functionality will quickly render denormalization
worse than useless.

--Rick

Frank Wiles <[EMAIL PROTECTED]> wrote on 01/06/2005 12:12 PM:

> On Thu, 6 Jan 2005 09:06:55 -0800 Josh Berkus wrote:
>
>> I can't tell you how many times I've seen this sort of thing. And
>> the developers always tell me "Well, we denormalized for performance
>> reasons ... "
>
> Now that's rich. I don't think I've ever seen a database perform
> worse after it was normalized. In fact, I can't even think of a
> situation where it could!
>
> -
> Frank Wiles <[EMAIL PROTECTED]>
> http://www.wiles.org
> -
Re: [PERFORM] Low Performance for big hospital server ..
On Thu, 2005-01-06 at 12:35 -0500, Dave Cramer wrote:
> Reading can be worse for a normalized db, which is likely what the
> developers were concerned about.

To a point. Once you have enough data that you start running out of
space in memory then normalization starts to rapidly gain ground again
because it's often smaller in size and won't hit the disk as much.

Moral of the story is don't tune with a smaller database than you expect
to have.

> Frank Wiles wrote:
>> Now that's rich. I don't think I've ever seen a database perform
>> worse after it was normalized. In fact, I can't even think of a
>> situation where it could!
Re: [PERFORM] Low Performance for big hospital server ..
Reading can be worse for a normalized db, which is likely what the
developers were concerned about. One always has to be careful to measure
the right thing.

Dave

Frank Wiles wrote:
> On Thu, 6 Jan 2005 09:06:55 -0800 Josh Berkus wrote:
>
>> I can't tell you how many times I've seen this sort of thing. And
>> the developers always tell me "Well, we denormalized for performance
>> reasons ... "
>
> Now that's rich. I don't think I've ever seen a database perform
> worse after it was normalized. In fact, I can't even think of a
> situation where it could!
>
> -
> Frank Wiles <[EMAIL PROTECTED]>
> http://www.wiles.org
> -

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561
Re: [PERFORM] Low Performance for big hospital server ..
Dawid,

> Ahh, the huge update. Below are my "hints" I've found while trying to
> optimize such updates. Divide the update, if possible. This way the
> query uses less memory and you may call VACUUM in between updates. To
> do this, first SELECT INTO TEMPORARY table the list of rows to update
> (their ids or something), and then loop through it to update the values.

There are other ways to deal with this as well; one is by normalizing
the database. Often, I find that massive updates like this are caused by
a denormalized database. For example, Lyris stores its "mailing numbers"
only as repeated numbers in the recipients table. When a mailing is
complete, Lyris updates all of the recipients, up to 750,000 rows in the
case of my client, to indicate the completion of the mailing (it's
actually a little more complicated than that, but the essential problem
is the example).

It would be far better for Lyris to use a separate mailings table, with
a status in that table, which would then require only *one* updated row
to indicate completion, instead of 750,000.

I can't tell you how many times I've seen this sort of thing. And the
developers always tell me "Well, we denormalized for performance
reasons ... "

--
Josh Berkus
Aglio Database Solutions
San Francisco
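A minimal sketch of the normalized layout Josh describes; the table and
column names here are hypothetical, not Lyris's actual schema:

```sql
-- Denormalized: completion status is implied by a value repeated on
-- every recipient row, so finishing a mailing touches up to 750,000
-- tuples:
--   UPDATE recipients SET status = 'complete' WHERE mailing_no = 42;

-- Normalized: the status lives once, in a mailings table.
CREATE TABLE mailings (
    mailing_id  integer PRIMARY KEY,
    status      varchar(16) NOT NULL DEFAULT 'in progress'
);

CREATE TABLE recipients (
    recipient_id integer,
    mailing_id   integer REFERENCES mailings (mailing_id)
);

-- Marking a mailing complete is now a single-row update.
UPDATE mailings SET status = 'complete' WHERE mailing_id = 42;
```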
Re: [PERFORM] Low Performance for big hospital server ..
On Thu, 6 Jan 2005 09:06:55 -0800 Josh Berkus wrote:

> I can't tell you how many times I've seen this sort of thing. And
> the developers always tell me "Well, we denormalized for performance
> reasons ... "

Now that's rich. I don't think I've ever seen a database perform worse
after it was normalized. In fact, I can't even think of a situation
where it could!

-
Frank Wiles <[EMAIL PROTECTED]>
http://www.wiles.org
-
Re: [PERFORM] Low Performance for big hospital server ..
> Ahh, the huge update. Below are my "hints" I've found while trying to
> optimize such updates.
>
> First of all, does this update really change this 'flag'?
> Say, you have the update:
> UPDATE foo SET flag = 4 WHERE [blah];
> Are you sure that flag is always different than 4? If not, then add:
> UPDATE foo SET flag = 4 WHERE flag <> 4 AND [blah];
> This makes sure only tuples which actually need the change will
> receive it. [ IIRC MySQL does this, while PgSQL will always perform
> the UPDATE, regardless of whether it changes anything or not ]
>
> Divide the update, if possible. This way the query uses less memory
> and you may call VACUUM in between updates. To do this, first SELECT
> INTO TEMPORARY table the list of rows to update (their ids or
> something), and then loop through it to update the values.
>
> I guess the problem with huge updates is that until the update is
> finished, the new tuples are not visible, so the old cannot be freed...

Yes, very good point. I must try this and I will give you the result.
Thanks a lot.

Amrit
Thailand
Re: [PERFORM] Low Performance for big hospital server ..
On Wed, 5 Jan 2005 22:35:42 +0700, [EMAIL PROTECTED] wrote:
> Now I turn hyperthreading off and readjust the conf. I found the
> problem query, which was: update one flag of the table [8 million
> records, which I think is not too much]

Ahh, the huge update. Below are my "hints" I've found while trying to
optimize such updates.

First of all, does this update really change this 'flag'?
Say, you have the update:
UPDATE foo SET flag = 4 WHERE [blah];
Are you sure that flag is always different than 4? If not, then add:
UPDATE foo SET flag = 4 WHERE flag <> 4 AND [blah];
This makes sure only tuples which actually need the change will receive
it. [ IIRC MySQL does this, while PgSQL will always perform the UPDATE,
regardless of whether it changes anything or not ]

Divide the update, if possible. This way the query uses less memory and
you may call VACUUM in between updates. To do this, first SELECT INTO
TEMPORARY table the list of rows to update (their ids or something), and
then loop through it to update the values.

I guess the problem with huge updates is that until the update is
finished, the new tuples are not visible, so the old cannot be freed...

Regards,
Dawid
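Dawid's two hints can be sketched as follows. `foo`, `flag`, and the id
ranges are placeholders, and splitting by id range is just one possible
way to divide the work:

```sql
-- Hint 1: skip tuples that already have the target value, so no new
-- tuple version is written for them.
UPDATE foo SET flag = 4 WHERE flag <> 4 AND some_condition;

-- Hint 2: stage the target ids once, then update in slices, vacuuming
-- between slices so the dead tuples can be reclaimed.
SELECT id INTO TEMPORARY todo FROM foo
 WHERE flag <> 4 AND some_condition;

UPDATE foo SET flag = 4
 WHERE flag <> 4 AND some_condition AND id BETWEEN 1 AND 100000;
VACUUM foo;

UPDATE foo SET flag = 4
 WHERE flag <> 4 AND some_condition AND id BETWEEN 100001 AND 200000;
VACUUM foo;
-- ...repeat for the remaining id ranges.
```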
Re: [PERFORM] Low Performance for big hospital server ..
[EMAIL PROTECTED] wrote:
> Now I turn hyperthreading off and readjust the conf. I found the
> problem query, which was: update one flag of the table [8 million
> records, which I think is not too much]. When I turned this query off
> everything went fine. I don't know whether updating the data is much
> slower than inserting [PostgreSQL 7.3.2] and how we could improve the
> update method?

UPDATE is expensive. Under an MVCC setup, it's roughly the equivalent of
DELETE + INSERT of a new record (i.e. the old record is deprecated and a
new version of the record is written). Updating 8 million records would
be very I/O intensive and probably flushes your OS cache, so all other
queries hit disk instead of superfast memory. And if this operation is
run multiple times during the day, you may end up with a lot of dead
tuples in the table, which makes querying it deadly slow.

If it's a dead-tuples issue, you probably have to increase your free
space map and vacuum analyze that specific table more often. If it's an
I/O hit issue, a lazy updating procedure would help if the operation is
not time critical (e.g. load the record keys that need updating and loop
through the records with a time delay.)
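A rough sketch of the dead-tuple remedy described above; the fsm values
are purely illustrative placeholders, not recommendations for this
system (size them from VACUUM VERBOSE output):

```sql
-- In postgresql.conf (illustrative values; restart required):
--   max_fsm_relations = 1000
--   max_fsm_pages     = 200000

-- Then vacuum the heavily updated table more often than the rest;
-- VERBOSE reports how many dead tuples and pages were reclaimed.
VACUUM VERBOSE ANALYZE foo;
```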
Re: [PERFORM] Low Performance for big hospital server ..
Amrit,

Can you post the explain output so we can see what the query does?

Dave

[EMAIL PROTECTED] wrote:
> Now I turn hyperthreading off and readjust the conf. I found the
> problem query, which was: update one flag of the table [8 million
> records, which I think is not too much]. When I turned this query off
> everything went fine. I don't know whether updating the data is much
> slower than inserting [PostgreSQL 7.3.2] and how we could improve the
> update method?
>
> Thanks for all the help.
>
> Amrit
> Thailand

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561
Re: [PERFORM] Low Performance for big hospital server ..
>> Today is the first official day of this week and the system runs
>> better in several points, but there are still some points that need
>> to be corrected. Some queries or some tables are very slow. I think
>> the queries inside the program need to be rewritten.
>> Now I put the sort mem a little bit bigger:
>> sort mem = 16384 (increasing the sort mem has no effect on the slow
>> point even though there are few connections)
>> shared_buffers = 27853
>> effective cache = 12
>
> If I were you I would upgrade from RH 9 to Fedora Core 2 or 3 after
> some initial testing. You'll see a huge improvement of speed on the
> system as a whole. I would try turning hyperthreading off also.

Now I turn hyperthreading off and readjust the conf. I found the problem
query, which was: update one flag of the table [8 million records, which
I think is not too much]. When I turned this query off everything went
fine. I don't know whether updating the data is much slower than
inserting [PostgreSQL 7.3.2] and how we could improve the update method?

Thanks for all the help.

Amrit
Thailand

NB. I would like to give my appreciation to all of the volunteers from
many countries who combat the big disaster [tsunamis] in my country
[Thailand].
Re: [PERFORM] Low Performance for big hospital server ..
>> I will put in more RAM, but someone said RH 9.0 had poor recognition
>> of RAM above 4 Gb?
>
> I think they were referring to 32 bit architectures, not distributions
> as such.

Sorry for the wrong reason; then should I still increase RAM beyond 4 Gb
on a 32 bit architecture?

>> Should I turn off the hyperthreading? Would it make any difference
>> whether hyperthreading is on or off?
>> Thanks for any comment
>
> In my experience, the largest performance increases come from
> intensive analysis and optimisation of queries. Look at the output of
> EXPLAIN ANALYZE for the queries your application is generating and see
> if they can be tuned in any way. More often than not, they can.

So what you mean is that the result is the same whether hyperthreading
is on or off? Will it do any harm if I leave it on? The main point
should be adjusting the queries, right?

> Feel free to ask for assistance on irc at irc.freenode.net
> #postgresql. People there help optimise queries all day ;-).

How could I contact those people ;=> which url?

Thanks again.
Amrit
Thailand
Re: [PERFORM] Low Performance for big hospital server ..
On Tue, 4 Jan 2005 [EMAIL PROTECTED] wrote:
> Today is the first official day of this week and the system runs
> better in several points, but there are still some points that need to
> be corrected. Some queries or some tables are very slow. I think the
> queries inside the program need to be rewritten.
> Now I put the sort mem a little bit bigger:
> sort mem = 16384 (increasing the sort mem has no effect on the slow
> point even though there are few connections)
> shared_buffers = 27853
> effective cache = 12

Even though others have said otherwise, I've had good results from
setting sort_mem higher, even if that is per query.

> I will put in more RAM, but someone said RH 9.0 had poor recognition
> of RAM above 4 Gb?

I think they were referring to 32 bit architectures, not distributions
as such.

> Should I turn off the hyperthreading? Would it make any difference
> whether hyperthreading is on or off?
> Thanks for any comment

In my experience, the largest performance increases come from intensive
analysis and optimisation of queries. Look at the output of EXPLAIN
ANALYZE for the queries your application is generating and see if they
can be tuned in any way. More often than not, they can.

Feel free to ask for assistance on irc at irc.freenode.net #postgresql.
People there help optimise queries all day ;-).

> Amrit
> Thailand

Gavin
Re: [PERFORM] Low Performance for big hospital server ..
Today is the first official day of this week and the system runs better
in several points, but there are still some points that need to be
corrected. Some queries or some tables are very slow. I think the
queries inside the program need to be rewritten.

Now I put the sort mem a little bit bigger:
sort mem = 16384 (increasing the sort mem has no effect on the slow
point even though there are few connections)
shared_buffers = 27853
effective cache = 12

I will put in more RAM, but someone said RH 9.0 had poor recognition of
RAM above 4 Gb?

Should I turn off the hyperthreading? Would it make any difference
whether hyperthreading is on or off?

Thanks for any comment.

Amrit
Thailand
Re: [PERFORM] Low Performance for big hospital server ..
William Yu wrote:
> Dave Cramer wrote:
>> William Yu wrote:
>>> [EMAIL PROTECTED] wrote:
>>>> I will try to reduce shared buffer to 1536 [1.87 Mb].
>>>
>>> 1536 is probably too low. I've tested a bunch of different settings
>>> on my 8GB Opteron server and 10K seems to be the best setting.
>>
>> Be careful here, he is not using Opterons, which can access physical
>> memory above 4G efficiently. Also he only has 4G; the 6-10% rule
>> still applies.
>
> 10% of 4GB is 400MB. 10K buffers is 80MB. Easily less than the 6-10%
> rule.

Correct, I didn't actually do the math; I refrain from giving actual
numbers as every system is different.

>>> To figure out your effective cache size, run top and add
>>> free+cached.
>>
>> My understanding is that effective cache is the sum of shared buffers
>> plus kernel buffers; not sure what free + cached gives you?
>
> Not true. Effective cache size is the free memory available that the
> OS can use for caching for Postgres. In a system that runs nothing but
> Postgres, it's free + cached.

You still need to add in the shared buffers as they are part of the
"effective cache".

Dave

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561
Re: [PERFORM] Low Performance for big hospital server ..
[EMAIL PROTECTED] wrote:
>> I realize you may be stuck with 7.3.x but you should be aware that
>> 7.4 is considerably faster, and 8.0 appears to be even faster yet.
>
> There is a little bit of incompatibility between 7.3 and 8, so it is
> rather difficult to change.
>
>> I would seriously consider upgrading, if at all possible.
>>
>> A few more hints.
>>
>> Random page cost is quite conservative if you have reasonably fast
>> disks. Speaking of fast disks, not all disks are created equal; some
>> RAID drives are quite slow (Bonnie++ is your friend here).
>>
>> Sort memory can be set on a per query basis; I'd consider lowering it
>> quite low and only increasing it when necessary.
>>
>> Which brings us to how to find out when it is necessary.
>> Turn logging on and turn on log_pid and log_duration; then you will
>> need to sort through the logs to find the slow queries.
>
> In standard RH 9.0, if I enable both of the logs [pid, duration],
> where could I look for the result of the log, and would it make the
> system slower?

On a Red Hat system logging is more or less disabled if you used the
rpm; you can set syslog=2 in postgresql.conf and then you will get the
logs in messages.log. Yes, it will make the system slower, but you have
to find out which queries are slow.

Dave

> Amrit
> Thailand

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561
Re: [PERFORM] Low Performance for big hospital server ..
Dave Cramer wrote:
> William Yu wrote:
>> [EMAIL PROTECTED] wrote:
>>> I will try to reduce shared buffer to 1536 [1.87 Mb].
>>
>> 1536 is probably too low. I've tested a bunch of different settings
>> on my 8GB Opteron server and 10K seems to be the best setting.
>
> Be careful here, he is not using Opterons, which can access physical
> memory above 4G efficiently. Also he only has 4G; the 6-10% rule still
> applies.

10% of 4GB is 400MB. 10K buffers is 80MB. Easily less than the 6-10%
rule.

>> To figure out your effective cache size, run top and add free+cached.
>
> My understanding is that effective cache is the sum of shared buffers
> plus kernel buffers; not sure what free + cached gives you?

Not true. Effective cache size is the free memory available that the OS
can use for caching for Postgres. In a system that runs nothing but
Postgres, it's free + cached.
Re: [PERFORM] Low Performance for big hospital server ..
On Monday 03 January 2005 10:40, [EMAIL PROTECTED] wrote:
>> I realize you may be stuck with 7.3.x but you should be aware that
>> 7.4 is considerably faster, and 8.0 appears to be even faster yet.
>
> There is a little bit of incompatibility between 7.3 and 8, so it is
> rather difficult to change.

Sure, but even moving to 7.4 would be a bonus, especially if you use a
lot of "select * from tab where id in (select ...)" type queries, and
the incompatibility is less as well.

>> I would seriously consider upgrading, if at all possible.
>>
>> A few more hints.

One thing I didn't see mentioned that should have been was to watch for
index bloat, which was a real problem on 7.3 machines. You can determine
which indexes are bloated by studying vacuum output or by comparing
index size on disk to table size on disk.

Another thing I didn't see mentioned was to check your free space map
settings. Make sure these are large enough to hold your data...
max_fsm_relations should be larger than the total # of tables you have
in your system (check the archives for the exact query needed) and
max_fsm_pages needs to be big enough to hold all of the pages you use in
a day... this is hard to calculate in 7.3, but if you look at your
vacuum output and add the number of pages cleaned up for all tables,
this could give you a good number to work with. It would certainly tell
you if your setting is too small.

--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL
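One way to do the size comparison Robert describes is through the system
catalogs. A rough sketch; `my_table` is a placeholder, and relpages is
only accurate as of the last VACUUM or ANALYZE:

```sql
-- Compare the on-disk pages of a table against its indexes.
SELECT c.relname, c.relkind, c.relpages
  FROM pg_class c
 WHERE c.relname = 'my_table'
    OR c.oid IN (SELECT indexrelid FROM pg_index
                  WHERE indrelid = 'my_table'::regclass)
 ORDER BY c.relpages DESC;

-- An index taking far more pages than its table is a candidate
-- for REINDEX.
```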
Re: [PERFORM] Low Performance for big hospital server ..
> I realize you may be stuck with 7.3.x but you should be aware that 7.4
> is considerably faster, and 8.0 appears to be even faster yet.

There is a little bit of incompatibility between 7.3 and 8, so it is
rather difficult to change.

> I would seriously consider upgrading, if at all possible.
>
> A few more hints.
>
> Random page cost is quite conservative if you have reasonably fast
> disks. Speaking of fast disks, not all disks are created equal; some
> RAID drives are quite slow (Bonnie++ is your friend here).
>
> Sort memory can be set on a per query basis; I'd consider lowering it
> quite low and only increasing it when necessary.
>
> Which brings us to how to find out when it is necessary.
> Turn logging on and turn on log_pid and log_duration; then you will
> need to sort through the logs to find the slow queries.

In standard RH 9.0, if I enable both of the logs [pid, duration], where
could I look for the result of the log, and would it make the system
slower?

Amrit
Thailand
Re: [PERFORM] Low Performance for big hospital server ..
amrit wrote:
> I have tried to adjust my server for a couple of weeks with some
> success, but it is still slow when the server is under stress in the
> morning from many connections. I use postgresql 7.3.2-1 with RH 9 on a
> machine with 2 Xeon 3.0 Ghz CPUs and 4 Gb of RAM. Since I started to
> use the database server 1 1/2 years ago, after optimizing
> postgresql.conf everything went fine, until a couple of weeks ago when
> my database grew to 3.5 Gb and there were more than 160 concurrent
> connections. The server seems to be slower in the rush hour period
> than before. There is some swap activity too. My top and meminfo are
> shown here below:

Well, you've hit the 'wall'... your system seems to be more or less at
the limit of what 32 bit technology can deliver. If an upgrade to
Opteron and 64 bit is out of the question, here are a couple of new
tactics you can try. Optimizing postgresql.conf can help, but only so
much.

Optimize queries: one big, often overlooked performance gainer is to use
functional indexes to access data from a table. This can save space by
making the index smaller and more efficient. This wins on cache and
speed at the price of some flexibility.

Optimize datums: replace numeric(4) with int2, numeric(6) with int4,
etc. This will save a little space on the tuple, which will ease up on
the cache a bit. Use constraints where necessary to preserve data
integrity.

Materialized views: these can provide an enormous win if you can
incorporate them into your application. With normal views, multiple
backends can share a query plan. With mat-views, backends can share both
the plan and its execution.

Merlin
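A quick sketch of the first two tactics; the table and column names are
made up for illustration:

```sql
-- Functional index: lets a case-insensitive lookup use an index
-- instead of a sequential scan.
CREATE INDEX patient_lower_name_idx ON patient (lower(last_name));
SELECT * FROM patient WHERE lower(last_name) = 'somsak';

-- Datum optimization: int2/int4 store in 2/4 bytes where numeric(4)
-- and numeric(6) take more; a CHECK constraint preserves the old
-- digit-range guarantee.
CREATE TABLE ward (
    ward_no int2 CHECK (ward_no BETWEEN 0 AND 9999),
    bed_no  int4 CHECK (bed_no BETWEEN 0 AND 999999)
);
```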
Re: [PERFORM] Low Performance for big hospital server ..
Amrit,

I realize you may be stuck with 7.3.x but you should be aware that 7.4
is considerably faster, and 8.0 appears to be even faster yet. I would
seriously consider upgrading, if at all possible.

A few more hints.

Random page cost is quite conservative if you have reasonably fast
disks. Speaking of fast disks, not all disks are created equal; some
RAID drives are quite slow (Bonnie++ is your friend here).

Sort memory can be set on a per query basis; I'd consider lowering it
quite low and only increasing it when necessary.

Which brings us to how to find out when it is necessary. Turn logging on
and turn on log_pid and log_duration; then you will need to sort through
the logs to find the slow queries.

There are some special cases where postgresql can be quite slow, and
minor adjustments to the query can improve it significantly. For
instance, pre-8.0:
select * from foo where id = 1;
where id is an int8 will never use an index even if it exists (quoting
the literal, id = '1', works around this).

Regards,
Dave

[EMAIL PROTECTED] wrote:
>> The common wisdom of shared buffers is around 6-10% of available
>> memory. Your proposal below is about 50% of memory. I'm not sure what
>> the original numbers actually meant; they are quite large.
>
> I will try to reduce shared buffer to 1536 [1.87 Mb].
>
>> also effective cache is the sum of kernel buffers + shared_buffers so
>> it should be bigger than shared buffers.
>
> also make the effective cache 2097152 [2 Gb]. I will give you the
> result, because tomorrow [4/12/05] will be the official day of my
> hospital [which has more than 1700 OPD patients/day].
>
>> Also turning hyperthreading off may help; it is unlikely it is doing
>> any good unless you are running a relatively new (2.6.x) kernel.
>
> Why? Could you give me the reason?
>
>> I presume you are vacuuming on a regular basis?
>
> Yes, vacuumdb daily.

--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561
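The int8 quirk Dave mentions can be sketched like this (pre-8.0
planner behavior; `foo` is a placeholder table):

```sql
CREATE TABLE foo (id int8 PRIMARY KEY);

-- Pre-8.0, the bare literal 1 is typed as int4, and the cross-type
-- comparison prevents use of the int8 index:
SELECT * FROM foo WHERE id = 1;        -- sequential scan

-- Quoting or casting the literal lets the planner match the index:
SELECT * FROM foo WHERE id = '1';      -- index scan
SELECT * FROM foo WHERE id = 1::int8;  -- index scan
```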
Re: [PERFORM] Low Performance for big hospital server ..
William Yu wrote:
> [EMAIL PROTECTED] wrote:
>> I will try to reduce shared buffer to 1536 [1.87 Mb].
>
> 1536 is probably too low. I've tested a bunch of different settings on
> my 8GB Opteron server and 10K seems to be the best setting.

Be careful here; he is not using Opterons, which can access physical
memory above 4G efficiently. Also he only has 4G; the 6-10% rule still
applies.

>> also make the effective cache 2097152 [2 Gb]. I will give you the
>> result, because tomorrow [4/12/05] will be the official day of my
>> hospital [which has more than 1700 OPD patients/day].
>
> To figure out your effective cache size, run top and add free+cached.

My understanding is that effective cache is the sum of shared buffers
plus kernel buffers; not sure what free + cached gives you?

>>> Also turning hyperthreading off may help; it is unlikely it is
>>> doing any good unless you are running a relatively new (2.6.x)
>>> kernel.
>>
>> Why? Could you give me the reason?
>
> Pre 2.6, the kernel does not know the difference between logical and
> physical CPUs. Hence, in a dual processor system with hyperthreading,
> it actually sees 4 CPUs. And when assigning processes to CPUs, it may
> assign to 2 logical CPUs in the same physical CPU.

Right, the pre 2.6 kernels don't really know how to handle
hyperthreaded CPUs.

>>> I presume you are vacuuming on a regular basis?
>>
>> Yes, vacuumdb daily.
>
> Do you vacuum table by table or the entire DB? I find over time, the
> system tables can get very bloated and cause a lot of slowdowns just
> due to schema queries/updates. You might want to try a VACUUM FULL
> ANALYZE just on the system tables.

You may want to try this, but regular vacuum analyze should work fine as
long as you have the free space map settings correct. Also be aware that
pre-7.4.x the free space map is not populated on startup, so you should
do a vacuum analyze right after startup.
--
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561
Re: [PERFORM] Low Performance for big hospital server ..
> Decreasing the sort mem too much [8196] makes the performance much
> slower, so I use sort_mem = 16384 and leave effective cache at the
> same value. The result is quite a bit better, but I should wait for
> tomorrow morning [official hours] to see the end result.

You could also profile your queries to see where those big sorts come
from, and maybe add some indexes to try to replace sorts by
index-scans-in-order, which use no temporary memory.

Can you give an example of your queries which make use of big sorts
like this?
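A sketch of the sort-to-index-scan replacement suggested above; the
table and column names are invented for illustration:

```sql
-- Without a suitable index, this ORDER BY needs sort_mem (or spills
-- to disk) to sort every qualifying row:
SELECT patient_id, visit_date
  FROM visits
 ORDER BY patient_id, visit_date;

-- With a matching index, the planner can return rows already in
-- order via an index scan, using no sort memory at all:
CREATE INDEX visits_patient_date_idx
    ON visits (patient_id, visit_date);
```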
Re: [PERFORM] Low Performance for big hospital server ..
Amrit -- >-Original Message- >From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] >Sent: Mon 1/3/2005 12:18 AM >To:Mark Kirkwood >Cc:PGsql-performance >Subject: Re: [PERFORM] Low Performance for big hospital server .. >> shared_buffers = 12000 will use 12000*8192 bytes (i.e about 96Mb). It is >> shared, so no matter how many connections you have it will only use 96M. > >Now I use the figure of 27853 > >> > >> >Will the increasing in effective cache size to arround 20 make a >little >> bit >> >improvement ? Do you think so? >> > >Decrease the sort mem too much [8196] make the performance much slower so I >>use >sort_mem = 16384 >and leave effective cache to the same value , the result is quite better but >I >should wait for tomorrow morning [official hour] to see the end result. > >> > >> I would leave it at the figure you proposed (128897), and monitor your >> performance. >> (you can always increase it later and see what the effect is). >Yes , I use this figure. > >If the result still poor , putting more ram "6-8Gb" [also putting more money >too] will solve the problem ? Adding RAM will almost always help, at least for a while. Our small runitme servers have 2 gigs of RAM; the larger ones have 4 gigs; I do anticipate the need to add RAM as we add users. If you have evaluated the queries that are running and verified that they are using indexes properly, etc., and tuned the other parameters for your system and its disks, adding memory helps because it increases the chance that data is already in memory, thus saving the time to fetch it from disk. Studying performance under load with top, vmstat, etc. and detailed analysis of queries can often trade some human time for the money that extra hardware would cost. Sometimes easier to do than getting downtime for a critical server, as well. 
If you don't have a reliable way of reproducing real loads on a test system, it is best to change things cautiously and observe the system under load; if you change too many things (ideally only 1 at a time, but often that is not possible) you may actually defeat a good change with a bad one; at the least, you may not know which change was the most important one if you make several at once. Best of luck, Greg Williamson DBA GlobeXplorer LLC >Thanks , >Amrit >Thailand ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [PERFORM] Low Performance for big hospital server ..
William Yu wrote: [EMAIL PROTECTED] wrote: Yes , vacuumdb daily. Do you vacuum table by table or the entire DB? I find over time, the system tables can get very bloated and cause a lot of slowdowns just due to schema queries/updates. You might want to try a VACUUM FULL ANALYZE just on the system tables. A REINDEX of the system tables in stand-alone mode might also be in order, even for a 7.4.x database: http://www.postgresql.org/docs/7.4/interactive/sql-reindex.html If a dump-reload-analyze cycle yields significant performance improvements then we know it's due to dead-tuple bloat - either heap tuples or index tuples. Mike Mascari ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [PERFORM] Low Performance for big hospital server ..
[EMAIL PROTECTED] wrote: I will try to reduce shared buffer to 1536 [1.87 Mb]. 1536 is probably too low. I've tested a bunch of different settings on my 8GB Opteron server and 10K seems to be the best setting. Also, effective cache is the sum of kernel buffers + shared_buffers, so it should be bigger than shared buffers. also make the effective cache to 2097152 [2 Gb]. I will give you the result, because tomorrow [4/12/05] will be the official day of my hospital [which has more than 1700 OPD patients/day]. To figure out your effective cache size, run top and add free+cached. Also turning hyperthreading off may help; it is unlikely it is doing any good unless you are running a relatively new (2.6.x) kernel. Why? Could you give me the reason? Pre-2.6, the kernel does not know the difference between logical and physical CPUs. Hence, in a dual processor system with hyperthreading, it actually sees 4 CPUs. And when assigning processes to CPUs, it may assign to 2 logical CPUs in the same physical CPU. I presume you are vacuuming on a regular basis? Yes, vacuumdb daily. Do you vacuum table by table or the entire DB? I find over time, the system tables can get very bloated and cause a lot of slowdowns just due to schema queries/updates. You might want to try a VACUUM FULL ANALYZE just on the system tables. ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
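The "run top and add free+cached" rule of thumb translates directly into an effective_cache_size setting, remembering that the value is expressed in 8 KB pages. The free/cached figures below are invented sample numbers, not taken from Amrit's server:

```python
# Hedged sketch: derive effective_cache_size (8 KB pages) from free + cached.
# free_kb and cached_kb are hypothetical numbers read off a `top`/`free` run.
free_kb, cached_kb = 250_000, 1_850_000
effective_cache_pages = (free_kb + cached_kb) * 1024 // 8192
print("effective_cache_size =", effective_cache_pages)
```

The resulting number goes into postgresql.conf; it only informs the planner's costing and does not allocate any memory itself.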
Re: [PERFORM] Low Performance for big hospital server ..
> shared_buffers = 12000 will use 12000*8192 bytes (i.e. about 96Mb). It is > shared, so no matter how many connections you have it will only use 96M. Now I use the figure of 27853 > >Will increasing the effective cache size to around 20 make a little bit of improvement? Do you think so? Decreasing sort_mem too much [8192] makes the performance much slower, so I use sort_mem = 16384 and leave effective cache at the same value; the result is quite a bit better, but I should wait for tomorrow morning [official hours] to see the end result. > I would leave it at the figure you proposed (128897), and monitor your > performance. > (you can always increase it later and see what the effect is). Yes, I use this figure. If the result is still poor, will putting more RAM "6-8Gb" [also putting in more money] solve the problem? Thanks , Amrit Thailand
Re: [PERFORM] Low Performance for big hospital server ..
[EMAIL PROTECTED] wrote: max_connections = 160 shared_buffers = 2048 [Total = 2.5 Gb.] sort_mem = 8192 [Total = 1280 Mb.] vacuum_mem = 16384 effective_cache_size = 128897 [= 1007 Mb. = 1 Gb.] Will it be more suitable for my server than before? I would keep shared_buffers in the 10000->20000 range, as this is allocated *once* into shared memory, so only uses 80->160 Mb in *total*. You mean that if I increase the shared buffers to around 12000 [160 connections], this will not affect the mem. usage? shared_buffers = 12000 will use 12000*8192 bytes (i.e. about 96Mb). It is shared, so no matter how many connections you have it will only use 96M. The lower sort_mem will help reduce memory pressure (as this is allocated for every backend connection) and this will help performance - *unless* you have lots of queries that need to sort large datasets. If so, then these will hammer your i/o subsystem, possibly canceling any gain from freeing up more memory. So there is a need to understand what sort of workload you have! Will increasing the effective cache size to around 20 make a little bit of improvement? Do you think so? I would leave it at the figure you proposed (128897), and monitor your performance. (you can always increase it later and see what the effect is). regards Mark ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [PERFORM] Low Performance for big hospital server ..
> >max_connections = 160 > >shared_buffers = 2048 [Total = 2.5 Gb.] > >sort_mem = 8192 [Total = 1280 Mb.] > >vacuum_mem = 16384 > >effective_cache_size = 128897 [= 1007 Mb. = 1 Gb.] > >Will it be more suitable for my server than before? > I would keep shared_buffers in the 10000->20000 range, as this is > allocated *once* into shared memory, so only uses 80->160 Mb in *total*. You mean that if I increase the shared buffers to around 12000 [160 connections], this will not affect the mem. usage? > The lower sort_mem will help reduce memory pressure (as this is > allocated for every backend connection) and this will help performance - > *unless* you have lots of queries that need to sort large datasets. If > so, then these will hammer your i/o subsystem, possibly canceling any > gain from freeing up more memory. So there is a need to understand what > sort of workload you have! Will increasing the effective cache size to around 20 make a little bit of improvement? Do you think so? Any comment please, thanks. Amrit Thailand
Re: [PERFORM] Low Performance for big hospital server ..
[EMAIL PROTECTED] wrote: max_connections = 160 shared_buffers = 2048 [Total = 2.5 Gb.] sort_mem = 8192 [Total = 1280 Mb.] vacuum_mem = 16384 effective_cache_size = 128897 [= 1007 Mb. = 1 Gb.] Will it be more suitable for my server than before? I would keep shared_buffers in the 10000->20000 range, as this is allocated *once* into shared memory, so only uses 80->160 Mb in *total*. The lower sort_mem will help reduce memory pressure (as this is allocated for every backend connection) and this will help performance - *unless* you have lots of queries that need to sort large datasets. If so, then these will hammer your i/o subsystem, possibly canceling any gain from freeing up more memory. So there is a need to understand what sort of workload you have! best wishes Mark ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
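Mark's point that the two parameters scale very differently can be checked with quick arithmetic (pages are 8192 bytes; sort_mem is in KB and, in the worst case, every backend can use it at once):

```python
# shared_buffers is allocated once, shared by all backends (8 KB pages).
shared_buffers_pages = 12000
print(shared_buffers_pages * 8192 / 2**20, "MB total")   # ~96 MB as quoted

# sort_mem (KB) is per sort, per backend: worst case scales with connections.
sort_mem_kb, max_connections = 8192, 160
print(sort_mem_kb * max_connections / 1024, "MB worst case")
```

So 12000 shared buffers cost under 100 MB regardless of connection count, while sort_mem = 8192 can, in the worst case, claim 1280 MB across 160 backends, which is why lowering it relieves memory pressure.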
Re: [PERFORM] Low Performance for big hospital server ..
> The common wisdom of shared buffers is around 6-10% of available memory. > Your proposal below is about 50% of memory. > I'm not sure what the original numbers actually meant, they are quite large. I will try to reduce shared buffers to 1536 [1.87 Mb]. > also effective cache is the sum of kernel buffers + shared_buffers so it > should be bigger than shared buffers. I will also make the effective cache 2097152 [2 Gb]. I will give you the result, because tomorrow [4/12/05] will be the official day of my hospital [which has more than 1700 OPD patients/day]. > Also turning hyperthreading off may help, it is unlikely it is doing any > good unless you are running a relatively new (2.6.x) kernel. Why? Could you give me the reason? > I presume you are vacuuming on a regular basis? Yes, vacuumdb daily.
Re: [PERFORM] Low Performance for big hospital server ..
The common wisdom for shared buffers is around 6-10% of available memory. Your proposal below is about 50% of memory. I'm not sure what the original numbers actually meant, they are quite large. Also, effective cache is the sum of kernel buffers + shared_buffers, so it should be bigger than shared buffers. Also turning hyperthreading off may help; it is unlikely it is doing any good unless you are running a relatively new (2.6.x) kernel. I presume you are vacuuming on a regular basis? [EMAIL PROTECTED] wrote: postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 Gb of RAM. You may want to try disabling hyperthreading, if you don't mind rebooting. Can you give me an idea why I should use the SMP kernel instead of the Bigmem kernel [turn off the hyperthreading]? Will it be better to turn it off? grew up to 3.5 Gb and there were more than 160 concurrent connections. Looks like your growing dataset won't fit in your OS disk cache any longer. Isolate your most problematic queries and check out their query plans. I bet you have some sequential scans that used to read from cache but now need to read the disk. An index may help you. More RAM wouldn't hurt. =) I think there may be some query load in our program and I will try to locate it. But if I reduce the config to: max_connections = 160 shared_buffers = 2048 [Total = 2.5 Gb.] sort_mem = 8192 [Total = 1280 Mb.] vacuum_mem = 16384 effective_cache_size = 128897 [= 1007 Mb. = 1 Gb.] Will it be more suitable for my server than before? Thanks for all comments. Amrit Thailand -- Dave Cramer http://www.postgresintl.com 519 939 0336 ICQ#14675561
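Dave's 6-10% rule of thumb, applied to this 4 GB box, gives a rough shared_buffers target range (converted to 8 KB pages for postgresql.conf). The percentages are his heuristic, not a measured optimum:

```python
# Sketch of the "6-10% of RAM" heuristic for shared_buffers on a 4 GB server.
ram_mb = 4096
low_mb, high_mb = int(ram_mb * 0.06), int(ram_mb * 0.10)
low_pages, high_pages = low_mb * 1024 // 8, high_mb * 1024 // 8    # MB -> pages
print(low_mb, "-", high_mb, "MB =>", low_pages, "-", high_pages, "pages")
```

That lands in the 30000-50000 page range, consistent with the 10K-ish settings others in the thread found worked well on similar hardware.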
Re: [PERFORM] Low Performance for big hospital server ..
> > postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 Gb of RAM. > You may want to try disabling hyperthreading, if you don't mind > rebooting. Can you give me an idea why I should use the SMP kernel instead of the Bigmem kernel [turn off the hyperthreading]? Will it be better to turn it off? > > grew up to 3.5 Gb and there were more than 160 concurrent connections. > Looks like your growing dataset won't fit in your OS disk cache any > longer. Isolate your most problematic queries and check out their > query plans. I bet you have some sequential scans that used to read > from cache but now need to read the disk. An index may help you. > > More RAM wouldn't hurt. =) I think there may be some query load in our program and I will try to locate it. But if I reduce the config to: max_connections = 160 shared_buffers = 2048 [Total = 2.5 Gb.] sort_mem = 8192 [Total = 1280 Mb.] vacuum_mem = 16384 effective_cache_size = 128897 [= 1007 Mb. = 1 Gb.] Will it be more suitable for my server than before? Thanks for all comments. Amrit Thailand
Re: [PERFORM] Low Performance for big hospital server ..
On Sun, Jan 02, 2005 at 09:54:32AM +0700, [EMAIL PROTECTED] wrote: > postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 Gb of RAM. You may want to try disabling hyperthreading, if you don't mind rebooting. > grew up to 3.5 Gb and there were more than 160 concurrent connections. Looks like your growing dataset won't fit in your OS disk cache any longer. Isolate your most problematic queries and check out their query plans. I bet you have some sequential scans that used to read from cache but now need to read the disk. An index may help you. More RAM wouldn't hurt. =) -Mike Adler
Re: [PERFORM] Low Performance for big hospital server ..
[EMAIL PROTECTED] wrote: I have tried to adjust my server for a couple of weeks with some success, but it is still slow when the server is under stress in the morning from many connections. I use postgresql 7.3.2-1 with RH 9 on a machine with 2 Xeon 3.0 GHz CPUs and 4 Gb of RAM. Since 1 1/2 yr. ago, when I started to use the database server, after optimizing postgresql.conf everything went fine until a couple of weeks ago, when my database grew to 3.5 Gb and there were more than 160 concurrent connections. The server seems to be slower in the rush-hour period than before. There is some swapping too. My top and meminfo output are shown here below: You might just be running low on RAM - your sort_mem setting means that 160 connections need about 3.1G. Add to that the 256M for your shared_buffers and there may not be much left for the os to use effectively (this could explain the fact that some swap is being used). Is reducing sort_mem an option? regards Mark
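Mark's ~3.1 G figure implies a per-connection sort_mem of around 20 MB. Since the original posting's sort_mem value isn't quoted in this message, the 20000 KB below is an assumption chosen to reproduce his estimate:

```python
# Reconstruction of Mark's worst-case sort memory estimate (values assumed).
sort_mem_kb, connections = 20000, 160   # 20000 KB is a guess matching ~3.1G
total_gb = sort_mem_kb * connections / 2**20    # KB -> GB
print(round(total_gb, 1), "GB of sort memory across", connections, "backends")
```

On a 4 GB machine that worst case alone exceeds RAM once shared_buffers and the OS are accounted for, which is consistent with the swapping Amrit reports.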