Re: [PERFORM] Beginner optimization questions, esp. regarding Tsearch2
The relevant portion of my sysctl.conf file looks like this:

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536

I understood it was a good idea to set shmmax to half of available
memory (2GB in this case). I assume that I need to set shared_buffers
slightly lower than 2GB for postgresql to start successfully.

Carl

On 8/15/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
> On Tue, Aug 15, 2006 at 12:47:54PM -0600, Carl Youngblood wrote:
> > I tried setting it to 2GB and postgres wouldn't start. Didn't
> > investigate in much greater detail as to why it wouldn't start, but
> > after switching it back to 1GB it started fine.
>
> Most likely because you didn't set the kernel's shared memory
> settings high enough.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not match
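As a rough sanity check on the relationship between shmmax and shared_buffers discussed above, the arithmetic can be sketched in shell. The shmmax value is taken from the sysctl.conf in this thread; the 8 kB PostgreSQL block size is the default build setting, and the 10% headroom for PostgreSQL's other shared-memory structures is an assumption for illustration, not a measured figure.

```shell
#!/bin/sh
# Sketch: estimate the largest shared_buffers (in 8 kB pages) that
# still fits under the kernel's shmmax limit. On a live system these
# would be read with `sysctl -n kernel.shmmax`.
shmmax=2147483648          # kernel.shmmax = 2 GB, per this thread
pg_page=8192               # PostgreSQL block size (default build)

# Assumed ~10% headroom for PostgreSQL's other shared structures
# (lock tables, etc.) so the whole segment stays under shmmax.
usable=$(( shmmax * 9 / 10 ))
max_pages=$(( usable / pg_page ))
echo "max shared_buffers ~= ${max_pages} pages"
```

This is consistent with Carl's observation: shared_buffers must come in somewhat under shmmax, because the shared segment holds more than just the buffer pool.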
Re: [PERFORM] Beginner optimization questions, esp. regarding Tsearch2
I tried setting it to 2GB and postgres wouldn't start. Didn't
investigate in much greater detail as to why it wouldn't start, but
after switching it back to 1GB it started fine.

On 8/15/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
> See the recent thread about how old rules of thumb for shared_buffers
> are now completely bunk. With 4G of memory, setting shared_buffers to
> 2G could easily be reasonable. The OP really needs to test several
> different values with their actual workload and see what works best.
Re: [PERFORM] Beginner optimization questions, esp. regarding Tsearch2
By the way, can you please post a link to that thread?

On 8/15/06, Jim C. Nasby [EMAIL PROTECTED] wrote:
> See the recent thread about how old rules of thumb for shared_buffers
> are now completely bunk. With 4G of memory, setting shared_buffers to
> 2G could easily be reasonable. The OP really needs to test several
> different values with their actual workload and see what works best.
Re: [PERFORM] Beginner optimization questions, esp. regarding Tsearch2
Thanks a lot for the advice Richard. I will try those things out and
report back to the list.

Carl

On 8/10/06, Richard Huxton dev@archonet.com wrote:
> From your figures, you're allocating about 64MB to work_mem, which is
> per sort. So, a complex query could use several times that amount. If
> you don't have many concurrent queries that might be what you want.
>
> Also, you've allocated 1GB to your shared_buffers which is more than
> I'd use as a starting point.
>
> You've only mentioned one main table with 100,000 rows, so presumably
> you're going to cache the entire DB in RAM. So, you'll want to
> increase effective_cache_size and reduce random_page_cost.
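Richard's suggestions could translate into something like the following postgresql.conf fragment for an 8.1-era server on this 4 GB machine. The specific numbers are illustrative guesses to be benchmarked against the real workload, not recommendations; note that in this release shared_buffers and effective_cache_size are expressed in 8 kB pages and work_mem in kB.

```
# Hypothetical starting point based on the advice above; benchmark
# and adjust, don't copy blindly.
shared_buffers = 65536          # 512 MB -- below the 1 GB tried above
work_mem = 16384                # 16 MB -- remember this is per sort
effective_cache_size = 262144   # ~2 GB of OS cache, in 8 kB pages
random_page_cost = 2.0          # below the default 4.0, since the
                                # whole DB should sit in RAM
```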
[PERFORM] Beginner optimization questions, esp. regarding Tsearch2 configuration
I'm trying to optimize a resume search engine that is using Tsearch2
indexes. It's running on a dual-Opteron 165 system with 4GB of RAM and
a RAID1 3Gb/sec SATA array. Each text entry is about 2-3K of text, and
there are about 23,000 rows in the search table, with a goal of
reaching about 100,000 rows eventually. I'm running Ubuntu 6.06 amd64
server edition. The RAID array is a software-based Linux array with
LVM on top of it, and the file system for the database mount point is
XFS.

The only optimization I've done so far is to put the following in
/etc/sysctl.conf:

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536

And in postgresql.conf I set the following parameters:

shared_buffers = 131072
work_mem = 65536
max_stack_depth = 4096
max_fsm_pages = 4
max_fsm_relations = 2000

These probably aren't ideal but I was hoping they would perform a
little better than the defaults. I got the following results from a
pgbench script I picked up off the web:

CHECKPOINT = sync
== 10 concurrent users...
transaction type: TPC-B (sort of)
scaling factor: 10
number of clients: 10
number of transactions per client: 100
number of transactions actually processed: 1000/1000
tps = 632.146016 (including connections establishing)
tps = 710.474526 (excluding connections establishing)

Once again I don't know if these results are good or not for my
hardware. I have a couple of questions:

- Does anyone have some good advice for optimizing postgres for
  tsearch2 queries?
- I noticed that there are six different postmaster daemons running.
  Only one of them is taking up a lot of RAM (1076m virtual and 584m
  resident). The second one is using 181m resident while the others
  are less than 20m each. Is it normal to have multiple postmaster
  processes? Even the biggest process doesn't seem to be using near as
  much RAM as I have on this machine. Is that bad?
- What percentage of my physical memory should I expect postgres to
  use for itself? How can I encourage it to cache more query results
  in memory?

Thanks in advance for your time.

Carl Youngblood
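For readers following the tsearch2 question: a typical contrib/tsearch2 setup on a table like the one described might look like the sketch below. The table and column names (resumes, body, body_tsv) are made up for illustration; the thread does not give the real schema.

```sql
-- Hypothetical tsearch2 setup for a resume table (names invented).
-- Add a tsvector column, populate it, and index it with GiST, which
-- is what contrib/tsearch2 used in this era.
ALTER TABLE resumes ADD COLUMN body_tsv tsvector;
UPDATE resumes SET body_tsv = to_tsvector(body);
CREATE INDEX resumes_tsv_idx ON resumes USING gist (body_tsv);
VACUUM ANALYZE resumes;

-- Searches must go through the @@ operator for the index to be used:
-- SELECT id FROM resumes
-- WHERE body_tsv @@ to_tsquery('postgres & dba');
```

Keeping the tsvector column current (via a trigger on INSERT/UPDATE) and running ANALYZE after bulk loads matter as much here as the memory settings discussed above.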