Re: [PERFORM] Regression: 8.3 2 seconds -> 8.4 100+ seconds

2010-11-08 Thread Francisco Reyes
Robert Haas writes: This part looks really strange to me. Here we have a nested loop whose outer side is estimated to produce 33 rows and whose inner side is estimated to produce 2 rows. We have retained someone to help us troubleshoot the issue. Once the issue has been resolved I will make s

Re: [PERFORM] How does PG know if data is in memory?

2010-10-27 Thread Francisco Reyes
Greg Smith writes: heard privately from two people who have done similar experiments on Linux and found closer to 8GB to be the point where performance started So on a machine with 72GB, is 8GB still the recommended value? Usually have only 10 to 20 connections.

[PERFORM] Regression: 8.3 2 seconds -> 8.4 100+ seconds

2010-10-27 Thread Francisco Reyes
CentOS 5.4 and 5.5 Query SELECT sum(usramt) as givensum, sum (case when usedid > 0 then usramt else 0 end) as usedsum FROM argrades WHERE userstatus in (5) and membid in (select distinct members.membid from members, cards where members.membid =

Re: [PERFORM] 3ware vs. MegaRAID

2010-03-31 Thread Francisco Reyes
Ireneusz Pluta writes: I am waiting for an ordered machine dedicated to PostgreSQL. It was expected to have 3ware 9650SE 16 port controller. However, the vendor wants to replace this controller with MegaRAID SAS 84016E, because, as I have had better luck getting 3ware management tools to wo

Re: [PERFORM] 10K vs 15k rpm for analytics

2010-03-03 Thread Francisco Reyes
Yeb Havinga writes: controllers. Also, I am not sure if it is wise to put the WAL on the same logical disk as the indexes, If I only have two controllers would it then be better to put WAL on the first along with all the data and the indexes on the external? Especially since the external encl

Re: [PERFORM] 10K vs 15k rpm for analytics

2010-03-02 Thread Francisco Reyes
Greg Smith writes: http://www.3ware.com/KB/Article.aspx?id=15383 I consider them still a useful vendor for SATA controllers, but would never buy a SAS solution from them again until this is resolved. Who are you using for SAS? One thing I like about 3ware is their management utility works u

Re: [PERFORM] 10K vs 15k rpm for analytics

2010-03-02 Thread Francisco Reyes
Scott Marlowe writes: While 16x15k older drives doing 500Meg seems only a little slow, the 24x10k drives getting only 400MB/s seems way slow. I'd expect a RAID-10 of those to read at somewhere in or just past the gig per Talked to the vendor. The likely issue is the card. They used a single c

Re: [PERFORM] 10K vs 15k rpm for analytics

2010-03-02 Thread Francisco Reyes
da...@lang.hm writes: what filesystem is being used. There is a thread on the linux-kernel mailing list right now showing that ext4 seems to top out at ~360MB/sec while XFS is able to go to 500MB/sec+ EXT3 on CentOS 5.4 Plan to try and see if I have time with the new machines to try FreeBSD+

Re: [PERFORM] 10K vs 15k rpm for analytics

2010-03-02 Thread Francisco Reyes
Scott Marlowe writes: Have you tried short stroking the drives to see how they compare then? Or is the reduced primary storage not a valid path here? No, have not tried it. By the time I got the machine we needed it in production so could not test anything. When the 2 new machines come I

Re: [PERFORM] 10K vs 15k rpm for analytics

2010-03-02 Thread Francisco Reyes
Greg Smith writes: in a RAID10, given proper read-ahead adjustment. I get over 200MB/s out of the 3-disk RAID0 on my home server without even trying hard. Can you Any links/suggested reading on "read-ahead adjustment"? I understand this may be OS specific, but any info would be helpful.

Re: [PERFORM] 10K vs 15k rpm for analytics

2010-03-02 Thread Francisco Reyes
Greg Smith writes: in a RAID10, given proper read-ahead adjustment. I get over 200MB/s out of the 3-disk RAID0 Any links/suggested reads on read-ahead adjustment? It will probably be OS dependent, but any info would be useful.

Re: [PERFORM] 10K vs 15k rpm for analytics

2010-03-02 Thread Francisco Reyes
da...@lang.hm writes: With sequential scans you may be better off with the large SATA drives as they fit more data per track and so give great sequential read rates. I lean more towards SAS because of writes. One common thing we do is create temp tables.. so a typical pass may be: * sequential

Re: [PERFORM] 10K vs 15k rpm for analytics

2010-03-02 Thread Francisco Reyes
Scott Marlowe writes: Then the real thing to compare is the speed of the drives for throughput not rpm. In a machine similar to what I plan to buy, already in house, 24 x 10K rpm gives me about 400MB/sec while 16 x 15K rpm (2 to 3 year old drives) gives me about 500MB/sec

Re: [PERFORM] 10K vs 15k rpm for analytics

2010-03-02 Thread Francisco Reyes
Yeb Havinga writes: With 24 drives it'll probably be the controller that is the limiting factor of bandwidth. Going with a 3Ware SAS controller. Our HP SAN controller with 28 15K drives delivers 170MB/s at maximum with raid 0 and about 155MB/s with raid 1+0. Already have similar machine

[PERFORM] 10K vs 15k rpm for analytics

2010-03-02 Thread Francisco Reyes
Does anyone have any experience doing analytics with Postgres? In particular, are 10K rpm drives good enough vs using 15K rpm, over 24 drives? Price difference is $3,000. Rarely ever have more than 2 or 3 connections to the machine. So far, from what I have seen, throughput is more important than TP

Re: [PERFORM] A guide/tutorial to performance monitoring and tuning

2008-07-21 Thread Francisco Reyes
On 2:59 pm 06/29/08 Greg Smith <[EMAIL PROTECTED]> wrote: > Right now I'm working with a few other people to put together a more > straightforward single intro guide that should address some of the > vagueness you point out here, Was that ever completed?

Re: [PERFORM] An "obvious" index not being used

2008-07-18 Thread Francisco Reyes
Daniele Varrazzo writes: I suspect the foo.account_id statistical data are not used at all in query: the query planner can only estimate the number of accounts to look for, not You mentioned you bumped your default_statistics_target. What did you increase it to? My data sets are so "strange"
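
For context, per-column statistics targets can be raised without touching the global default; a minimal sketch using the foo.account_id column mentioned in the quoted message (the value 1000 is only an illustration):

    -- Collect a larger sample for the skewed column, then refresh the statistics.
    ALTER TABLE foo ALTER COLUMN account_id SET STATISTICS 1000;
    ANALYZE foo;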

Re: [PERFORM] 3ware vs Areca

2008-07-18 Thread Francisco Reyes
Jeffrey Baker writes: Their firmware is, frankly, garbage. In more than one instance we have had the card panic when a disk fails, which is obviously counter to the entire purpose of a RAID. I have had similar problems with 3ware 9550 and 9650 cards. Under FreeBSD I have seen constant crashe

Re: [PERFORM] RAID controllers for Postgresql on large setups

2008-05-13 Thread Francisco Reyes
PFC writes: You say that like you don't mind having PCI in a server whose job is to perform massive query over large data sets. I am in my 4th week at a new job. Trying to figure what I am working with. From what I see I will likely get as much improvement from new hardware as from re-doing

Re: [PERFORM] RAID controllers for Postgresql on large setups

2008-05-12 Thread Francisco Reyes
Joshua D. Drake writes: Most likely you have a scsi onboard as well I am guessing. Will check. shouldn't bother with the 2120. My tests show it is a horrible controller for random writes. Thanks for the feedback.. Comparing software raid on an LSI onboard for an IBM 345 versus a 2120s u

[PERFORM] RAID controllers for Postgresql on large setups

2008-05-12 Thread Francisco Reyes
Inherited a number of servers and I am starting to look into the hardware. So far, what I know from a few of the servers: Redhat servers, 15K rpm disks, 12GB to 32GB of RAM, Adaptec 2120 SCSI controller (64MB of cache). The servers mostly have 12 drives in RAID 10. We are going to redo one m

Re: [PERFORM] Vacuum statistics

2008-05-03 Thread Francisco Reyes
[EMAIL PROTECTED] writes: What version of Postgres you are running ? 8.2 If you are using 8.3, you can use pg_stat_all_tables. If not, you can use http://www.postgresql.org/docs/current/static/pgstattuple.html pgstattuple is al

[PERFORM] Vacuum statistics

2008-04-29 Thread Francisco Reyes
I recall reading posts in the past where one could query the stat tables and see how well autovacuum was performing. Not finding the posts. I found this query: SELECT relname, relkind, reltuples, relpages FROM pg_class where relkind = 'r'; From the output how can I tell the number of dead tu
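
As a rough sketch of the kind of answer being asked for: on 8.3 and later the statistics collector tracks dead tuples per table, and on older releases the contrib pgstattuple module can report them (the table name below is a placeholder):

    -- 8.3+: dead-tuple counts and last (auto)vacuum times per table.
    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC;

    -- Older releases: contrib/pgstattuple reports dead tuples one table at a time.
    SELECT * FROM pgstattuple('some_table');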

Re: [PERFORM] Group by more efficient than distinct?

2008-04-20 Thread Francisco Reyes
PFC writes: - If you process up to some percentage of your RAM worth of data, hashing is going to be a lot faster Thanks for the excellent breakdown and explanation. I will try and get sizes of the tables in question and how much memory the machines have. - If you need DISTINCT ON

Re: [PERFORM] Group by more efficient than distinct?

2008-04-20 Thread Francisco Reyes
Gregory Stark writes: HashAggregate needs to store more values in memory at the same time so it's not a good plan if you have a lot of distinct values. So far the resulting number of rows were in the thousands, the source data were in the hundreds of thousands, and the group by was faste

[PERFORM] Group by more efficient than distinct?

2008-04-17 Thread Francisco Reyes
I am trying to get a distinct set of rows from 2 tables. After looking at someone else's query I noticed they were doing a group by to obtain the unique list. After comparing on multiple machines with several tables, it seems using group by to obtain a distinct list is substantially faster tha
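
For readers new to the thread, the two forms being compared look like this (hypothetical table and column); on the releases current at the time, DISTINCT always sorted while GROUP BY could also use a HashAggregate, which is where the speed difference came from:

    -- Both return the same unique list of values.
    SELECT DISTINCT customer_id FROM orders;

    SELECT customer_id FROM orders GROUP BY customer_id;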

Re: [PERFORM] WALL on controller without battery?

2007-07-11 Thread Francisco Reyes
Greg Smith writes: During peak operation there will be about 5 to 20 updates per second with a handfull of reads. There really is no reason you need to be concerned about WAL from a performance perspective if this is your expected workload. I was able to get the second controller with batt

Re: [PERFORM] WALL on controller without battery?

2007-07-11 Thread Francisco Reyes
Joshua D. Drake writes: Without a BBU you are guaranteed at some point to have catastrophic failure unless you turn off write cache, which would then destroy your performance. I am re-working the specs of the machine to try and get a 4port 3ware to have the battery backup.

Re: [PERFORM] WALL on controller without battery?

2007-07-11 Thread Francisco Reyes
Alan Hodgson writes: I would spring for a 4-port with a BBU, though, and then put the WAL on the drives with the OS. The machine is already over budget. :-( I will check the price difference but unlikely I will get approval.

[PERFORM] WALL on controller without battery?

2007-07-11 Thread Francisco Reyes
Setting spec for a postgresql server. The hard drive distribution is going to be 8 x 750GB Seagate on a 3ware 9650SE RAID 6, and 2 x 160GB Seagate on a 3ware 2 port. The question is, would I be better off putting WAL on the second (OS) controller or on the 8 port controller? Especially since the 2 port

Re: [PERFORM] Hardware suggestions

2007-06-22 Thread Francisco Reyes
Greg Smith writes: Unfortunately the existence of the RAID-6 capable Adaptec 2820SA proves this isn't always the case. For sata 3ware and Areca seem to perform well with raid 6 (from the few posts I have read on the subject). Don't know of SCSI controllers though.

Re: [PERFORM] Database-wide VACUUM ANALYZE

2007-06-21 Thread Francisco Reyes
Steven Flatt writes: Can someone explain what is going on here? I can't quite figure it out based on the docs. Are you on FreeBSD by any chance? I think the FreeBSD port by default installs a script that does a daily vacuum. If using another OS, perhaps you want to see if you used some sor

Re: [PERFORM] Hardware suggestions

2007-06-21 Thread Francisco Reyes
Scott Marlowe writes: and a bit more resilient to drive failure, RAID-5 can give you a lot of storage and very good read performance, so it works well for reporting / New controllers now also have Raid 6, which from the few reports I have seen seems to have a good compromise of performance a

Re: [PERFORM] Performance query about large tables, lots of concurrent access

2007-06-20 Thread Francisco Reyes
Karl Wright writes: Okay - I started a VACUUM with the 8.1 database yesterday morning, having the database remain under load. As of 12:30 today (~27 hours), the original VACUUM was still running. At that point: I don't recall if you said it already, but what is your maintenance_work_mem?
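
As background, maintenance_work_mem can be raised for just the session running the manual VACUUM, which reduces the number of passes over the indexes on very large tables; a sketch with illustrative values and a placeholder table name (8.1 takes the setting in kilobytes):

    SET maintenance_work_mem = 524288;   -- roughly 512MB, for this session only
    VACUUM VERBOSE some_big_table;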

Re: [PERFORM] PostgreSQL Configuration Tool for Dummies

2007-06-19 Thread Francisco Reyes
Campbell, Lance writes: max_connections Shouldn't that come straight from the user?

Re: [PERFORM] Best use of second controller with faster disks?

2007-06-19 Thread Francisco Reyes
Vivek Khera writes: no file will ever be larger than 1Gb I didn't need to make any adjustments to the newfs parameters. You should consider using "newfs -i 65536" for partitions to be used for postgresql. You will get more usable space and will still have lots of free inodes. For my next

Re: [PERFORM] PostgreSQL Configuration Tool for Dummies

2007-06-19 Thread Francisco Reyes
Campbell, Lance writes: For the "6) Are your searches:" How about having "many simple"

Re: [PERFORM] Performance query about large tables, lots of concurrent access

2007-06-19 Thread Francisco Reyes
Heikki Linnakangas writes: On a serious note, the index vacuum improvements in 8.2 might help you to cut that down. You seem to be happy with your setup, but I thought I'd mention it.. I am really, really trying.. to go to 8.2. I have a thread on "general" going on for about a week. I am unab

Re: [PERFORM] PostgreSQL Configuration Tool for Dummies

2007-06-19 Thread Francisco Reyes
Campbell, Lance writes: Francisco and Richard, Why ask about disk or raid? How would that impact any settings in postgresql.conf? If the user has 2 disks and says that he will do a lot of updates, he could put pg_xlog on the second disk.

Re: [PERFORM] PostgreSQL Configuration Tool for Dummies

2007-06-19 Thread Francisco Reyes
Campbell, Lance writes: 3) I suggested JavaScript because most people that get started with PostgreSQL will go to the web in order to find out about issues relating Why not C? It could then go into contrib. Anyway.. language is likely the least important issue.. As someone mentioned.. once t

Re: [PERFORM] Performance query about large tables, lots of concurrent access

2007-06-19 Thread Francisco Reyes
Karl Wright writes: I'm not writing off autovacuum - just the concept that the large tables aren't the ones that are changing. Unfortunately, they *are* the most dynamically updated. Would it be possible for you to partition the tables? By date or some other fashion, to try to have some tables n
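
On the releases discussed here, partitioning means inheritance plus CHECK constraints; a minimal sketch with invented names, just to show the shape of the suggestion:

    CREATE TABLE events (event_time timestamptz NOT NULL, payload text);

    -- One child table per month; constraint_exclusion lets the planner skip the others.
    CREATE TABLE events_2007_06 (
        CHECK (event_time >= '2007-06-01' AND event_time < '2007-07-01')
    ) INHERITS (events);

    -- VACUUM can then target the hot children instead of one enormous table.
    VACUUM ANALYZE events_2007_06;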

Re: [PERFORM] Performance query about large tables, lots of concurrent access

2007-06-19 Thread Francisco Reyes
Alvaro Herrera writes: How large is the database? I must admit I have never seen a database that took 4 days to vacuum. This could mean that your database is humongous, or that the vacuum strategy is wrong for some reason. Especially with 16GB of RAM. I have a setup with several databases (t

Re: [PERFORM] Performance query about large tables, lots of concurrent access

2007-06-19 Thread Francisco Reyes
Gregory Stark writes: VACUUM doesn't require shutting down the system, it doesn't lock any tables or otherwise prevent other jobs from making progress. It does add extra i/o but In addition to what Gregory pointed out, you may want to also consider using Autovacuum. That may also help.

Re: [PERFORM] Hardware suggestions

2007-06-19 Thread Francisco Reyes
[EMAIL PROTECTED] writes: sizes etc., I wondered about the hardware. Most things were about the I/O of harddisks, RAM and file system. Is the filesystem that relevant? Because we want to stay at Ubuntu because of the software support, especially for the GIS-Systems. I think we need at least ab

Re: [PERFORM] Database size

2007-06-15 Thread Francisco Reyes
choksi writes: I had a database which used to hold some 50 Mill records and disk space used was 103 GB. I deleted around 34 Mill records but still the disk size is same. Can someone please shed some light on this. When records are deleted they are only marked in the database. When you run vacu
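
To make the distinction concrete (table name is a placeholder): a plain VACUUM only marks the deleted rows' space as reusable, while VACUUM FULL actually shrinks the files and returns space to the operating system, at the cost of an exclusive lock:

    VACUUM VERBOSE big_table;        -- space becomes reusable, file size stays the same
    VACUUM FULL VERBOSE big_table;   -- compacts the table and frees disk space (locks the table)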

Re: [PERFORM] Best use of second controller with faster disks?

2007-06-13 Thread Francisco Reyes
Vivek Khera writes: FreeBSD, indeed. The vendor, Partners Data Systems, did a wonderful This one? http://www.partnersdata.com job ensuring that everything integrated well to the point of talking with various FreeBSD developers, LSI engineers, etc., and sent me a fully tested system end-

Re: [PERFORM] Best use of second controller with faster disks?

2007-06-12 Thread Francisco Reyes
Vivek Khera writes: what raid card have you got? 2 3ware cards. I believe both are 9550SX i'm playing with an external enclosure which has an areca sata raid in it and connects to the host via fibre channel. What is the OS? FreeBSD? One of the reasons I stick with 3ware is that it is

[PERFORM] Best use of second controller with faster disks?

2007-06-11 Thread Francisco Reyes
Configuration OS: FreeBSD 6.1 Stable Postgresql: 8.1.4 RAID card 1 with 8 drives. 7200 RPM SATA RAID10 RAID card 2 with 4 drives. 10K RPM SATA RAID10 Besides having pg_xlog on the 10K RPM drives, what else can I do to best use those drives other than putting some data on them? Iostat shows the

Re: [PERFORM] Vacuums on large busy databases

2006-09-18 Thread Francisco Reyes
Jim C. Nasby writes: BTW, on some good raid controllers (with battery backup and write-caching), putting pg_xlog on a separate partition doesn't really help, so you might want to try combining everything. Planning to put a busy database on the second raid, or perhaps some index files. So far the se

Re: [PERFORM] Vacuums on large busy databases

2006-09-14 Thread Francisco Reyes
Michael Stone writes: On Thu, Sep 14, 2006 at 04:30:46PM -0400, Francisco Reyes wrote: Right now adding up from ps the memory I have about 2GB. That's not how you find out how much memory you have. Try "free" or somesuch. Wasn't trying to get an accurate value, ju

Re: [PERFORM] Vacuums on large busy databases

2006-09-14 Thread Francisco Reyes
Dave Cramer writes: personally, I'd set this to about 6G. This doesn't actually consume memory it is just a setting to tell postgresql how much memory is being used for cache and kernel buffers Gotcha. Will increase further. regarding shared buffers I'd make this much bigger, like 2GB

Re: [PERFORM] Vacuums on large busy databases

2006-09-14 Thread Francisco Reyes
Jeff Davis writes: shared_buffers = 1 Why so low? My initial research was not thorough enough with regards to how to compute how many to use. You have a lot of memory, and shared_buffers are an important performance setting. I have a machine with 4GB of RAM, and I found my best perf

Re: [PERFORM] Vacuums on large busy databases

2006-09-14 Thread Francisco Reyes
Dave Cramer writes: What is a reasonable number? I estimate I have at least 1 to 2 GB free of memory. You are using 6G of memory for something else ? Right now adding up from ps the memory I have about 2GB. Have an occasional program which uses up to 2GB. Then I want to give some breathing

Re: [PERFORM] Vacuums on large busy databases

2006-09-14 Thread Francisco Reyes
Dave Cramer writes: What is effective_cache set to ? Increasing this seems to have helped a web app significantly. Load times seem magnitudes faster. Increased it to effective_cache_size = 12288 # 96MB What is a reasonable number? I estimate I have at least 1 to 2 GB free of memory. Don't

Re: [PERFORM] Vacuums on large busy databases

2006-09-14 Thread Francisco Reyes
Dave Cramer writes: What is effective_cache set to ? Default of 1000. Was just reading about this parameter. Will try increasing it to 8192 (8192 * 8K = 64MB) why not just let autovac do its thing ? Have been playing with decreasing the autovac values. With 100GB+ tables even 1% in autov

[PERFORM] Vacuums on large busy databases

2006-09-14 Thread Francisco Reyes
My setup: FreeBSD 6.1 Postgresql 8.1.4 Memory: 8GB SATA Disks Raid 1 10 spindles (2 as hot spares) 500GB disks (16MB buffer), 7200 rpm Raid 10 Raid 2 4 spindles 150GB 10K rpm disks Raid 10 shared_buffers = 1 temp_buffers = 1500 work_mem = 32768 # 32MB maintenance_work_mem =

Re: [PERFORM] how to partition disks

2006-09-04 Thread Francisco Reyes
hubert depesz lubaczewski writes: On 6/14/06, Sven Geisler raid 10 is of course not questionable. but are you sure that it will work faster than for example: 2 discs (raid 1) for xlog 6 discs (raid 10) for tables 6 discs (raid 10) for indices? Catching up on the performance list. Although th
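
For the tables-versus-indices split mentioned above, the SQL-level tool is a tablespace; a sketch with made-up paths and names (pg_xlog itself is relocated at the filesystem level, e.g. with a symlink, not through SQL):

    CREATE TABLESPACE ts_indexes LOCATION '/array2/pg_indexes';

    -- The table stays on the default tablespace; its index goes to the second array.
    CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE ts_indexes;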

Re: [PERFORM] Adding and filling new column on big table

2006-05-30 Thread Francisco Reyes
Jonathan Blitz writes: I suppose I could do but I need to install PostgreSQL there and then copy over the database. Maybe I will give it a try. I really think that is your best bet. If for whatever reason that will not be an option perhaps you can just let the process run over the weekend.. p

Re: [PERFORM] Adding and filling new column on big table

2006-05-30 Thread Francisco Reyes
Jonathan Blitz writes: Nope. Didn't think it would make any difference. May be worth a try. I am using a laptop :). Pentium 4 (not 4M) with 1GB of memory - 2 MHZ Most laptop drives are only 5,400 RPM which would make a transaction like you are describing likely take a while. Must do it

Re: [PERFORM] Adding and filling new column on big table

2006-05-30 Thread Francisco Reyes
Jonathan Blitz writes: I just gave up in the end and left it with NULL as the default value. Could you do the updates in batches instead of trying to do them all at once? Have you ever done a vacuum full on this table? There were, in fact, over 2 million rows in the table rather than 1/4
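
A rough sketch of the batching idea, with an invented key column and batch size; committing between ranges keeps each transaction short and lets VACUUM reclaim the dead row versions as you go:

    ALTER TABLE big_table ADD COLUMN new_flag boolean;

    -- Repeat for successive id ranges, vacuuming every few batches.
    UPDATE big_table SET new_flag = false
     WHERE id BETWEEN 1 AND 100000 AND new_flag IS NULL;
    VACUUM big_table;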

Re: [PERFORM] Selects query stats?

2006-05-29 Thread Francisco Reyes
I am not sure if this is what the original poster was refering to, but I have used an application called mtop that shows how many queries per second mysql is doing. In my case this is helpfull because we have a number of machines running postfix and each incoming mail generates about 7 queries

Re: [PERFORM] Adding and filling new column on big table

2006-05-29 Thread Francisco Reyes
Jonathan Blitz writes: So, I have tried to run the following command. The command never finishes (I gave up after about an hour and a half!). Did you ever find what the problem was? Perhaps you needed to run a vacuum full on the table?

Re: [PERFORM] Inserts optimization?

2006-04-15 Thread Francisco Reyes
Tom Lane writes: Also, increasing checkpoint_segments and possibly wal_buffers helps a lot for write-intensive loads. Following up on those two recommendations from Tom. Tom mentioned in a different message that if the inserts are small that increasing wal_buffers would not help. How about c

Re: [PERFORM] Inserts optimization?

2006-04-15 Thread Francisco Reyes
Gábriel Ákos writes: you are right. raid5 is definitely not suitable for database activities. That is not entirely true. :-) Right now the new server is not ready and the ONLY place I could put the DB for Bacula was a machine with RAID 5. So far it is holding fine. HOWEVER... only one bacula

Re: [PERFORM] Inserts optimization?

2006-04-15 Thread Francisco Reyes
Gábriel Ákos writes: RAID 10 needs pairs.. so we can either have no spares or 2 spares. hm, interesting. I have recently set up a HP machine with smartarray 6i controller, and it is able to handle 4 disks in raid10 plus 1 as spare. :-) Ok so let me be a bit more clear... We have 6 disks in

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Scott Marlowe writes: Spares are placed in service one at a time. Ah.. that's your point. I know that. :-) You don't need 2 spares for RAID 10, trust me. We bought the machine with 8 drives. At one point we were considering RAID 5, then we decided to give RAID 10 a try. We have a simmila

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Michael Stone writes: I still don't follow that. Why would the RAID level matter? IOW, are you actually wanting 2 spares, or are you just stick with that because you need a factor of two disks for your mirrors? RAID 10 needs pairs.. so we can either have no spares or 2 spares. Mmm, it's a bi

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Michael Stone writes: I guess the first question is why 2 hot spares? Because we are using RAID 10 larger array with more spindles will outperform a smaller one with fewer, regardless of RAID level (assuming a decent battery-backed cache). Based on what I have read RAID 10 is supposed to

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Tom Lane writes: That will help not at all, if the problem is too-short transactions as it sounds to be. How about commit_delay? You really need to pester the authors of bacula to try to wrap multiple inserts per transaction. Like any volunteer project I am sure it's more an issue of res

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Marc Cousin writes: If I remember correctly (I allready discussed this with Kern Sibbald a while ago), bacula does each insert in its own transaction : that's how the program is done Thanks for the info. For now, I only could get good performance with bacula and postgresql when disabling fs

Re: [PERFORM] Inserts optimization?

2006-04-14 Thread Francisco Reyes
Jim C. Nasby writes: On Thu, Apr 13, 2006 at 02:59:23PM -0400, Francisco Reyes wrote: In RAID 10 would it matter that WAL is in the same RAID set? Would it be better: 4 disks in RAID10 Data 2 disks RAID 1 WAL 2 hot spares Well, benchmark it with your app and find out, but generally

Re: [PERFORM] Inserts optimization?

2006-04-13 Thread Francisco Reyes
Tom Lane writes: Or at least try to do multiple inserts per transaction. Will see if the program has an option like that. Also, increasing checkpoint_segments and possibly wal_buffers helps a Will try those. Try to get the WAL onto a separate disk spindle if you can. (These things don'
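
For what "multiple inserts per transaction" means in practice, a sketch with a made-up table: grouping rows under one COMMIT means one WAL flush per batch instead of one per row, which is where the win for short inserts comes from:

    BEGIN;
    INSERT INTO file_log (jobid, filename) VALUES (1, '/etc/hosts');
    INSERT INTO file_log (jobid, filename) VALUES (1, '/etc/passwd');
    -- ... many more rows ...
    COMMIT;   -- a single fsync covers the whole batch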

Re: [PERFORM] Inserts optimization?

2006-04-13 Thread Francisco Reyes
Chris writes: If you can, use copy instead: http://www.postgresql.org/docs/8.1/interactive/sql-copy.html I am familiar with copy. Can't use it in this scenario. The data is coming from a program called Bacula (Backup server). It is not static data.

[PERFORM] Inserts optimization?

2006-04-12 Thread Francisco Reyes
Doing my first write heavy database. What settings will help improve inserts? Only a handful of connections, but each doing up to 30 inserts/second. Plan to have 2 to 3 clients which most of the time will not run at the same time, but occasionally it's possible two of them may bump into each othe

Re: [PERFORM] Small table or partial index?

2005-12-13 Thread Francisco Reyes
Jim C. Nasby writes: On Fri, Dec 02, 2005 at 06:28:09PM -0500, Francisco Reyes wrote: I am in the process of designing a new system. There will be a long list of words such as -word table word_id integer word varchar special boolean Some "special" words are used to determine if so

[PERFORM] Small table or partial index?

2005-12-02 Thread Francisco Reyes
I am in the process of designing a new system. There will be a long list of words such as -word table word_id integer word varchar special boolean Some "special" words are used to determine if some work is to be done and will be what we care the most for one type of operation. Will it be more
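
Given the schema sketched above, one way to frame the comparison is a partial index over just the special rows, which avoids maintaining a second table; the index and query below are my illustration, not something from the thread:

    CREATE TABLE word (
        word_id  integer PRIMARY KEY,
        word     varchar NOT NULL,
        special  boolean NOT NULL
    );

    -- Index only the small "special" subset; queries must repeat the predicate to use it.
    CREATE INDEX word_special_idx ON word (word) WHERE special;

    SELECT word_id FROM word WHERE word = 'rebuild' AND special;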

Re: [PERFORM] 15,000 tables

2005-12-02 Thread Francisco Reyes
Michael Riess writes: Sorry, I should have included that info in the initial post. You're right in that most of these tables have a similar structure. But they are independent and can be customized by the users. How about creating 50 databases and giving each its own tablespace? It's not onl
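
A sketch of that suggestion with invented names and paths: each database can be created in its own tablespace, so its files land on a directory (and therefore disk) of your choosing:

    CREATE TABLESPACE ts_customer01 LOCATION '/data/ts_customer01';
    CREATE DATABASE customer01 TABLESPACE ts_customer01;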

Re: [PERFORM] Which plattform do you recommend I run PostgreSQL for

2004-10-12 Thread Francisco Reyes
On Tue, 12 Oct 2004 [EMAIL PROTECTED] wrote: In the MySQL manual it says that MySQL performs best with Linux 2.4 with ReiserFS on x86. Can anyone official, or in the know, give similar information regarding PostgreSQL? Don't know which OS/filesystem PostgreSQL runs best on, but you should test on

Re: [PERFORM] Understanding explains

2004-10-11 Thread Francisco Reyes
On Mon, 11 Oct 2004, Rosser Schwarz wrote: In general, it's best to let the planner make the appropriate choice without any artificial constraints. As someone suggested, I ran with explain analyze. With seqscan off it was better. Ran a vacuum analyze this afternoon so the stats were up to date. Although

Re: [PERFORM] why my query is not using index??

2004-10-11 Thread Francisco Reyes
On Mon, 11 Oct 2004, John Meinel wrote: Postgres believes that it will cost 382 to do a sequential scan, versus 490 for an indexed scan. Hence why it prefers to do the sequential scan. Try running explain analyze to see how accurate it is. With explain analyze I have with sequential scan on So

[PERFORM] Understanding explains

2004-10-11 Thread Francisco Reyes
Is there a tutorial or reference to the different terms that appear on the explain output? Items such as "Nested Loop", "Hash".. Also, is there a way to easily tell which of two explains is "worse"? Example: I am running a query with "set enable_seqscan to off;" and I see the explain now shows in
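
As background to the question, the usual way to compare two plans is EXPLAIN ANALYZE, which prints actual row counts and times next to the planner's estimates; the top node's total time is the number to compare. A sketch with a placeholder query:

    EXPLAIN ANALYZE SELECT * FROM orders WHERE order_date > '2004-01-01';

    -- Force the alternative plan for comparison, then put the setting back.
    SET enable_seqscan TO off;
    EXPLAIN ANALYZE SELECT * FROM orders WHERE order_date > '2004-01-01';
    RESET enable_seqscan;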

Re: [PERFORM] why my query is not using index??

2004-10-11 Thread Francisco Reyes
On Mon, 11 Oct 2004, Janning Vygen wrote: postgres uses a seq scan if its faster. In your case postgres seems to know that most of your rows have a date < 2004-01-01 and so doesn't need to consult the index if it has to read every page anyway. seq scan can be faster on small tables. try (in psql) "

[PERFORM] How number of columns affects performance

2003-08-01 Thread Francisco Reyes
If a table which will be heavily used has numerous fields, yet only a handful of them will be used heavily, would it make sense performance wise to split it? Example: Table 1 with Field 1 ... Field 100. Table 2 references Field 1 of Table 1. ... Table n references Field 1 of Table 1. So table 1 basi