Title: Re: [PERFORM] hardware config question
Erik,
I think you have a mismatch in your Linux driver and firmware for your 3Ware card. Download a matched Linux driver and firmware from www.3ware.com and your problems should disappear.
- Luke
On 4/28/06 8:37 AM, "Erik Myllymaki" <[EMAIL PROTECTED]>
At 10:39 06/04/29, Tom Lane wrote:
K C Lau <[EMAIL PROTECTED]> writes:
> Without knowing the internals, I have this simplistic idea: if Postgres
> maintains the current lowest transaction ID for all active transactions, it
> probably could recycle dead tuples on the fly.
[ yawn... ] Yes, we'
K C Lau <[EMAIL PROTECTED]> writes:
> Without knowing the internals, I have this simplistic idea: if Postgres
> maintains the current lowest transaction ID for all active transactions, it
> probably could recycle dead tuples on the fly.
[ yawn... ] Yes, we've heard that before. The hard part i
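One way to see that visibility horizon in practice (a hedged sketch, not from the thread): dead tuples can only be reclaimed once no open transaction could still see them, so a single long-idle session holds everything back. On an 8.x server, the sessions in question show up in pg_stat_activity:

  -- Sketch: find long-running backends whose open snapshots keep VACUUM from
  -- reclaiming dead tuples (8.x column names: procpid, current_query).
  SELECT procpid, usename, backend_start, query_start, current_query
  FROM pg_stat_activity
  ORDER BY query_start;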
At 03:00 06/04/29, Bruno Wolff III wrote:
On Fri, Apr 28, 2006 at 17:37:30 +,
Bealach-na Bo <[EMAIL PROTECTED]> wrote:
> >The above shows that the indexes contained 10M rows and 160M of dead
> >space each. That means you weren't vacuuming nearly enough.
>
> How is it that a row in the tabl
On Fri, Apr 28, 2006 at 17:37:30 +,
Bealach-na Bo <[EMAIL PROTECTED]> wrote:
> >The above shows that the indexes contained 10M rows and 160M of dead
> >space each. That means you weren't vacuuming nearly enough.
>
> How is it that a row in the table can grow to a size far exceeding the sum
The best of all worlds is to use a HW RAID card with battery-backed cache.
Then you can have both high performance and high reliability.
Benchmarks suggest that the best such cards currently are the Areca cards, which
support up to 2GB of battery-backed cache.
Ron
-Original Message-
From:
It's also possible that the single SATA drive you were testing (or the
controller it was attached to) is lying about fsync and performing write
caching behind your back, whereas your new controller and drives are
not.
You'll find a lot more info on the archives of this list about it, but
basically
On April 28, 2006 10:31 am, "Bealach-na Bo" <[EMAIL PROTECTED]>
wrote:
> The exclusive lock is going to cause problems for me since the table is
> very active. Is there a way of getting around that or do I need to
> schedule the application that accesses this table?
If you don't need access to th
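Worth noting for the lock question: only VACUUM FULL takes the exclusive lock; a plain (lazy) VACUUM runs alongside normal reads and writes. A minimal sketch, assuming the table behind the job_log_id_pkey index is named job_log:

  -- Plain VACUUM marks dead-tuple space reusable without locking out
  -- concurrent access; VACUUM FULL compacts the file but takes an exclusive lock.
  VACUUM ANALYZE job_log;
  -- VACUUM FULL job_log;   -- only when you can afford to block all access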
The above shows that the indexes contained 10M rows and 160M of dead
space each. That means you weren't vacuuming nearly enough.
How is it that a row in the table can grow to a size far exceeding the sum
of the maximum sizes of the fields it consists of?
13M dead rows, and the table is 1.4M
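The short answer to the question above is that a row never grows; every UPDATE leaves a dead row version behind, and those versions are what bloat the table and its indexes. A quick way to compare on-disk pages with live rows (a sketch; the job_log name is assumed from the index name in this thread):

  -- relpages is the on-disk size in 8 kB pages; reltuples is the planner's
  -- estimate of live rows (both refreshed by VACUUM/ANALYZE).
  SELECT relname, relpages, reltuples
  FROM pg_class
  WHERE relname LIKE 'job_log%';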
On Apr 28, 2006, at 11:37 AM, Erik Myllymaki wrote:
When I had this installed on a single SATA drive running from the
PE1800's on-board SATA interface, this operation took anywhere from
65-80 seconds.
With my new RAID card and drives, this operation took 272 seconds!?
switch it to RAID10
> INFO: index "job_log_id_pkey" now contains 10496152 row versions in
> 59665 pages
See the 10496152 above? That means you have 10496152 rows of data in your
table. Of those, only 365000 are alive. That means you have basically
never vacuumed this table before, correct?
Almost correct :| I
> Something seems to have truncated your EXPLAIN output, but anyway we
> can see where the problem is:
I copied it from pgAdmin at 640x480 screen resolution on XP.
Maybe it's a pgAdmin bug?
> The planner is expecting to get one row from "dok" passing the filter
> condition, and hence chooses a plan tha
This is a question that I also posted on Dell hardware forums, and I realize it
probably belongs there more than here. But I am thinking someone might have
some anecdotal information that could help me and this post may help someone
else down the road.
My PowerEdge 1800 (dual 3GHz Xeon, 3GB ra
Markus Schaber <[EMAIL PROTECTED]> writes:
> However, the query planner seems to refuse to make index scans even with
> 8.1:
> testdb=# EXPLAIN SELECT * from streets WHERE link_id = ANY(ARRAY[1,2,3]);
Yup, that was just done in HEAD a couple months ago.
regards, tom lane
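Until that lands in a release, a hedged workaround on 8.1 is to spell the values out with IN, which the planner expands into indexable conditions (table and column names taken from the example above):

  -- On 8.1 an explicit IN list should be planned with the index on link_id,
  -- unlike = ANY(ARRAY[...]), which falls back to a sequential scan.
  EXPLAIN SELECT * FROM streets WHERE link_id IN (1, 2, 3);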
On April 28, 2006 04:41 am, "Bealach-na Bo" <[EMAIL PROTECTED]>
wrote:
> INFO: index "job_log_id_pkey" now contains 10496152 row versions in
> 59665 pages
See the 10496152 above? That means you have 10496152 rows of data in your
table. Of those, only 365000 are alive. That means you have ba
On Fri, Apr 28, 2006 at 11:41:06AM +, Bealach-na Bo wrote:
> OK, here is a much more detailed output. I still don't quite
> understand why simple queries like counting the number of rows in a
> table should take minutes to complete. Surely, any performance
> enhancement to be had by vacuuming i
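Part of the answer, as a general MVCC note rather than thread-specific advice: count(*) must visit every row version because visibility is decided per tuple, so on a bloated table it wades through all the dead space too. When an estimate is good enough, the planner's statistics are instant:

  -- reltuples is only an estimate, refreshed by VACUUM/ANALYZE, but it avoids
  -- scanning a bloated table just to get a rough row count.
  SELECT reltuples::bigint AS approx_rows
  FROM pg_class
  WHERE relname = 'job_log';   -- table name assumed from this thread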
"Andrus" <[EMAIL PROTECTED]> writes:
> Here it is running on my local computer. I'm expecting a run time of no
> more than 1 second
Something seems to have truncated your EXPLAIN output, but anyway we
can see where the problem is:
> " -> Seq Scan on dok (cost=0.00..787.80 rows=1 width=39)
>
Hello,
I'm looking for a convenient way to fetch a variable-size set of user-specified
objects via a single prepared statement, so I wanted to submit
an ARRAY.
However, the query planner seems to refuse to make index scans even with
8.1:
testdb=# EXPLAIN SELECT * from streets WHERE link_id =
OK, here is a much more detailed output. I still don't quite
understand why simple queries like counting the number of rows in a
table should take minutes to complete. Surely, any performance
enhancement to be had by vacuuming is closely related to indexes
which, in turn, are closely related to so
Hi, I have a performance problem with PostgreSQL version 8.1 installed on Fedora Core release 4 (Stentz) with kernel version 2.6.11. The machine I am working on has 512MB of RAM and a Pentium III 800 MHz CPU. I have only one table in the database, which consists of 256 columns and 1 rows. Eac
> You have ANALYZEd all these tables recently, I hope? The planner
> certainly doesn't think this query will take very long.
I have autovacuum running, so I expect it takes care of ANALYZE, doesn't it?
I also ran the ANALYZE command before running EXPLAIN ANALYZE.
> To find out what's wrong, you're g
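A quick way to rule out stale statistics (the "dok" name comes from the plan fragment quoted earlier in this thread) is to analyze the table by hand and re-check the plan:

  -- Refresh the planner statistics for "dok", then see whether the rows=1
  -- estimate (and the chosen plan) changes.
  ANALYZE dok;
  EXPLAIN ANALYZE SELECT * FROM dok;   -- placeholder query; substitute the real one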
Hi, Guoping,
Guoping Zhang wrote:
> a) The tests consist of ten thousand very small transactions, which are
> not grouped; that is why it is so slow compared to setting fsync off.
If those transactions are submitted by concurrent applications over
several simultaneous connections, playing with comm
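The usual remedy, sketched below with a hypothetical table and values: group the small statements into one transaction, so the WAL is flushed (fsync'd) once per batch instead of once per statement.

  -- One COMMIT per batch means one synchronous WAL flush per batch,
  -- instead of one per single-statement transaction (hypothetical table).
  BEGIN;
  INSERT INTO measurements VALUES (1, 'a');
  INSERT INTO measurements VALUES (2, 'b');
  -- ... many more small statements ...
  COMMIT;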
a) I have absolutely no idea regarding price tags when it comes to SUN
hardware; the last time I worked with SUN gear was in '01, so you'll have to check
with your local (SUN) supplier for up-to-date prices.
b) Same here (no idea). But I'd be surprised if UFS (and ZFS) was unable to
take advantage of
Hi all,
Vivek Khera wrote:
> On Apr 25, 2006, at 2:14 PM, Bill Moran wrote:
>> Where I'm stuck is in deciding whether we want to go with dual-core
>> pentiums with 2M cache, or with HT pentiums with 8M cache.
>
> In order of preference:
>
> Opterons (dual core or single core)
> Xeon with HT *di