On Wed, 2008-03-19 at 07:44 +0900, Craig Ringer wrote:
> Gregory Youngblood wrote:
> > Also, a very informative read:
> > http://research.google.com/archive/disk_failures.pdf
> > In short, best thing to do is watch SMART and be prepared to try and
> > swap a drive out before it fails completely. :)
So my question is this: Shouldn’t VACUUM FULL clean Table C and reclaim
all its space?
You've got concepts mixed up.
TRUNCATE deletes all of the data from a particular table (and works in
all DBMSs).
http://www.postgresql.org/docs/8.3/interactive/sql-truncate.html
VACUUM FULL is a Postgres-specific maintenance command: it compacts the
table's live rows and returns the reclaimed space to the operating
system.
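A minimal sketch of the difference, using Table C from the question above:

    -- TRUNCATE throws away every row and releases the space at once.
    TRUNCATE TABLE c;

    -- Plain VACUUM only marks dead-row space as reusable within the
    -- table's own files; it rarely shrinks them on disk.
    VACUUM c;

    -- VACUUM FULL keeps the live rows but compacts the table and gives
    -- the freed space back to the OS. It takes an exclusive lock on the
    -- table for the duration.
    VACUUM FULL c;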
Ron Mayer wrote:
Tom Lane wrote:
Ron Mayer <[EMAIL PROTECTED]> writes:
Would another possible condition for considering
Cartesian joins be:
* Consider Cartesian joins when a unique constraint can prove
that at most one row will be pulled from one of the tables
that would be part of this join?
Gregory Youngblood wrote:
What I was referring to above was using those error conditions to
identify drives to replace before the reported complete failures. Yes,
that will mean changing drives before SMART actually says there is a
full failure, and you may have to figh
[EMAIL PROTECTED] wrote:
> just duplicating the RAID 4 or 5 parity information will not help you
> if the parity drive is not one of the drives that fail.
Good point - and no doubt why nothing supports extra disks' worth of
parity on RAID 5, which would be entirely useless (still only protecting
against a single drive failure).
<[EMAIL PROTECTED]> writes:
> On Tue, 18 Mar 2008, Gregory Stark wrote:
>
>> You can have as many parity drives as you want with RAID 5 too.
>
> you can? I've never seen a RAID 5 setup with more than a single parity
> drive (or even the option of having more than one drive's worth of
> redundancy).
At 00:24 08/03/19, Matthew wrote:
On Tue, 18 Mar 2008, Chris Kratz wrote:
In moderately complex to very complex ad hoc queries in our system,
we were consistently having the system massively underestimate the
number of rows coming out of a join at a low level, making these
queries very slow and inefficient.
On Wed, 19 Mar 2008, Craig Ringer wrote:
[EMAIL PROTECTED] wrote:
you can? I've never seen a RAID 5 setup with more than a single parity
drive (or even the option of having more than one drive's worth of
redundancy). You can have hot-spare drives, but that's a different thing.
With RAID 4, where the "parity drives" are in fact dedicated parity
disks, adding more of them would only store copies of the same parity.
[EMAIL PROTECTED] wrote:
you can? I've never seen a RAID 5 setup with more than a single parity
drive (or even the option of having more than one drive's worth of
redundancy). You can have hot-spare drives, but that's a different thing.
With RAID 4, where the "parity drives" are in fact dedicated parity
disks, adding more of them would only store copies of the same parity.
Gregory Youngblood wrote:
Also, a very informative read:
http://research.google.com/archive/disk_failures.pdf
In short, best thing to do is watch SMART and be prepared to try and
swap a drive out before it fails completely. :)
I currently have four brand new 1TB disks (7200RPM SATA - they're f
Scott Marlowe wrote:
> On Tue, Mar 18, 2008 at 9:58 AM, Chris Kratz
> <[EMAIL PROTECTED]> wrote:
> > Yes, turning nested loops off in specific cases has increased
> > performance greatly. It didn't fix the planner mis-estimation, just
> > the plan it chose. It's certainly not a panacea, but it's something
> > we now try early on when trying to speed up a query that matches
> > these characteristics.
Hi folks,
We are running Postgres 8.2.5.
I have 3 tables, call them A, B, and C
Table A houses info on all emails that have ever been created for the
purpose of being delivered to our end customers.
Big table. About 23 million rows.
Table B, the 'holding' table, is populated with Table A
<[EMAIL PROTECTED]> writes:
> On Wed, 19 Mar 2008, Andrej Ricnik-Bay wrote:
>
>>
>> On 18/03/2008, Craig Ringer <[EMAIL PROTECTED]> wrote:
>>> Isn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a
>>> double disk failure (such as during the load imposed by rebuild onto a
>>> spare)?
On Tue, 18 Mar 2008, Gregory Stark wrote:
<[EMAIL PROTECTED]> writes:
On Wed, 19 Mar 2008, Andrej Ricnik-Bay wrote:
On 18/03/2008, Craig Ringer <[EMAIL PROTECTED]> wrote:
Isn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a
double disk failure (such as during the load imposed by rebuild onto a
spare)?
On Tue, Mar 18, 2008 at 9:58 AM, Chris Kratz <[EMAIL PROTECTED]> wrote:
> Yes, turning nested loops off in specific cases has increased performance
> greatly. It didn't fix the planner mis-estimation, just the plan it chose.
> It's certainly not a panacea, but it's something we now try early on when
> trying to speed up a query that matches these characteristics.
On Wed, 19 Mar 2008, Andrej Ricnik-Bay wrote:
On 18/03/2008, Craig Ringer <[EMAIL PROTECTED]> wrote:
Isn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a
double disk failure (such as during the load imposed by rebuild onto a
spare) ?
that's why you should use RAID 6 (allowing the array to survive two
drive failures).
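As a rough back-of-the-envelope illustration (the per-drive failure rate
and rebuild window below are assumed numbers, not measurements): with n
surviving drives, an annual failure rate f per drive, and a rebuild that
takes t hours, the chance of losing a second drive mid-rebuild is roughly

    P(2nd failure during rebuild) ~= n * f * (t / 8760)

    e.g. n = 23, f = 0.05, t = 24:  23 * 0.05 * (24 / 8760) ~= 0.3%

Small per event, but over many rebuilds on large arrays this is exactly
the failure that RAID 6's second parity block is there to absorb.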
On 18/03/2008, Craig Ringer <[EMAIL PROTECTED]> wrote:
> Isn't a 10 or 24 spindle RAID 5 array awfully likely to encounter a
> double disk failure (such as during the load imposed by rebuild onto a
> spare) ?
I never said that we actually USED that set-up. I just said
I did extensive testing wi
Tom Lane wrote:
Ron Mayer <[EMAIL PROTECTED]> writes:
Would another possible condition for considering
Cartesian joins be:
* Consider Cartesian joins when a unique constraint can prove
that at most one row will be pulled from one of the tables
that would be part of this join?
On Tue, 18 Mar 2008, Chris Kratz wrote:
In moderately complex to very complex ad hoc queries in our system, we
were consistently having the system massively underestimate the number
of rows coming out of a join at a low level, making these queries very slow
and inefficient.
I have long thought t
Yes, turning nested loops off in specific cases has increased performance
greatly. It didn't fix the planner mis-estimation, just the plan it chose.
It's certainly not a panacea, but it's something we now try early on when
trying to speed up a query that matches these characteristics.
-Chris
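For anyone trying the same thing, the session-level switch being described
looks like the following; the query is a hypothetical stand-in for one of
the problem queries:

    -- Turn nested-loop joins off for the current session only.
    SET enable_nestloop = off;

    -- Hypothetical ad hoc query that was being mis-planned:
    EXPLAIN ANALYZE
    SELECT count(*)
      FROM orders o
      JOIN line_items li ON li.order_id = o.id;

    -- Put the planner back to its defaults for this session.
    RESET enable_nestloop;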
On Tue, 18 Mar 2008 11:35:08 -0400
"Chris Kratz" <[EMAIL PROTECTED]> wrote:
> Nondefault settings of interest from postgresql.conf:
>
> shared_buffers = 1024MB       # min 128kB or max_connections*16kB
> work_mem = 256MB              # min 64kB
> maintenance_work_mem
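Worth noting: a value like work_mem can also be tried per session before
being committed to postgresql.conf; a quick sketch:

    -- Try a larger sort/hash memory budget in this session only,
    -- matching the work_mem quoted above.
    SET work_mem = '256MB';
    SHOW work_mem;  -- confirms the session value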
A number of weeks ago, I had posted a request for help regarding join
estimates in pg 8.2.6. In moderately complex to very complex ad hoc queries
in our system, we were consistently having the system massively
underestimate the number of rows coming out of a join at a low level,
making these queries very slow and inefficient.
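The symptom is easiest to see in EXPLAIN ANALYZE output, where the
planner's estimate sits right next to the actual row count; a hypothetical
example (table and column names made up):

    EXPLAIN ANALYZE
    SELECT *
      FROM a
      JOIN b ON b.a_id = a.id
     WHERE a.sent_at > now() - interval '7 days';

    -- A low-level join line like this is the tell:
    --   Nested Loop  (cost=... rows=2 ...) (actual time=... rows=41000 loops=1)
    -- rows=2 estimated against rows=41000 actual is the mis-estimation
    -- that cascades up through the rest of the plan.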
On Sun, Mar 16, 2008 at 12:04:44PM -0700, Craig James wrote:
Just out of curiosity: Last time I did research, the word seemed to be that
xfs was better than ext2 or ext3. Is that not true? Why use ext2/3 at all
if xfs is faster for Postgres?
For the WAL, the filesystem is largely irrelevant.
Ron Mayer <[EMAIL PROTECTED]> writes:
> Would another possible condition for considering
> Cartesian joins be:
>* Consider Cartesian joins when a unique constraint can prove
> that at most one row will be pulled from one of the tables
> that would be part of this join?
What for?
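A hypothetical schema showing the kind of case Ron appears to have in
mind: the unique constraint proves at most one row can come from the
inner side, so the Cartesian join cannot multiply the other table's rows.

    -- At most one settings row per key, enforced by the constraint.
    CREATE TABLE settings (
        key   text NOT NULL UNIQUE,
        value numeric NOT NULL
    );

    -- Joining against a provably single-row relation behaves like
    -- attaching a constant column to every row of orders.
    SELECT o.id, o.total * s.value AS tax
      FROM orders o
     CROSS JOIN settings s
     WHERE s.key = 'tax_rate';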
Andrej Ricnik-Bay wrote:
> In my testing I found that once you hit 10 spindles in a RAID 5 the
> differences between it and a RAID 10 started to become negligible
> (around 6% slower on writes, on average, over 10 runs of bonnie++ on
> 10 spindles), while the read speed (if you're doing similar amounts
>
On 18/03/2008, Peter Koczan <[EMAIL PROTECTED]> wrote:
> available, and RAID 5 and RAID 6 are just huge pains and terribly slow
> for writes.
RAID 5 and RAID 6 are just huge pains and terribly slow for writes
with small numbers of spindles ;}
In my testing I found that once you hit 10 spindles in a RAID 5 the
differences between it and a RAID 10 started to become negligible.
[EMAIL PROTECTED] wrote:
WAL is on a RAID 0 drive along with the OS
Isn't that just as unsafe as having the whole lot on RAID 0?