Greg Smith <[EMAIL PROTECTED]> writes:
> On Fri, 6 Apr 2007, Tom Lane wrote:
>> It seems hard to believe that the vendors themselves wouldn't burn in
>> the drives for half a day, if that's all it takes to eliminate a large
>> fraction of infant mortality.
> I've read that much of the damage that
"Jonathan Ellis" <[EMAIL PROTECTED]> writes:
> On 4/5/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>>> Is this a regression, or a "feature" of 8.2?
>>
>> Hard to say without EXPLAIN ANALYZE output to compare.
> To my eye they are identical other than the speed but perhaps I am
> missing something.
Ye
On Fri, 6 Apr 2007, Tom Lane wrote:
It seems hard to believe that the vendors themselves wouldn't burn in
the drives for half a day, if that's all it takes to eliminate a large
fraction of infant mortality.
I've read that much of the damage that causes hard drive infant mortality
is related t
[EMAIL PROTECTED] writes:
> On Thu, 5 Apr 2007, Ron wrote:
>> Yep. Folks should google "bathtub curve of statistical failure" or
>> similar.
>> Basically, always burn in your drives for at least half a day before using
>> them in a production or mission-critical role.
> for this and your first
On 4/5/07, Tom Lane <[EMAIL PROTECTED]> wrote:
"Jonathan Ellis" <[EMAIL PROTECTED]> writes:
> I have a query hitting a view that takes 0.15s on 8.1 but 6s on 8.2.3:
> ...
> Is this a regression, or a "feature" of 8.2?
Hard to say without EXPLAIN ANALYZE output to compare.
To my eye they are id
"John Allgood" <[EMAIL PROTECTED]> writes:
> ... The other configuration was RHEL3 and Postgres 7.4.13 and Red Hat
> Cluster Suite. The application seemed to run much faster on the older
> equipment.
While I agree with the other comments that you should think about moving
to something newer than 7
On Fri, 6 Apr 2007, Ron wrote:
Bear in mind that Google was and is notorious for pushing their environmental
factors to the limit while using the cheapest "PoS" HW they can get their
hands on.
Let's just say I'm fairly sure every piece of HW they were using for those
studies was operating outs
"Jonathan Ellis" <[EMAIL PROTECTED]> writes:
> I have a query hitting a view that takes 0.15s on 8.1 but 6s on 8.2.3:
> ...
> Is this a regression, or a "feature" of 8.2?
Hard to say without EXPLAIN ANALYZE output to compare.
regards, tom lane
At 11:40 PM 4/5/2007, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Ron wrote:
At 10:07 PM 4/5/2007, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Scott Marlowe wrote:
> Server class drives are designed with a longer lifespan in mind.
> > Server class hard drives are rated at higher temperatures
On Thu, 5 Apr 2007, Scott Marlowe wrote:
On Thu, 2007-04-05 at 14:30, James Mansion wrote:
Can you cite any statistical evidence for this?
Logic?
OK, everyone who hasn't already needs to read the Google and CMU papers.
I'll even provide links for you:
http://www.cs.cmu.edu/~bianca/fast07.
On Thu, 5 Apr 2007 [EMAIL PROTECTED] wrote:
> On Thu, 5 Apr 2007, Ron wrote:
> > At 10:07 PM 4/5/2007, [EMAIL PROTECTED] wrote:
> >> On Thu, 5 Apr 2007, Scott Marlowe wrote:
> >>
> >> > Server class drives are designed with a longer lifespan in mind.
> >> >
> >> > Server class hard drives are rate
On Thu, 5 Apr 2007, Erik Jones wrote:
On Apr 5, 2007, at 3:33 PM, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Xiaoning Ding wrote:
> >
> > To the best of my knowledge, Postgres itself does not have a direct IO
> > option (although it would be a good addition). So, in order to use
> > d
On Thu, 5 Apr 2007, Ron wrote:
At 10:07 PM 4/5/2007, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Scott Marlowe wrote:
> Server class drives are designed with a longer lifespan in mind.
>
> Server class hard drives are rated at higher temperatures than desktop
> drives.
these two I question
At 10:07 PM 4/5/2007, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Scott Marlowe wrote:
Server class drives are designed with a longer lifespan in mind.
Server class hard drives are rated at higher temperatures than desktop
drives.
these two I question.
David Lang
Both statements are the li
On Thu, 5 Apr 2007, Scott Marlowe wrote:
On Thu, 2007-04-05 at 14:30, James Mansion wrote:
Server drives are generally more tolerant of higher temperatures. I.e.
the failure rate for consumer and server class HDs may be about the same
at 40 degrees C, but by the time the internal case temps ge
On Thu, 5 Apr 2007, Xiaoning Ding wrote:
> Xiaoning
> Looks like it. I just did a cursory search of the archives and it seems
> that others have looked at this before so you'll probably want to start
> there if you're up to it.
>
Linux used to have (still does?) a RAW interface which might
On Friday 30 March 2007 16:34 Dave Dutcher wrote:
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of
> > Vincenzo Romano
> >
> > Is there any "workaround"?
> >
> > In my opinion, the later the query planner's decisions are made, the more
> > effective they can be.
> > It could be a
On Apr 5, 2007, at 3:33 PM, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Xiaoning Ding wrote:
To the best of my knowledge, Postgres itself does not have a direct IO
option (although it would be a good addition). So, in order to use direct
IO with postgres you'll need to consult your
On Thu, 2007-04-05 at 14:30, James Mansion wrote:
> >Server drives are generally more tolerant of higher temperatures. I.e.
> >the failure rate for consumer and server class HDs may be about the same
> >at 40 degrees C, but by the time the internal case temps get up to 60-70
> >degrees C, the cons
On 4/5/07, Xiaoning Ding <[EMAIL PROTECTED]> wrote:
Alex Deucher wrote:
> On 4/5/07, Erik Jones <[EMAIL PROTECTED]> wrote:
>>
>> On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:
>>
>> Erik Jones wrote:
>> On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:
>> Hi,
>>
>> A page may be double buffered
Alex Deucher wrote:
On 4/5/07, Erik Jones <[EMAIL PROTECTED]> wrote:
On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:
Erik Jones wrote:
On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:
Hi,
A page may be double buffered in PG's buffer pool and in the OS's buffer cache.
Other DBMSs like DB2 and Or
On Thu, 5 Apr 2007, Xiaoning Ding wrote:
To the best of my knowledge, Postgres itself does not have a direct IO
option (although it would be a good addition). So, in order to use direct
IO with postgres you'll need to consult your filesystem docs for how to
set the forcedirectio mount opti
The problem with this is that it doesn't leverage shared buffers and
kernel buffers well.
Anyway, my bet is that your SAN isn't performing as you expect on
the new hardware.
Dave
On 5-Apr-07, at 4:13 PM, John Allgood wrote:
We run multiple postmasters because we can shutdown one postmaste
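A quick way to check that suspicion is to time the same full sequential scan of
a large table on the old box and on the new one; if a plain scan is much slower
on the new hardware, the problem is below Postgres, in the SAN or its
configuration. A minimal sketch in psql (the table name is only an example):

    \timing
    -- force a full sequential read of a big table that exists on both servers;
    -- run it twice: the first run measures disk, the second mostly cache
    SELECT count(*) FROM orders;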
We run multiple postmasters because we can shut down one postmaster/database
without affecting the other postmasters/databases. Each database is a
division in our company. If we had everything under one postmaster and
something happened to it, the whole company would be down.
On Apr 5, 2007, at 2:56 PM, Mark Lewis wrote:
...
[snipped for brevity]
...
Not to hijack this thread, but has anybody here tested the behavior of
PG on a file system with OS-level caching disabled via forcedirectio or
by using an inherently non-caching file system such as ocfs2?
I've been
On Thu, 5 Apr 2007, Ron wrote:
At 11:19 AM 4/5/2007, Scott Marlowe wrote:
On Thu, 2007-04-05 at 00:32, Tom Lane wrote:
> "James Mansion" <[EMAIL PROTECTED]> writes:
> > > Right --- the point is not the interface, but whether the drive is
> > > built
> > > for reliability or to hit a low p
On Thu, 5 Apr 2007, [EMAIL PROTECTED] wrote:
I'm curious to know why you're on xfs (I've been too chicken to stray from
ext3).
better support for large files (although postgres does tend to try to
keep the file size down by going with multiple files), and also for more
files
the multiple
On 5-Apr-07, at 3:33 PM, John Allgood wrote:
The hard thing about running multiple postmasters is that you have to tune
each one separately. For most of the databases I have limited max_connections
to 30-50, depending on the database. What would be reasonable values for
effective_cache_size and r
...
[snipped for brevity]
...
>
> > Not to hijack this thread, but has anybody here tested the behavior of
> > PG on a file system with OS-level caching disabled via forcedirectio or
> > by using an inherently non-caching file system such as ocfs2?
> >
> > I've been thinking about tr
On Thu, 5 Apr 2007, John Allgood wrote:
The hard thing about running multiple postmasters is that you have to tune
each one separately. For most of the databases I have limited max_connections
to 30-50, depending on the database. What would be reasonable values for
effective_cache_size and random_page
The hard thing about running multiple postmasters is that you have to tune
each one separately. For most of the databases I have limited max_connections
to 30-50, depending on the database. What would be reasonable values for
effective_cache_size and random_page_cost? I think I have these at the defaults.
Also
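With several postmasters sharing one machine, the memory-related settings have
to be budgeted per cluster rather than sized as if each cluster owned all of
the RAM. A hedged sketch of where to start looking; the numbers are
illustrative assumptions, not recommendations from this thread:

    -- see what a cluster currently uses (the stock postgresql.conf values
    -- if they were never changed)
    SHOW effective_cache_size;
    SHOW random_page_cost;

    -- effective_cache_size is only a planner hint about how much OS cache to
    -- expect, so set it roughly to the RAM left after all clusters'
    -- shared_buffers, divided among the clusters that are busy at the same
    -- time (permanent changes belong in each cluster's postgresql.conf).
    -- random_page_cost can be lowered from the default of 4 when the working
    -- set is mostly cached; verify any change with EXPLAIN ANALYZE.
    SET random_page_cost = 3;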
>Server drives are generally more tolerant of higher temperatures. I.e.
>the failure rate for consumer and server class HDs may be about the same
>at 40 degrees C, but by the time the internal case temps get up to 60-70
>degrees C, the consumer grade drives will likely be failing at a much
>higher
John Allgood wrote:
Hello All
I sent this message to the admin list and it never got through so I
am trying the performance list.
We moved our application to a new machine last night. It is a Dell
PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2GHz. 8GB Memory. Th
On Thu, 5 Apr 2007, John Allgood wrote:
Hello All
I sent this message to the admin list and it never got through so I
am trying the performance list.
We moved our application to a new machine last night. It is a Dell
PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2GHz. 8GB Memo
Hello All
I sent this message to the admin list and it never got through so I
am trying the performance list.
We moved our application to a new machine last night. It is a Dell
PowerEdge 6950 2X Dual Core. AMD Opteron 8214 2.2GHz. 8GB Memory. The
machine is running R
Wow, it's excellent! :))
Probably the next step is:
ALTER TABLE CACHE ON/OFF;
just to force a given table to stay in the cache. What do you think?...
Rgds,
-Dimitri
On 4/5/07, Josh Berkus wrote:
Dimitri,
> Probably another helpful solution may be to implement:
>
>ALTER TABLE LOGGING OFF
On Apr 5, 2007, at 1:27 PM, Mark Lewis wrote:
On Thu, 2007-04-05 at 13:09 -0500, Erik Jones wrote:
On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:
Hi,
A page may be double buffered in PG's buffer pool and in the OS's buffer cache.
Other DBMSs, like DB2 and Oracle, provide a direct I/O option
On 4/5/07, Erik Jones <[EMAIL PROTECTED]> wrote:
On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:
Erik Jones wrote:
On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:
Hi,
A page may be double buffered in PG's buffer pool and in the OS's buffer cache.
Other DBMSs, like DB2 and Oracle, provide a dire
On Apr 5, 2007, at 1:22 PM, Xiaoning Ding wrote:
Erik Jones wrote:
On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:
Hi,
A page may be double buffered in PG's buffer pool and in the OS's buffer cache.
Other DBMSs, like DB2 and Oracle, provide a direct I/O option to eliminate
double buffering
I have a query hitting a view that takes 0.15s on 8.1 but 6s on 8.2.3:
select party_id from clan_members_v cm, clans c
where cm.clan_id = c.id
and c.type = 'standard'
The problem seems to be that clan_members_v contains a call to an
expensive function:
create or replace view clan_member
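The missing piece Tom Lane asks for elsewhere in this thread is EXPLAIN ANALYZE
output from both servers; comparing the two plans is what shows where the
view's expensive function ends up being evaluated. A minimal sketch, assuming
nothing beyond the names quoted in the query:

    -- run on both the 8.1 and the 8.2.3 server, then compare the two plans
    EXPLAIN ANALYZE
    SELECT party_id
    FROM clan_members_v cm, clans c
    WHERE cm.clan_id = c.id
      AND c.type = 'standard';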
Not to hijack this thread, but has anybody here tested the behavior of
PG on a file system with OS-level caching disabled via forcedirectio or
by using an inherently non-caching file system such as ocfs2?
I've been thinking about trying this setup to avoid double-caching now
that the 8.x series sc
Erik Jones wrote:
On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:
Hi,
A page may be double buffered in PG's buffer pool and in the OS's buffer cache.
Other DBMSs, like DB2 and Oracle, provide a direct I/O option to eliminate
double buffering. I noticed there were discussions on the list. But
I c
On Apr 5, 2007, at 12:09 PM, Xiaoning Ding wrote:
Hi,
A page may be double buffered in PG's buffer pool and in the OS's buffer cache.
Other DBMSs, like DB2 and Oracle, provide a direct I/O option to eliminate
double buffering. I noticed there were discussions on the list. But
I cannot find simi
Michael Dengler wrote:
Hi,
I've looked through the README in the contrib/adminpack dir, but it
doesn't really say what these functions do or how to use them.
It is for use with pgAdmin.
J
Any help?
Thanks
miguel
Dimitri,
> Probably another helpful solution may be to implement:
>
>ALTER TABLE LOGGING OFF/ON;
>
> just to disable/enable WAL?
Actually, a patch similar to this is currently in the queue for 8.3.
--
Josh Berkus
PostgreSQL @ Sun
San Francisco
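Until such a switch exists, the closest thing already available is that
temporary tables are never WAL-logged, so bulk scratch or staging work can skip
WAL today. A hedged sketch; the table and file names are made up for
illustration:

    BEGIN;
    -- temp tables bypass WAL entirely (and disappear at the end of the session)
    CREATE TEMPORARY TABLE staging_load AS
        SELECT * FROM orders LIMIT 0;          -- hypothetical target's structure
    COPY staging_load FROM '/tmp/orders.dat';  -- hypothetical data file
    -- transform as needed, then move the final rows into the real, logged table
    INSERT INTO orders SELECT * FROM staging_load;
    COMMIT;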
Hi,
I've looked through the README in the contrib/adminpack dir, but it doesn't
really say what these functions do or how to use them.
Any help?
Thanks
miguel
Hi,
A page may be double buffered in PG's buffer pool and in the OS's buffer cache.
Other DBMSs, like DB2 and Oracle, provide a direct I/O option to eliminate
double buffering. I noticed there were discussions on the list, but
I cannot find a similar option in PG. Does PG support direct I/O now?
The tu
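There is no direct I/O switch in Postgres itself; as other messages in this
thread note, the usual workaround is a filesystem-level option such as a
forcedirectio mount. What can be inspected from SQL is the Postgres side of
the double buffering. A hedged sketch using the contrib/pg_buffercache module
(installed separately from contrib), following its documented example query:

    -- which relations occupy the most space in shared_buffers right now
    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = c.relfilenode
    JOIN pg_database d ON b.reldatabase = d.oid
                      AND d.datname = current_database()
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

Anything large and hot that shows up here and also sits in the OS page cache is
being cached twice, which is exactly the effect the direct I/O question is about.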
On 5-4-2007 17:58 [EMAIL PROTECTED] wrote:
On Apr 5, 2007, at 4:09 AM, Ron wrote:
BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
Thanks - I received similar private emails with the same advice. I will
change the controller to an LSI MegaRAID SAS 8408E -- any feedback on
this one?
We ha
At 11:19 AM 4/5/2007, Scott Marlowe wrote:
On Thu, 2007-04-05 at 00:32, Tom Lane wrote:
> "James Mansion" <[EMAIL PROTECTED]> writes:
> >> Right --- the point is not the interface, but whether the drive is built
> >> for reliability or to hit a low price point.
>
> > Personally I take the marketi
On 4/5/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
On Apr 5, 2007, at 4:09 AM, Ron wrote:
> BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
Thanks - I received similar private emails with the same advice. I
will change the controller to an LSI MegaRAID SAS 8408E -- any
feedback on this o
[EMAIL PROTECTED] wrote:
On Apr 5, 2007, at 4:09 AM, Ron wrote:
BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
Thanks - I received similar private emails with the same advice. I will
change the controller to an LSI MegaRAID SAS 8408E -- any feedback on
this one?
LSI makes a good contro
On Apr 5, 2007, at 4:09 AM, Ron wrote:
BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
Thanks - I received similar private emails with the same advice. I
will change the controller to an LSI MegaRAID SAS 8408E -- any
feedback on this one?
IME, they are usually the worst of the commod
On Apr 5, 2007, at 8:21 AM, Jeff Frost wrote:
I noticed this behavior on the last Areca-based 8-disk Raptor
system I built. Putting pg_xlog on a separate partition on the same
logical volume was faster than putting it on the large volume. It
was also faster to have 8xRAID10 for OS+data+pg
Ok, well, I dropped the DB and reloaded it and now all seems to be
fine and performing well. I'm not sure what was going on before.
Thanks for everyone's help!
Alex
On 4/3/07, Alex Deucher <[EMAIL PROTECTED]> wrote:
On 4/3/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Alex Deucher" <[EMAIL PROTEC
On Thu, 5 Apr 2007, Scott Marlowe wrote:
I've read some recent contrary advice, specifically advising sharing
all files (pg_xlog, indexes, etc.) on one huge RAID array
and letting the drives load-balance by brute force.
The other, at first almost counter-intuitive, result was that puttin
On Thu, 2007-04-05 at 00:32, Tom Lane wrote:
> "James Mansion" <[EMAIL PROTECTED]> writes:
> >> Right --- the point is not the interface, but whether the drive is built
> >> for reliability or to hit a low price point.
>
> > Personally I take the marketing mumblings about the enterprise drives
> >
On Wed, 2007-04-04 at 09:12, [EMAIL PROTECTED] wrote:
> On Apr 3, 2007, at 6:54 PM, Geoff Tolley wrote:
>
> > But what's likely to make the largest difference in the OP's case
> > (many inserts) is write caching, and a battery-backed cache would
> > be needed for this. This will help mask wri
BE VERY WARY OF USING AN ADAPTEC RAID CONTROLLER!
IME, they are usually the worst of the commodity RAID controllers available.
I've often seen SW RAID outperform them.
If you are going to use this config, Tyan's n3600M (AKA S2932) MB has
a variant that comes with 8 SAS + 6 SATA II connectors.
[EMAIL PROTECTED] wrote:
In a perhaps fitting compromise, I have decided to go with a hybrid
solution:
8*73GB 15k SAS drives hooked up to Adaptec 4800SAS
PLUS
6*150GB SATA II drives hooked up to mobo (for now)
All wrapped in a 16-bay 3U server. My reasoning is that the extra SATA
drives are pra
If the 3U case has a SAS expander in its backplane (which it probably
has?), you should be able to connect all drives to the Adaptec
controller, depending on the case's exact architecture, etc. Those are
another two advantages of SAS: you don't need a controller port for each
hard disk (we have a D