I originally posted the question below back in Dec 2006, and many
helpful suggestions resulted. Unfortunately, since this was a closet
effort, my official duties pushed further exploration to the back
burner, then I lost my original test environment. So while I can no
longer compare to BigDBMS
On Wed, 10 Jan 2007, Jim C. Nasby wrote:
RAID1 for those of you who have been wondering if the BBU write back cache
mitigates the need for separate WAL (at least on this workload). Those are
the fastest times for each config, but ext2 WAL was always faster than the
other two options. I didn't
On Tue, Jan 09, 2007 at 09:10:51AM -0800, Jeff Frost wrote:
> On Tue, 9 Jan 2007, Jim C. Nasby wrote:
>
> >On Thu, Dec 28, 2006 at 02:15:31PM -0800, Jeff Frost wrote:
> >>When benchmarking various options for a new PG server at one of my
> >>clients, I tried ext2 and ext3 (data=writeback) for
On Tue, 9 Jan 2007, Jim C. Nasby wrote:
On Thu, Dec 28, 2006 at 02:15:31PM -0800, Jeff Frost wrote:
When benchmarking various options for a new PG server at one of my clients,
I tried ext2 and ext3 (data=writeback) for the WAL and it appeared to be
fastest to have ext2 for the WAL. The winning time was 157m46.713s for ext2
On Sun, Jan 07, 2007 at 11:26:01PM -0500, Guy Rouillier wrote:
> Ok, I ran with the settings below, but with
>
> shared_buffers=768MB
> effective_cache_size=2048MB
> fsync=on
>
> This run took 29000 seconds. I'm beginning to think configuration
> changes are not going to buy significant additio
On Thu, Dec 28, 2006 at 02:15:31PM -0800, Jeff Frost wrote:
> When benchmarking various options for a new PG server at one of my clients,
> I tried ext2 and ext3 (data=writeback) for the WAL and it appeared to be
> fastest to have ext2 for the WAL. The winning time was 157m46.713s for
> ext2, 1
Craig A. James wrote:
> Postgres functions like count() and max() are "plug ins" which has huge
> architectural advantages. But in pre-8.1 releases, there was a big
> speed penalty for this: functions like count() were very, very slow,
> requiring a full table scan. I think this is vastly improved
Total runtime: 0.112 ms
-----Original Message-----
From: Joshua D. Drake
Sent: Sunday, January 07, 2007 9:10 PM
To: Adam Rich
Cc: 'Craig A. James'; 'Guy Rouillier'; 'PostgreSQL Performance'
Subject: Re: [PERFORM] High update activity, PostgreSQL vs BigDBMS
Ron wrote:
C= What file system are you using? Unlike BigDBMS, pg does not have its
own native one, so you have to choose the one that best suits your
needs. For update heavy applications involving lots of small updates
jfs and XFS should both be seriously considered.
Ron, thanks for your id
Subject: Re: [PERFORM] High update activity, PostgreSQL vs BigDBMS
"Adam Rich" <[EMAIL PROTECTED]> writes:
> I'm using 8.2 and using order by & limit is still faster than MAX()
> even though MAX() now seems to rewrite to an almost identical plan
> internally.
Dave Cramer wrote:
On 6-Jan-07, at 11:32 PM, Guy Rouillier wrote:
Dave Cramer wrote:
The box has 3 GB of memory. I would think that BigDBMS would be
hurt by this more than PG. Here are the settings I've modified in
postgresql.conf:
As I said you need to set shared_buffers to at least 750MB
On Sun, 2007-01-07 at 20:26 -0600, Adam Rich wrote:
> I'm using 8.2 and using order by & limit is still faster than MAX()
> even though MAX() now seems to rewrite to an almost identical plan
> internally.
Gonna need you to back that up :) Can we get an explain analyze?
> Count(*) still seems to
"Adam Rich" <[EMAIL PROTECTED]> writes:
> I'm using 8.2 and using order by & limit is still faster than MAX()
> even though MAX() now seems to rewrite to an almost identical plan
> internally.
Care to quantify that? AFAICT any difference is within measurement
noise, at least for the case of separ
while PG takes longer than 700ms. Luckily we can design around this issue.
-----Original Message-----
From: Craig A. James
Sent: Sunday, January 07, 2007 5:57 PM
To: Guy Rouillier; PostgreSQL Performance
Subject: Re: [PERFORM] High update activity, PostgreSQL vs BigDBMS
Craig A. James wrote:
The "idiom" to replace count() was
"select col from tbl order by col desc limit 1". It worked miracles for
my app.
Sorry, I meant to write, "the idiom to replace MAX()", not count()... MAX()
was the function that was killing me, 'tho count() also gave me problems.
Craig
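For anyone who wants to reproduce the comparison being debated above, here
is a minimal sketch (table and column names are hypothetical). Both forms
need an index on the column to avoid a full scan; on 8.1 and later the
planner turns MAX() into essentially the same index-backed plan:

    -- hypothetical table and index
    CREATE INDEX tbl_col_idx ON tbl (col);

    -- the pre-8.1 workaround Craig describes
    EXPLAIN ANALYZE SELECT col FROM tbl ORDER BY col DESC LIMIT 1;

    -- on 8.1+ this should produce an almost identical plan
    EXPLAIN ANALYZE SELECT max(col) FROM tbl;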
On 6-Jan-07, at 11:32 PM, Guy Rouillier wrote:
Dave Cramer wrote:
The box has 3 GB of memory. I would think that BigDBMS would be
hurt by this more than PG. Here are the settings I've modified
in postgresql.conf:
As I said you need to set shared_buffers to at least 750MB this is the
starting point, it can actually go higher.
Guy Rouillier wrote:
The application is fairly straightforward, but as you say, what is
working okay with BigDBMS isn't working as well under PG. I'm going to
try other configuration suggestions made by others before I attempt
logic changes. The core logic is unchangeable; millions of rows of
Guy,
The application is fairly straightforward, but as you say, what is
working okay with BigDBMS isn't working as well under PG. I'm going to
try other configuration suggestions made by others before I attempt
logic changes. The core logic is unchangeable; millions of rows of data
in a sin
> Regarding shared_buffers=750MB, the last discussions I remember on this
> subject said that anything over 10,000 (8K buffers = 80 MB) had unproven
> benefits. So I'm surprised to see such a large value suggested. I'll
> certainly give it a try and see what happens.
>
That is old news :) A
Dave Cramer wrote:
The box has 3 GB of memory. I would think that BigDBMS would be hurt
by this more than PG. Here are the settings I've modified in
postgresql.conf:
As I said you need to set shared_buffers to at least 750MB this is the
starting point, it can actually go higher. Addition
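For reference, a minimal postgresql.conf sketch along the lines Dave is
suggesting for a 3 GB box (values are illustrative, not the thread's actual
settings; 8.2 accepts the memory-unit syntax shown):

    shared_buffers = 750MB        # roughly 25% of RAM as a starting point
    effective_cache_size = 2GB    # planner estimate of OS cache, not an allocation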
Craig A. James wrote:
I don't know if you have access to the application's SQL, or the time to
experiment a bit, but unless your schema is trivial and your SQL is
boneheaded simple, you're not going to get equal performance from
Postgres until you do some analysis of your application under real-
On 5-Jan-07, at 9:51 PM, Guy Rouillier wrote:
I've got back access to my test system. I ran another test run
with the same input data set. This time I put pg_xlog on a
different RAID volume (the unused one that I suspect is a software
RAID), and I turned fsync=off in postgresql.conf. I
Guy Rouillier wrote:
I've got back access to my test system. I ran another test run with the
same input data set. This time I put pg_xlog on a different RAID volume
(the unused one that I suspect is a software RAID), and I turned
fsync=off in postgresql.conf. I left the rest of the configuration
I've got back access to my test system. I ran another test run with the
same input data set. This time I put pg_xlog on a different RAID volume
(the unused one that I suspect is a software RAID), and I turned
fsync=off in postgresql.conf. I left the rest of the configuration
alone (all foreign
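For anyone who wants to repeat the pg_xlog move Guy describes, the usual
approach on 8.x is a symlink, done while the server is stopped (paths here
are hypothetical):

    mv /var/lib/pgsql/data/pg_xlog /raid_vol2/pg_xlog
    ln -s /raid_vol2/pg_xlog /var/lib/pgsql/data/pg_xlog

Note that fsync=off is only reasonable for throwaway benchmark runs like
this one; a crash with fsync off can corrupt the cluster.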
On Fri, 2006-12-29 at 07:52 -0500, Ron wrote:
> A= go through each query and see what work_mem needs to be for that
> query to be as RAM resident as possible. If you have enough RAM, set
> work_mem for that query that large. Remember that work_mem is =per
> query=, so queries running in parallel eat the sum of each of their
> work_mem's.
More specifically, you should set the noatime,data=writeback options in
fstab on ext3 partitions for best performance. Correct?
> it doesn't really belong here but ext3 has
> data journaled (data and meta data)
> ordered (meta data journaled but data written before meta data (default))
> journaled
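As a concrete illustration of the fstab question above (device and mount
point are hypothetical; this would be the line for the data partition,
while a WAL-only partition could simply be ext2):

    /dev/sdb1  /var/lib/pgsql/data  ext3  noatime,data=writeback  0 2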
On 2 Jan 2007, at 14:54, Ragnar wrote:
On Tue, 2007-01-02 at 09:04 -0500, Geoffrey wrote:
Alvaro Herrera wrote:
Actually it has been suggested that a combination of ext2 (for WAL) and
ext3 (for data, with data journalling disabled) is a good performer.
On Tue, 2007-01-02 at 09:04 -0500, Geoffrey wrote:
> Alvaro Herrera wrote:
> >
> > Actually it has been suggested that a combination of ext2 (for WAL) and
> > ext3 (for data, with data journalling disabled) is a good performer.
> > AFAIK you don't want the overhead of journalling for the WAL partition
Alvaro Herrera wrote:
Ron wrote:
C= What file system are you using? Unlike BigDBMS, pg does not have
its own native one, so you have to choose the one that best suits
your needs. For update heavy applications involving lots of small
updates jfs and XFS should both be seriously considered.
Tom Lane wrote:
"Shoaib Mir" <[EMAIL PROTECTED]> writes:
Here are my few recommendations that might help you:
[ snip good advice ]
Another thing to look at is whether you are doing inserts/updates as
individual transactions, and if so see if you can "batch" them to
reduce the per-transaction overhead.
You should search the archives for Luke Lonergan's posting about how IO in
Postgresql is significantly bottlenecked because it's not async. A 12 disk
array is going to max out Postgresql's max theoretical write capacity to
disk, and therefore BigRDBMS is always going to win in such a config. You
On Fri, 29 Dec 2006, Alvaro Herrera wrote:
Ron wrote:
C= What file system are you using? Unlike BigDBMS, pg does not have
its own native one, so you have to choose the one that best suits
your needs. For update heavy applications involving lots of small
updates jfs and XFS should both be seriously considered.
Ron wrote:
> C= What file system are you using? Unlike BigDBMS, pg does not have
> its own native one, so you have to choose the one that best suits
> your needs. For update heavy applications involving lots of small
> updates jfs and XFS should both be seriously considered.
Actually it has been suggested that a combination of ext2 (for WAL) and
ext3 (for data, with data journalling disabled) is a good performer.
Hi all,
A= go through each query and see what work_mem needs to be for that
query to be as RAM resident as possible. If you have enough RAM, set
work_mem for that query that large. Remember that work_mem is =per
query=, so queries running in parallel eat the sum of each of their
work_mem's.
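Since work_mem is allocated per sort or hash operation (so a single complex
query can use several multiples of it), one way to follow this advice
without raising it globally is to bump it only for the session running the
big queries. A small sketch (the value is illustrative):

    SET work_mem = '256MB';   -- affects only this session
    -- run the large sort/aggregate query here
    RESET work_mem;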
At 12:46 AM 12/28/2006, Guy Rouillier wrote:
I don't want to violate any license agreement by discussing
performance, so I'll refer to a large, commercial
PostgreSQL-compatible DBMS only as BigDBMS here.
I'm trying to convince my employer to replace BigDBMS with
PostgreSQL for at least some of our Java applications.
"Shoaib Mir" <[EMAIL PROTECTED]> writes:
> Here are my few recommendations that might help you:
> [ snip good advice ]
Another thing to look at is whether you are doing inserts/updates as
individual transactions, and if so see if you can "batch" them to
reduce the per-transaction overhead.
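A minimal sketch of the batching Tom describes (table and values are
hypothetical). Without an explicit transaction every INSERT commits, and
therefore flushes WAL, on its own; grouping the statements pays that cost
once per batch:

    BEGIN;
    INSERT INTO detail (id, val) VALUES (1, 'a');
    INSERT INTO detail (id, val) VALUES (2, 'b');
    -- ... many more rows ...
    COMMIT;

For straight bulk loading, COPY is usually cheaper still.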
Guy,
Did you tune postgresql ? How much memory does the box have? Have you
tuned postgresql ?
Dave
On 28-Dec-06, at 12:46 AM, Guy Rouillier wrote:
I don't want to violate any license agreement by discussing
performance, so I'll refer to a large, commercial PostgreSQL-compatible
DBMS only as BigDBMS here.
Here are my few recommendations that might help you:
- You will need to do table partitioning (
http://www.postgresql.org/docs/current/static/ddl-partitioning.html) as you
are storing quite a lot of data in one table per day.
- You are using a RAID5 setup which is something that can also affect
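A rough sketch of the per-day partitioning suggested in the first
recommendation above, using the inheritance/constraint-exclusion scheme
from the linked docs (table, column, and date are hypothetical):

    CREATE TABLE activity (
        logdate  date NOT NULL,
        detail   text
    );

    CREATE TABLE activity_20070101 (
        CHECK (logdate >= DATE '2007-01-01' AND logdate < DATE '2007-01-02')
    ) INHERITS (activity);

    -- With constraint_exclusion = on, a query filtering on logdate only
    -- scans the matching child tables; inserts have to be routed to the
    -- right child by the application or by a rule/trigger.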
I don't want to violate any license agreement by discussing performance,
so I'll refer to a large, commercial PostgreSQL-compatible DBMS only as
BigDBMS here.
I'm trying to convince my employer to replace BigDBMS with PostgreSQL
for at least some of our Java applications. As a proof of concept