Dan Langille wrote:
I'm using PostgreSQL 8.1.4 and I'm trying to force the planner to use
an index. With the index, I get execution times of 0.5 seconds.
Without, it's closer to 2.5 seconds.
Compare these two sets of results (also provided at
http://rafb.net/paste/results/ywcOZP66.html
should it appear poorly for
Subbiah, Stalin wrote:
Actually these servers will be upgraded to 8.1.4 in a couple of months.
even so, you could get some bad data in there.
http://www.postgresql.org/docs/8.0/static/release.html . Go through the
old release notes and you'll find various race conditions, crashes etc.
Here you go with explain analyze.
Actually these servers will be upgraded to 8.1.4 in a couple of months.
Here you go with explain analyze.
# explain analyze SELECT *
FROM EVENTLOG
WHERE EVENTTIME>'07/23/06 16:00:00' AND EVENTTIME<'08/22/06 16:00:00'
AND (OBJDOMAINID='tzRh39d0d91luNGT1weIUjLvFIcA'
OR OBJID='tzRh39d0d91
Michael Fuhr <[EMAIL PROTECTED]> writes:
> On Tue, Aug 22, 2006 at 06:16:54PM -0700, Craig A. James wrote:
>> Is there a way to move a tablespace to a new location without a
>> dump/restore?
> The last paragraph of the Tablespaces documentation might be helpful:
> http://www.postgresql.org/docs/8
Subbiah, Stalin wrote:
Hello All,
This query runs forever and ever. Given the nature of this table (lots of
inserts/deletes/queries), I vacuum it every half hour to keep the holes
reusable, and run a nightly vacuum analyze to update the optimizer. We've
got an index on eventtime only. Running it for the current day uses an index range scan
On Tue, Aug 22, 2006 at 06:16:54PM -0700, Craig A. James wrote:
> Is there a way to move a tablespace to a new location without a
> dump/restore? I, er, this hypothetical guy, knows he can move it and put a
> symbolic link in for /disk2, but this is somewhat unsatisfactory since
> "/disk2" woul
Suppose, hypothetically of course, someone lacked foresight, and put a tablespace somewhere with a
dumb name, like "/disk2", instead of using a symbolic link with a more descriptive name.
And then /disk2 needs to be renamed, say to "/postgres_data", and this (hypothetical)
DBA realizes he has
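The symlink trick under discussion can be sketched like this. The paths below are throwaway temp directories standing in for /disk2 and /postgres_data, purely to illustrate the mechanics; on a real cluster you would stop the postmaster first (e.g. pg_ctl stop), do the move on the actual directories, and restart.

```shell
# Demonstrate the rename-plus-symlink move on stand-in directories.
work=$(mktemp -d)
mkdir "$work/disk2"                        # stand-in for the badly named tablespace dir
touch "$work/disk2/PG_VERSION"             # stand-in for tablespace contents
mv "$work/disk2" "$work/postgres_data"     # the rename itself
ln -s "$work/postgres_data" "$work/disk2"  # old recorded path keeps resolving
readlink "$work/disk2"
```

Because the catalog still records the old path, the symlink is what keeps the cluster working after the rename.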
Hello All,
This query runs forever and ever. Given the nature of this table (lots of
inserts/deletes/queries), I vacuum it every half hour to keep the holes
reusable, and run a nightly vacuum analyze to update the optimizer. We've
got an index on eventtime only. Running it for the current day uses an index range
scan
On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:
> Hi Jeff,
>
> My experience with the 2950 seemed to indicate that RAID10x6 disks did
> not perform as well as RAID5x6. I believe I posted some numbers to
> illustrate this in the post you mentioned.
>
Very interesting. I always hear that p
On Tue, 2006-08-22 at 20:10 +0200, Marinos Yannikos wrote:
> Hello,
>
> we're looking into the reason why we are getting warnings about
> transaction ID wraparound despite a daily "vacuumdb -qaz". Someone is
> claiming that VACUUM without FULL cannot reassign XIDs properly when
> max_fsm_pages was set too low
Hi Jeff,
My experience with the 2950 seemed to indicate that RAID10x6 disks did
not perform as well as RAID5x6. I believe I posted some numbers to
illustrate this in the post you mentioned.
If I remember correctly, the numbers were pretty close, but I was
expecting RAID10 to significantly beat R
This question is related to the thread:
http://archives.postgresql.org/pgsql-performance/2006-08/msg00152.php
but I had some questions.
I am looking at setting up two general-purpose database servers,
replicated with Slony. Each server I'm looking at has the following
specs:
Dell PowerEdge 2950
-
Ulrich Habel <[EMAIL PROTECTED]> writes:
> Does anything speak against this hack?
Only that it was done years ago.
As Alvaro mentions, if you are using a non-C locale then you need
non-default index opclasses to get it to work. Non-C locales usually
have index sort orders that don't play nice with
Hi all,
I'm really glad to see all the test results people are posting here. In
fact, I used info from the archives to put together our first "big"
database host:
-Tyan dual-core/dual-cpu mainboard (
-One Opteron 270 2.0GHz (although our vendor gave us two for some reason)
-Chenbro 3U case (
Here it is; it's the first time I got tps > 400
10 clients:
[EMAIL PROTECTED]:/pgsql/database]pgbench -c 10 -t 1 -v -d pgbench
2>/dev/null
pghost: pgport: (null) nclients: 10 nxacts: 1 dbName: pgbench
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 10
number of transactio
As I mentioned, I haven't changed the defaults at all yet:
Fsync is still on...
shared_buffers = 1000
max_fsm_pages = 2
max_connections = 40
work_mem = 1024
effective_cache_size = 1000
random_page_cost = 4
I'm not sure how much the dual core woodcrests and faster memory are
helping my system
Marty Jia wrote:
Bucky
My best result is around 380. I believe your hardware is more efficient:
no matter how I change the conf parameters, I can get no improvement. I
even turned fsync off.
Do you stay constant if you use 40 clients versus 20?
What are your values for the foll
Bucky
My best result is around 380. I believe your hardware is more efficient:
no matter how I change the conf parameters, I can get no improvement. I
even turned fsync off.
What are your values for the following parameters?
shared_buffers = 8
max_fsm_pages = 35
max_connections = 1000
I would guess that you are not running vacuumdb as a user with permission to
vacuum the postgres or template1 databases. Try telling vacuumdb to log in
as postgres or whatever your superuser account is called.
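A minimal form of that suggestion (the superuser account name is an assumption; yours may differ):

```shell
# vacuum and analyze every database, connecting as the superuser so the
# postgres and template1 databases are covered too
vacuumdb -a -z -U postgres
```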
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On
Marty,
Here's pgbench results from a stock FreeBSD 6.1 amd64/PG 8.1.4 install
on a Dell Poweredge 2950 with 8gb ram, 2x3.0 dual-core woodcrest (4MB
cache/socket) with 6x300GB 10k SAS drives:
pgbench -c 10 -t 1 -d bench 2>/dev/null
pghost: pgport: (null) nclients: 10 nxacts: 1 dbName: ben
Marinos Yannikos wrote:
> Hello,
>
> we're looking into the reason why we are getting warnings about
> transaction ID wraparound despite a daily "vacuumdb -qaz". Someone is
> claiming that VACUUM without FULL cannot reassign XIDs properly when
> max_fsm_pages was set too low (it says so here too, but this is rather old: http://
Thomas Samson wrote:
> On 8/22/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> >Ulrich Habel wrote:
> >> Hello all,
> >> had an idea of optimizing a query that may work generally.
> >>
> >> In case a 'column' is indexed, following two alterations could be done
> >> I think:
> >>
> >> A)
> >>
> >>
Hello,
we're looking into the reason why we are getting warnings about
transaction ID wraparound despite a daily "vacuumdb -qaz". Someone is
claiming that VACUUM without FULL cannot reassign XIDs properly when
max_fsm_pages was set too low (it says so here too, but this is rather
old: http://
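A quick way to see how close each database actually is to wraparound, for anyone debugging this kind of warning (connection details are assumptions; this needs a running cluster):

```shell
# report the transaction-ID age of each database; ages approaching
# ~2 billion are the danger zone for wraparound
psql -d template1 -c "SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;"
```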
On 8/22/06, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
Ulrich Habel wrote:
> Hello all,
> had an idea of optimizing a query that may work generally.
>
> In case a 'column' is indexed, following two alterations could be done
> I think:
>
> A)
>
> select ... where column ~ '^Foo' --> Seq Scan
Ulrich Habel wrote:
> Hello all,
> had an idea of optimizing a query that may work generally.
>
> In case a 'column' is indexed, following two alterations could be done
> I think:
>
> A)
>
> select ... where column ~ '^Foo' --> Seq Scan
This is not true. You can make this query use an i
Hello all,
had an idea of optimizing a query that may work generally.
In case a 'column' is indexed, following two alterations could be done
I think:
A)
select ... where column ~ '^Foo' --> Seq Scan
into that:
select ... where column BETWEEN 'Foo' AND 'FooZ' --> Index Scan
of co
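The same effect can be had without rewriting queries by hand: as Tom Lane's reply in this thread notes, a btree index built with a pattern operator class serves anchored patterns even in a non-C locale, and the planner performs the prefix-to-range rewrite itself. The table and column names below are made up for illustration, and this assumes a reachable database:

```shell
psql -d mydb <<'SQL'
-- index usable for LIKE 'Foo%' / ~ '^Foo' regardless of locale
CREATE INDEX t_col_pattern_idx ON t (col text_pattern_ops);
-- the planner turns the anchored pattern into an index range scan
EXPLAIN SELECT * FROM t WHERE col LIKE 'Foo%';
SQL
```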
Marty Jia wrote:
Here is iostat when running pgbench:
avg-cpu:  %user  %nice  %sys  %iowait  %idle
          26.17   0.00  8.25    23.17  42.42
You are a little IO bound and fairly CPU bound. I would be curious
whether your performance goes down if you increase the number of connection
Ron
Here is our hardware
Dual Intel Xeon 2.8GHz
6GB RAM
Linux 2.4 kernel
RedHat Enterprise Linux AS 3
200GB for PGDATA on 3Par, ext3
50GB for WAL on 3Par, ext3
RAID 10, using 3Par virtual volume technology across ~200 physical FC
disks. 4 virtual disks for PGDATA, striped with LVM into one volu
At 04:45 PM 8/21/2006, Marty Jia wrote:
I've exhausted all the performance tuning ideas I can think of, such as the
following parameters
shared_buffers
fsync
max_fsm_pages
max_connections
shared_buffers
work_mem
max_fsm_pages
effective_cache_size
random_page_cost
All of this comes =after= the Get the Correct HW (
Here is iostat when running pgbench:
avg-cpu:  %user  %nice  %sys  %iowait  %idle
          26.17   0.00  8.25    23.17  42.42
Device:  tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
sda     0.00        0.00        0.00         0         0
sda1
--- Here is vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  1  15416  18156  73372 4348488    1    1     3     2    4    1  2  1  2  2
--- Here
The scaling factor is 20.
I used -v and 2>/dev/null, and now I get
tps = 389.796376 (excluding connections establishing)
This is the best I can get so far.
Thanks
-Original Message-
From: Mark Lewis [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 22, 2006 10:32 AM
To: Marty Jia
Cc: Joshua D. D
Oh - and it's useful to know if you are CPU bound, or IO bound. Check top or vmstat to get an idea of that.
Alex
On 8/22/06, Alex Turner <[EMAIL PROTECTED]> wrote: First things first, run a bonnie++ benchmark, and post the numbers. That will give a good indication of raw IO performance, and is ofte
First things first, run a bonnie++ benchmark, and post the numbers. That will give a good indication of raw IO performance, and is often the first indication of problems separate from the DB. We have seen pretty bad performance from SANs in the past. How many FC lines do you have running to you
On Tue, 2006-08-22 at 08:16, Marty Jia wrote:
> Hi, Mark
>
> Thanks, here is our hardware info:
>
> RAID 10, using 3Par virtual volume technology across ~200 physical FC
> disks. 4 virtual disks for PGDATA, striped with LVM into one volume, 2
> virtual disks for WAL, also striped. SAN attached
Hi Joshua,
Thanks for the info... but I already have the backend id. I was
trying to get the process id of the client application. The client is
using libpq and running on the same workstation. We have approximately
22 different clients running and it would help to isolate the client
prog
Well, at least on my test machines running gnome-terminal, my pgbench
runs tend to get throttled by gnome-terminal's lousy performance to no
more than 300 tps or so. Running with 2>/dev/null to throw away all the
detailed logging gives me 2-3x improvement in scores. Caveat: in my
case the db is o
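A concrete form of that advice (database name and transaction count are illustrative; this assumes an initialized pgbench database):

```shell
# discard the per-transaction chatter so terminal rendering doesn't
# become the bottleneck and throttle the measured tps
pgbench -c 10 -t 10000 -v bench 2>/dev/null
```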
"Ravindran G - TLS, Chennai." <[EMAIL PROTECTED]> writes:
> We are using PostgreSQL 7.1 cygwin installed on Windows 2000 (2 GB Memory,
> P4).
Egad :-(
If you are worried about performance, get off 7.1. Even if you are not
worried about performance, get off 7.1. It *will* eat your data someday.
Joshua,
Here it is:
shared_buffers = 8
fsync = on
max_fsm_pages = 35
max_connections = 1000
work_mem = 65536
effective_cache_size = 61
random_page_cost = 3
Here is the pgbench command I used:
pgbench -c 10 -t 1 -d HQDB
Thanks
Marty
-Original Message-
From: Joshua D. Drake [mailto:[
Is there a reason you are not upgrading to PostgreSQL 8.1? It will run natively on Windoze, and will give you much better performance. 7.1 is way out of date, and has a lot of bad issues in it. Upgrading will most likely fix this issue.
Chris
On 8/22/06, Ravindran G - TLS, Chennai. <[EMAIL PROTECTE
Hi, Mark
Thanks, here is our hardware info:
RAID 10, using 3Par virtual volume technology across ~200 physical FC
disks. 4 virtual disks for PGDATA, striped with LVM into one volume, 2
virtual disks for WAL, also striped. SAN attached with Qlogic SAN
surfer multipathing to load balance each LUN
Hi,
We are using PostgreSQL 7.1 cygwin installed on Windows 2000 (2 GB Memory,
P4).
We understand that the maximum number of connections that can be set is 64
in PostgreSQL 7.1.
The performance is very slow, and sometimes our application cannot connect
to the database because of this.
On Mon, Aug 21, 2006 at 02:50:51PM -0700, Jeff Davis wrote:
the NetApp over NFS, so I am not sure what performance to expect. Any
suggestions about using network storage like this for the database?
Don't. Unless you're using a very small (toy-scale) database, the netapp
storage is way too expe
I'm using PostgreSQL 8.1.4 and I'm trying to force the planner to use
an index. With the index, I get execution times of 0.5 seconds.
Without, it's closer to 2.5 seconds.
Compare these two sets of results (also provided at
http://rafb.net/paste/results/ywcOZP66.html
should it appear poorly for