On Tue, 2005-06-07 at 23:50 -0400, Tom Lane wrote:
Regarding 2GB memory allocation, though, we *could* really use support for
work_mem and maintenance_work_mem of 2GB.
Again, let's see some evidence that it's worth putting effort into that.
(Offhand it seems this is probably an easier fix
Hi,
I've installed the same installation of my reiser-fs-postgres-8.0.1
with xfs.
Now my pgbench shows the following results:
[EMAIL PROTECTED]:~ pgbench -h 127.0.0.1 -p 5432 -c150 -t5 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 150
Hi Everyone,
I'm having a performance issue with version 7.3.4 which I first thought was disk
IO related; however, now it seems the problem is caused by really slow
commits. This
is running on Redhat 8.
Basically I'm taking a .sql file with inserts of about 15,000 lines and 'ing
straight
Hi,
I'm having problems with the query optimizer and FULL OUTER JOIN on
PostgreSQL 7.4. I cannot get it to use my indexes with full outer joins.
I might be naive, but I think that it should be possible?
I have two BIG tables (virtually identical) with 3 NOT NULL columns
Station_id, TimeObs,
On Tue, Jun 07, 2005 at 11:50:33PM -0400, Tom Lane wrote:
Again, let's see some evidence that it's worth putting effort into that.
(Offhand it seems this is probably an easier fix than changing the
shared-memory allocation code; but conventional wisdom is that really
large values of work_mem are
On Wed, Jun 08, 2005 at 09:36:31AM +0200, Martin Fandel wrote:
I've installed the same installation of my reiser-fs-postgres-8.0.1
with xfs.
Do you have pg_xlog on a separate partition? I've noticed that ext2
seems to have better performance than xfs for the pg_xlog workload (with
all the
On Wed, Jun 08, 2005 at 11:37:40 +0200,
Kim Bisgaard [EMAIL PROTECTED] wrote:
Hi,
I'm having problems with the query optimizer and FULL OUTER JOIN on
PostgreSQL 7.4. I cannot get it to use my indexes with full outer joins.
I might be naive, but I think that it should be possible?
I
Hi,
ah you're right. :) I forgot to symlink the pg_xlog-dir to another
partition. Now it's a bit faster than before. But not faster than
the same installation with reiserfs:
[EMAIL PROTECTED]:~ pgbench -h 127.0.0.1 -p 5432 -c150 -t5 pgbench
starting vacuum...end.
transaction type: TPC-B (sort
Hi Bruno,
Thanks for the moral support! I feel so too - but I am confident it will
show up soon.
W.r.t. your rewrite of the query, I get this: "ERROR: could not devise a
query plan for the given query", but no further details. I will try Google.
Regards,
Kim.
Bruno Wolff III wrote:
On Wed,
On 6/2/05, K C Lau [EMAIL PROTECTED] wrote:
...
select DISTINCT ON (PlayerID) PlayerID,AtDate from Player where
PlayerID='0' order by PlayerID desc, AtDate desc;
The Player table has primary key (PlayerID, AtDate) representing data over
time and the query gets the latest data for a
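(Editor's note, not from the thread: since the WHERE clause pins PlayerID to a
single value, DISTINCT ON does no extra work here; a sketch of an equivalent
form that, on most versions, can walk the (PlayerID, AtDate) primary-key index
backwards and stop after one row:)

  SELECT PlayerID, AtDate
  FROM Player
  WHERE PlayerID = '0'
  ORDER BY PlayerID DESC, AtDate DESC
  LIMIT 1;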
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Martin Fandel wrote:
|
| I've tested dump's and copy's with the xfs-installation. It's
| faster than before. But the transactions-query's are still slower
| than the reiserfs-installation.
|
| Are any fstab-/mount-options recommended for xfs?
|
Kim Bisgaard [EMAIL PROTECTED] writes:
explain analyse
SELECT station_id, timeobs, temp_grass, temp_dry_at_2m
FROM temp_dry_at_2m a
FULL OUTER JOIN temp_grass b
USING (station_id, timeobs)
WHERE station_id = 52981
AND timeobs = '2004-1-1 0:0:0';
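(Editor's note: Bruno's actual rewrite is not quoted in this digest, but a
common workaround for 7.4's planner here is to push the constant restrictions
into each side of the full outer join, so both inputs can use an index on
(station_id, timeobs); a sketch:)

  SELECT station_id, timeobs, b.temp_grass, a.temp_dry_at_2m
  FROM (SELECT * FROM temp_dry_at_2m
        WHERE station_id = 52981 AND timeobs = '2004-1-1 0:0:0') a
  FULL OUTER JOIN
       (SELECT * FROM temp_grass
        WHERE station_id = 52981 AND timeobs = '2004-1-1 0:0:0') b
  USING (station_id, timeobs);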
On 6/8/05, K C Lau [EMAIL PROTECTED] wrote:
Both keys are text fields. Does it make any difference if PlayerID were
integer?
It can make a difference in speed and integrity. If the column is an
integer, the storage on disk could be smaller for the column and the
related indexes. If the
It seems that Postgres is estimating that all rows in a 50k row table
will be returned, but only one should match. The query runs slow because
of the seqscan. When I set enable_seqscan to off, then it does an index
scan and it runs quickly.
I've set the statistics target on the index to 100 and
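(Editor's note: a sketch of the diagnostic steps described above; table and
column names are placeholders, not from the original post. enable_seqscan = off
is a diagnostic tool only, not a production setting:)

  ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 100;
  ANALYZE mytable;
  SET enable_seqscan = off;
  EXPLAIN ANALYZE SELECT * FROM mytable WHERE mycol = 42;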
Kim Bisgaard [EMAIL PROTECTED] writes:
W.r.t. your rewrite of the query, I get this ERROR: could not devise a
query plan for the given query but no further details - I will try google
Which PG version are you using again? That should be fixed in 7.4.3
and later.
I'm tasked with specifying a new machine to run a web application
prototype. The machine will be serving web pages with a Postgresql
backend; we will be making extensive use of plpgsql functions. No
database tables are likely to go over a million rows during the
prototype period.
We are
Hi,
Rory Campbell-Lange wrote:
We are considering two RAID1 system disks, and two RAID1 data disks.
We've avoided buying Xeons. The machine we are looking at looks like
this:
Rackmount Chassis - 500W PSU / 4 x SATA Disk Drive Bays
S2882-D - Dual Opteron / AMD 8111 Chipset / 5 x PCI
I just pushed 8.0.3 to production on Sunday, and haven't had time to
really monitor it under load, so I can't tell if it's helped the context
switch problem yet or not.
Neil Conway wrote:
Tom Arthurs wrote:
Yes, shared buffers in postgres are not used for caching
Shared buffers in
Three options:
9500-4LP with Raptor drives 10k rpm, raid 1 + raid 1
9500-8LP with Raptor drives 10k rpm, raid 10 + raid 1
Go for SCSI (LSI Megaraid or ICP Vortex) and take 10k drives
If you are going with Raptor drives use the LSI 150-6 SATA RAID
with the BBU.
Sincerely,
Joshua D. Drake
Hi,
I just pushed 8.0.3 to production on Sunday, and haven't had time to
really monitor it under load, so I can't tell if it's helped the context
switch problem yet or not.
Attached is a vmstat 5 output from one of our machines. This is a dual
Xeon 3.2 GHz with EM64T and 8 GB RAM, running
Allan Wang [EMAIL PROTECTED] writes:
It seems that Postgres is estimating that all rows in a 50k row table
will be returned, but only one should match.
I think this is the same issue fixed here:
2005-04-03 21:43 tgl
* src/backend/optimizer/path/: costsize.c (REL7_4_STABLE),
Quoting Tom Lane [EMAIL PROTECTED]:
Kim Bisgaard [EMAIL PROTECTED] writes:
W.r.t. your rewrite of the query, I get this ERROR: could not devise a
query plan for the given query but no further details - I will try google
Which PG version are you using again? That should be fixed in 7.4.3
[Junaili Lie - Wed at 12:34:32PM -0700]
select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group
by f.p_id will work.
But I understand this is not the most efficient way. Is there another
way to rewrite this query? (maybe one that involves order by desc
limit 1)
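(Editor's note: one rewrite in the spirit of the "order by desc" idea is
DISTINCT ON with a matching composite index; a sketch using the thread's
person/food names, index name hypothetical. Whether the planner actually uses
the index for this depends on the version:)

  CREATE INDEX food_pid_id ON food (p_id, id);

  SELECT DISTINCT ON (p_id) p_id, id
  FROM food
  ORDER BY p_id, id DESC;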
eventually, try
We have had four databases serving our web site, but due to licensing issues we have had to take two out of production, and we are looking to bring those two onto PostgreSQL very quickly, with an eye toward moving everything in the longer term. The central web DBs are all copies of
Joshua D. Drake wrote:
Yes - we have seen with oracle 64 bit that there can be as much as a
10% hit moving from 32 - but we make it up big time with large
db-buffer sizes that drastically
Well for Opteron you should also gain from the very high memory
bandwidth and the fact that it has I
Hi,
The suggested query below took forever when I tried it.
In addition, as suggested by Tobias, I also tried to create index on
food(p_id, id), but still no goal (same query plan).
Here is the explain:
TEST1=# explain select f.p_id, max(f.id) from Food f, Person p where
(f.p_id = p.id) group by
We are considering two RAID1 system disks, and two RAID1 data disks.
We've avoided buying Xeons. The machine we are looking at looks like
this:
Rackmount Chassis - 500W PSU / 4 x SATA Disk Drive Bays
S2882-D - Dual Opteron / AMD 8111 Chipset / 5 x PCI Slots
2x - (Dual) AMD Opteron
Tom Arthurs wrote:
I just pushed 8.0.3 to production on Sunday, and haven't had time to
really monitor it under load, so I can't tell if it's helped the context
switch problem yet or not.
8.0 is unlikely to make a significant difference -- by current sources
I meant the current CVS HEAD
How about

SELECT p.id, f.max_id
FROM
person AS p
LEFT JOIN
(SELECT p_id, max(id) AS max_id
FROM food
GROUP BY p_id) AS f
ON p.id = f.p_id

Create an index on food (p_id, id)
This may not gain any performance, but worth a try. I don't have any
data
On Wed, Jun 08, 2005 at 15:48:27 -0700,
Junaili Lie [EMAIL PROTECTED] wrote:
Hi,
The suggested query below took forever when I tried it.
In addition, as suggested by Tobias, I also tried to create index on
food(p_id, id), but still no goal (same query plan).
Here is the explain:
TEST1=#
This is a pattern which I've seen many of times. I call it a best
choice query -- you can easily match a row from one table against any
of a number of rows in another, the trick is to pick the one that
matters most. I've generally found that I want the query results to
show more than the columns
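(Editor's note: a minimal sketch of this "best choice" pattern as a correlated
subquery, reusing the thread's person/food names; LATERAL did not exist in this
era, so an ORDER BY ... LIMIT 1 subquery stands in for it:)

  SELECT p.id,
         (SELECT f.id
          FROM food f
          WHERE f.p_id = p.id
          ORDER BY f.id DESC
          LIMIT 1) AS latest_food_id
  FROM person p;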
As a follow up to this I've installed on another test Redhat 8 machine
with
7.3.4 and slow inserts are present; however on another machine with ES3
the same 15,000 inserts are about 20 times faster. Does anyone know of a
change
that would affect this, kernel or Redhat release?
Steve
-Original
Dear Group!
Thank you for all the support you all have been
providing from time to time. I have a small question: How do I find the
actual size of the database? Awaiting your replies,
Shan.
contrib/dbsize in the postgresql distribution.
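(Editor's note: contrib/dbsize, as shipped with 7.4/8.0, provides SQL-callable
size functions once its install script has been loaded into the database; a
sketch with a placeholder database and table name. Check the README of the
version you have installed for the exact function names:)

  SELECT database_size('mydb');      -- bytes used by the whole database
  SELECT relation_size('mytable');   -- bytes used by one table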
Shanmugasundaram Doraisamy wrote:
Dear Group!
Thank you for all the support you all have been
providing from time to time. I have a small question: How do I find the
actual size of the database? Awaiting your replies,