On Fri, Jan 05, 2007 at 12:47:08PM +0100, Mikael Carneholm wrote:
Take a look at the set of partitioning functions I wrote shortly after
the 8.1 release:
http://www.studenter.hb.se/~arch/files/part_functions.sql
You could probably work something out using those functions (as-is, or
as inspiration) together with pgAgent
That query is generated by hibernate, right?
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Pallav
Kalva
Sent: den 15 september 2006 17:10
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Optimize SQL
Hi,
Is there anyway we can optimize this
WRT seek performance, we're doing 2500 seeks per second on the
Sun/Thumper on 36 disks.
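As a quick sanity check on the quoted figure, the per-spindle rate works out like this (the 2500 seeks/s and 36-disk numbers are from the post above):

```python
# Per-disk seek rate implied by the quoted Thumper numbers.
total_seeks_per_s = 2500
disks = 36

per_disk = total_seeks_per_s / disks
print(f"{per_disk:.1f} seeks/s per disk")  # ~69.4
```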
Luke,
Have you had time to run benchmarksql against it yet? I'm just curious
about the IO seeks/s vs. transactions/minute correlation...
/Mikael
-----Original Message-----
From: [EMAIL PROTECTED]
Sent: den 28 juli 2006 11:17
To: Mikael Carneholm; Kjell Tore Fossbakk;
pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Performance with 2 AMD/Opteron 2.6Ghz and 8gig
Mikael,
-----Original Message-----
From: Mikael Carneholm [mailto:[EMAIL PROTECTED]
Sent: Friday, July 28
This is a relatively low end HBA with 1 4Gb FC on it. Max sustained
IO on it is going to be ~320MBps, or about enough for an 8 HD RAID 10 set
made of 75MBps ASTR HDs.
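The sizing claim above can be checked with a small sketch (figures are from the post; RAID 10 write bandwidth is taken as one drive per mirror pair):

```python
# How many 75MBps-ASTR drives a ~320MBps HBA can feed in RAID 10.
hba_mbps = 320             # max sustained IO of the 4Gb FC HBA
disk_astr_mbps = 75        # average sustained transfer rate per drive

pairs = hba_mbps // disk_astr_mbps   # 4 striped mirror pairs
raid10_disks = pairs * 2             # 8 drives total
array_mbps = pairs * disk_astr_mbps  # aggregate write bandwidth
print(raid10_disks, array_mbps)      # 8 300
```

So an 8-drive RAID 10 set lands at ~300MBps, just under the HBA ceiling, which is the "~ enough" in the quote.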
Looking at http://h30094.www3.hp.com/product.asp?sku=2260908&extended=1,
I notice that the controller has an Ultra160 SCSI interface
-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Michael
Stone
Sent: den 17 juli 2006 02:04
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] RAID stripe size question
On Mon, Jul 17, 2006 at 12:52:17AM +0200, Mikael Carneholm wrote:
I have finally gotten my hands
I think the main difference is that the WAL activity is mostly linear,
where the normal data activity is rather random access.
That was what I was expecting, and after reading
http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html I
figured that a different stripe size for the WAL
This is something I'd also like to test, as a common
best-practice these days is to go for a SAME (stripe all, mirror
everything) setup.
From a development perspective it's easier to use SAME as the
developers won't have to think about physical location for new
tables/indices, so if
Unless I'm missing something, the only FC or SCSI HDs of ~147GB capacity are
15K, not 10K.
In the spec we got from HP, they are listed as model 286716-B22
(http://www.dealtime.com/xPF-Compaq_HP_146_8_GB_286716_B22) which seems to run
at 10K. Don't know how old those are, but that's what we got
Title: RAID stripe size question
I have finally gotten my hands on the MSA1500 that we ordered some time ago. It has 28 x 10K 146Gb drives, currently grouped as 10 (for WAL) + 18 (for data). There's only one controller (an Emulex), but I hope performance won't suffer too much from that. Raid
Do you really need to create one *DB* per client - that is, is one
schema (in the same DB) per client out of the question? If not, I would
look into moving all reference tables (read-only data, constants and
such) into a common schema (with read permission granted to each
client/role), that way
For my application there is very little info I can share. Maybe less
than 10 out of 100, actually, so I'm not sure it's worth it ...
Ok, so 90% of the tables are being written to - this either means that
your application uses very few constants, or that it has access to
constants that are stored
We've seen similar results with our EMC CX200 (fully equipped) when
compared to a single (1) SCSI disk machine. For sequential reads/writes
(import, export, updates on 5-10 30M+ row tables), performance is
downright awful. A big DB update took 5-6h in pre-prod (single SCSI),
and 10-14?h (don't
What filesystem are you using - ext2/etx3/xfs/jfs/...? Does the SCSI
controller have a battery backed cache? For ext3, mounting it with
data=writeback should give you quite a boost in write performance.
What benchmark tool are you using - is it by any chance BenchmarkSQL?
(since you mention that
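The data=writeback option mentioned above is set at mount time. A sketch of what that looks like (the device and mount point here are made-up examples; this mode journals metadata only, not data blocks, so it is only advisable together with a battery-backed write cache):

```
# /etc/fstab entry: journal metadata only, not data blocks
/dev/sdb1  /var/lib/pgsql  ext3  noatime,data=writeback  0 0

# or remount an already-mounted filesystem:
mount -o remount,noatime,data=writeback /var/lib/pgsql
```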
FWIW, my own experiments with tests like this suggest that PG is at
worst about 2x slower than mysql for trivial queries. If you'd reported
a result in that ballpark I'd have accepted it as probably real. 6x I
don't believe though ...
OTOH, my tests using BenchmarkSQL
My server is the HP DL585 (quad, dual-core Opteron, 16GB RAM) with 4 HD
bays run by a HP SmartArray 5i controller. I have 15 10K 300GB drives
and 1 15K 150GB drive (don't ask how that happened).
Our server will be a DL385 (dual, dual-core Opteron, 16Gb RAM), and the
28 disks (10K 146Gb) in the
of battery backed write cache...
Regards,
Mikael
-----Original Message-----
From: Guoping Zhang [mailto:[EMAIL PROTECTED]
Sent: den 28 april 2006 07:35
To: Mikael Carneholm; pgsql-performance@postgresql.org
Cc: Guoping Zhang (E-mail)
Subject: RE: [PERFORM] how unsafe (or worst scenarios) when
Get a SCSI controller with a battery backed cache, and mount the disks
with data=writeback (if you use ext3). If you lose power in the middle
of a transaction, the battery will ensure that the write operation still
completes. With an asynchronous write setup like this, fsync operations will
return
If I'm reading the original post correctly, the biggest issue is
likely to be that the 14 disks on each 2Gbit fibre channel will be
throttled to 200Mb/s by the channel, when in fact you could expect
(in RAID 10
arrangement) to get about 7 * 70 Mb/s = 490 Mb/s.
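The bottleneck arithmetic above can be restated directly (figures as quoted in the post: ~200MB/s usable per 2Gbit FC channel, ~70MB/s sustained per drive):

```python
# Channel vs. spindle bandwidth for one 2Gbit FC loop of 14 disks.
channel_mbps = 200          # usable throughput of a 2Gbit FC channel
disks = 14
disk_mbps = 70              # sustained rate per drive
mirror_pairs = disks // 2   # RAID 10: each pair contributes one stripe member

array_mbps = mirror_pairs * disk_mbps
bottleneck = min(channel_mbps, array_mbps)
print(array_mbps, bottleneck)  # 490 200
```

So the spindles could deliver ~490MB/s, but the single channel caps the loop at ~200MB/s.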
The two controllers and
2006 17:25
To: Mikael Carneholm
Cc: Pgsql performance
Subject: Re: [PERFORM] Hardware: HP StorageWorks MSA 1500
On Thu, 20 Apr 2006, Mikael Carneholm wrote:
We're going to get one for evaluation next week (equipped with dual
2Gbit HBA:s and 2x14 disks, iirc). Anyone with experience from them,
performance wise?
Regards,
Mikael
This should be fixed by the changes I made recently in
choose_bitmap_and
--- it wasn't being aggressive about pruning overlapping AND conditions
when a sub-OR was involved. It's possible the new coding is *too*
aggressive, and will reject indexes that it'd be profitable to include;
but at least
Since my first post in this series, data has been successfully exported
from the proprietary db (in CSV format) and imported into postgres
(PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.4
20050721 (Red Hat 3.4.4-2)) using
COPY. The tablespace holding the tables+indexes is
This is where a last_vacuumed (and last_analyzed) column in
pg_statistic(?) would come in handy. Each time vacuum or analyze has
finished, update the row for the specific table that was
vacuumed/analyzed with a timestamp in the last_vacuumed/last_analyzed
column. No more guessing maybe I haven't
can spare some time.
- Mikael
-----Original Message-----
From: Matthew T. O'Connor [mailto:[EMAIL PROTECTED]
Sent: den 28 mars 2006 00:43
To: Mikael Carneholm
Cc: Postgresql Performance
Subject: Re: [PERFORM] count(*) performance
Mikael Carneholm wrote:
This is where a last_vacuumed
On Mon, 2006-03-20 at 15:59 +0100, Mikael Carneholm wrote:
This gives that 10Gb takes ~380s = ~27Mb/s (with fsync=off), compared to
the raw dd result (~75.5Mb/s).
I assume this difference is due to:
- simultaneous WAL write activity (assumed: for each byte written to the
table, at least
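The dd vs. COPY comparison above, restated as a rough sketch (numbers are from the post, treating 10Gb as 10240MB):

```python
# COPY throughput vs. raw dd, per the figures quoted above.
data_mb = 10 * 1024         # "10Gb" of table data
seconds = 380               # measured COPY time with fsync=off
dd_mbps = 75.5              # raw sequential write measured with dd

copy_mbps = data_mb / seconds
slowdown = dd_mbps / copy_mbps
print(f"{copy_mbps:.1f} MB/s, {slowdown:.1f}x slower than dd")  # 26.9 MB/s, 2.8x slower than dd
```

The ~2.8x gap is what the WAL-write explanation below is trying to account for.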
To: Mikael Carneholm
Cc: Simon Riggs; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Migration study, step 1: bulk write
performanceoptimization
On Wed, Mar 22, 2006 at 10:04:49AM +0100, Mikael Carneholm wrote:
It does (LOG: checkpoints are occurring too frequently (2 seconds apart
Assuming you are joining on Table 1.id = Table 2.id - do you have indexes
on both columns? Have you analyzed your tables + indexes (are there statistics
available?) If those criteria are not met, it is unlikely that Postgres will
choose an index scan.
-----Original Message-----
From: [EMAIL PROTECTED]
Ok, here's the deal:
I am responsible for an exciting project: evaluating the migration of a
medium/large application for a well-known Swedish car/truck manufacturer from a
proprietary DB to Postgres. The size of the database is currently about 50Gb,
annual growth depending on sales, but probably