Hello,
I've found a strange problem in my database (8.4.4, but also 9.0beta3;
default postgresql.conf, shared_buffers raised to 256MB).
EXPLAIN ANALYZE SELECT ...
Total runtime: 4.782 ms
Time: 25,970 ms
SELECT ...
...
(21 rows)
Time: 23,042 ms
Test done in psql, connected to the server via a Unix socket.
On 26 July 2010 at 10:35, Piotr Gasidło
qua...@barbara.eu.org wrote:
\d+ ocache_2010_12
Table "public.ocache_2010_12"
Indexes:
(...)
Index missing from the listing:
ocache_2010_12_ukey UNIQUE, btree (oc_date_from, oc_date_to,
oc_h_id, oc_transport,
There is a partitioned table with 2 partitions:
drop table if exists p cascade;
create table p (
id bigint not null primary key,
ts timestamp);
create table p_actual ( check (ts is null) ) inherits (p);
create table p_historical ( check (ts is not null) ) inherits (p);
-- I skipped the
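With the CHECK constraints above, enabling constraint exclusion lets the planner prove some children irrelevant and drop them from the plan. A minimal sketch (exact plan output varies by version):

```sql
-- 'partition' (the default from 8.4 on) applies exclusion to
-- inheritance children appearing in the query.
SET constraint_exclusion = partition;

-- "ts IS NOT NULL" contradicts p_actual's CHECK (ts IS NULL), so the
-- planner should scan only p and p_historical here.
EXPLAIN SELECT * FROM p WHERE ts IS NOT NULL;
```

Note that exclusion is proven at plan time, which is exactly why plan time grows with the number of partitions.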
On 26/07/10 16:35, Piotr Gasidło wrote:
Hello,
I've found a strange problem in my database (8.4.4, but also 9.0beta3;
default postgresql.conf, shared_buffers raised to 256MB).
EXPLAIN ANALYZE SELECT ...
Total runtime: 4.782 ms
Time: 25,970 ms
SELECT ...
...
(21 rows)
Time: 23,042 ms
Yeb Havinga wrote:
Greg Smith wrote:
Put it on ext3, toggle on noatime, and move on to testing. The
overhead of the metadata writes is the least of the problems when
doing write-heavy stuff on Linux.
I ran a pgbench test, plus a power-failure test during pgbench, with a 3
year old computer
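Greg's noatime suggestion corresponds to an ext3 mount option; a hypothetical /etc/fstab entry (device name and mount point are placeholders, not from the original setup):

```
# /etc/fstab -- skip access-time metadata writes on the data volume
/dev/sdb1  /var/lib/postgresql  ext3  noatime  0  2
```

The same effect can be had without a reboot via `mount -o remount,noatime` on the relevant mount point.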
On 26.07.10 12:15, Craig Ringer wrote:
On 26/07/10 16:35, Piotr Gasidło wrote:
Hello,
I've found a strange problem in my database (8.4.4, but also 9.0beta3;
default postgresql.conf, shared_buffers raised to 256MB).
EXPLAIN ANALYZE SELECT ...
Total runtime: 4.782 ms
Time: 25,970 ms
On Sun, 25 Jul 2010, Yeb Havinga wrote:
Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at
http://tinypic.com/r/x5e846/3
Does your latency graph really have milliseconds as the y axis? If so,
this device is really slow - some requests have a latency of more than a
second!
Matthew
On 26/07/10 17:25, Vitalii Tymchyshyn wrote:
On 26.07.10 12:15, Craig Ringer wrote:
On 26/07/10 16:35, Piotr Gasidło wrote:
Hello,
I've found a strange problem in my database (8.4.4, but also 9.0beta3;
default postgresql.conf, shared_buffers raised to 256MB).
EXPLAIN ANALYZE SELECT
2010/7/26 Vitalii Tymchyshyn tiv...@gmail.com:
On 26.07.10 12:15, Craig Ringer wrote:
(...)
Piotr: You can try preparing your statement and then measuring the EXECUTE time
to check whether the difference is planning time.
You are right.
I've done a simple PREPARE (without params, etc).
PREPARE query as
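A generic sketch of that approach — the statement name and query here are placeholders, not the original statement:

```sql
-- Plan once at PREPARE time; with many partitions this is where
-- the 20+ seconds would go.
PREPARE q AS SELECT count(*) FROM p WHERE ts IS NULL;

\timing on
EXECUTE q;     -- repeated EXECUTEs pay only execution cost
DEALLOCATE q;
```

If EXECUTE comes back in milliseconds while the raw SELECT takes seconds, the gap is planning, not execution.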
Matthew Wakeling wrote:
On Sun, 25 Jul 2010, Yeb Havinga wrote:
Graph of TPS at http://tinypic.com/r/b96aup/3 and latency at
http://tinypic.com/r/x5e846/3
Does your latency graph really have milliseconds as the y axis?
Yes
If so, this device is really slow - some requests have a latency of
Matthew Wakeling wrote:
Does your latency graph really have milliseconds as the y axis? If so,
this device is really slow - some requests have a latency of more than
a second!
Have you tried that yourself? If you generate one of those with
standard hard drives and a BBWC under Linux, I
Yeb Havinga wrote:
Please remember that particular graphs are from a read/write pgbench
run on a bigger than RAM database that ran for some time (so with
checkpoints), on a *single* $435 50GB drive without a BBU RAID controller.
To get similar *average* performance results you'd need to put
On Mon, 26 Jul 2010, Greg Smith wrote:
Matthew Wakeling wrote:
Does your latency graph really have milliseconds as the y axis? If so, this
device is really slow - some requests have a latency of more than a second!
Have you tried that yourself? If you generate one of those with standard
Matthew Wakeling wrote:
Apologies, I was interpreting the graph as the latency of the device,
not all the layers in-between as well. There isn't any indication in
the email with the graph as to what the test conditions or software are.
That info was in the email preceding the graph mail, but I
On Mon, Jul 26, 2010 at 10:26 AM, Yeb Havinga yebhavi...@gmail.com wrote:
Matthew Wakeling wrote:
Apologies, I was interpreting the graph as the latency of the device, not
all the layers in-between as well. There isn't any indication in the email
with the graph as to what the test conditions
Matthew Wakeling wrote:
Yeb also made the point - there are far too many points on that graph
to really tell what the average latency is. It'd be instructive to
have a few figures, like only x% of requests took longer than y.
Average latency is the inverse of TPS. So if the result is, say,
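The arithmetic behind that claim, with made-up numbers (4 clients, 800 TPS are illustrative, not from the benchmark):

```python
# Hypothetical pgbench result: 4 clients reporting 800 TPS overall.
clients = 4
tps = 800.0

# With each client issuing one transaction at a time, average latency
# is clients / TPS (for a single client, simply 1 / TPS).
avg_latency_ms = clients / tps * 1000.0
print(f"{avg_latency_ms:.1f} ms")  # 5.0 ms
```

Averages say nothing about the tail, though, which is why percentile figures ("x% of requests took longer than y") are the more instructive summary.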
Greg Spiegelberg wrote:
Speaking of the layers in-between, has this test been done with the
ext3 journal on a different device? Maybe the purpose is wrong for
the SSD. Use the SSD for the ext3 journal and the spindled drives for
filesystem?
The main disk bottleneck on PostgreSQL
Greg Smith g...@2ndquadrant.com wrote:
Yeb's data is showing that a single SSD is competitive with a
small array on average, but with better worst-case behavior than
I'm used to seeing.
So, how long before someone benchmarks a small array of SSDs? :-)
-Kevin
--
Sent via pgsql-performance mailing list
Greg Smith wrote:
Yeb Havinga wrote:
Please remember that particular graphs are from a read/write pgbench
run on a bigger than RAM database that ran for some time (so with
checkpoints), on a *single* $435 50GB drive without a BBU RAID controller.
To get similar *average* performance results
Yeb Havinga wrote:
To get similar *average* performance results you'd need to put about
4 drives and a BBU into a server. The
Please forget this question, I now see it in the mail I'm replying to.
Sorry for the spam!
-- Yeb
Yeb Havinga wrote:
I did some ext3, ext4, xfs, jfs and also ext2 tests on the just-in-memory
read/write test (scale 300). No real winners or losers, though ext2
isn't really faster, and the manual "fix (y)?" fsck prompt during boot makes
it impractical in its standard configuration.
That's what
On Mon, Jul 26, 2010 at 12:40 PM, Greg Smith g...@2ndquadrant.com wrote:
Greg Spiegelberg wrote:
Speaking of the layers in-between, has this test been done with the ext3
journal on a different device? Maybe the purpose is wrong for the SSD. Use
the SSD for the ext3 journal and the spindled
On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith g...@2ndquadrant.com wrote:
Yeb Havinga wrote:
I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
read/write test. (scale 300) No real winners or losers, though ext2 isn't
really faster and the manual need for fix (y) during
Joshua D. Drake wrote:
On Fri, 2010-07-23 at 15:03 -0700, Gerald Fontenay wrote:
The PostgreSQL partitioning system is aimed to support perhaps a
hundred inherited tables. You can expect to get poor performance on
queries if you create 1000 of them.
Hi,
Why is it that you would
On Mon, Jul 26, 2010 at 03:23:20PM -0600, Greg Spiegelberg wrote:
On Mon, Jul 26, 2010 at 1:45 PM, Greg Smith g...@2ndquadrant.com wrote:
Yeb Havinga wrote:
I did some ext3,ext4,xfs,jfs and also ext2 tests on the just-in-memory
read/write test. (scale 300) No real winners or losers, though
Greg Spiegelberg wrote:
I know I'm talking development now but is there a case for a pg_xlog
block device to remove the file system overhead and guaranteeing your
data is written sequentially every time?
It's possible to set the PostgreSQL wal_sync_method parameter in the
database to
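For instance, a postgresql.conf fragment — whether open_datasync maps to O_DIRECT writes is platform- and version-dependent, so benchmark before relying on it:

```
# postgresql.conf -- candidate values include open_datasync, fdatasync,
# fsync, fsync_writethrough and open_sync; the default varies by platform.
wal_sync_method = open_datasync
```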
Piotr Gasidło qua...@barbara.eu.org writes:
Ok. Is there any way to tune postgresql, to shorten planning time for
such queries?
You've got a ridiculously large number of partitions. Use fewer.
regards, tom lane
I'm involved with the setup of replacement hardware for one of our
systems. It is going to be using Ubuntu Lucid Server (kernel 2.6.32, I
believe). The issue of filesystems has reared its head again.
I note that ext4 is now the default for Lucid, what do folks think about
using it: stable