Hi Dave,
thank you for your answers! Here are some comments:
Dave Crooke:
> > * The table just has 5 unused int columns, a timestamp,
> > OIDs, and the bytea column, *no indices*; the bytea storage
> > type is 'extended'; the 16 MB are compressed to approximately
> > half.
> >
>
> Why no indices?
Si
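For illustration, a minimal sketch of a table shaped like the one
described above (names and types are my assumptions, not the actual
schema):

    -- hypothetical names; "5 unused int columns, a timestamp, OIDs,
    -- and the bytea column, no indices"
    CREATE TABLE blob_test (
        c1 int, c2 int, c3 int, c4 int, c5 int,   -- unused int columns
        created timestamp,
        data    bytea                             -- the large payload
    ) WITH OIDS;
    -- 'extended' is the default storage for bytea: out-of-line TOAST
    -- storage with compression allowed
    ALTER TABLE blob_test ALTER COLUMN data SET STORAGE EXTENDED;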
Hi all,
my posting on 2010-01-14 about the performance when writing
bytea to disk caused a longer discussion. While the fact
still holds that the overall PostgreSQL write performance is
roughly 25% of the serial disk I/O performance, this was
compensated for in my special use case here by doing some o
Scott Carey:
> > (2) The tests:
> >
> > Note: The standard speed was about 800MB/40s, so 20MB/s.
> >
> >
> > a)
> > What I changed: fsync=off
> > Result: 35s, so 5s faster.
> >
> >
> > b) like a) but:
> > checkpoint_segments=128 (was 3)
> > autovacuum=off
> >
> > Result: 35s (no change...?!)
Scott Carey:
> Well, something is causing the system to alternate between
> CPU and disk bound here. (see below).
> It would be useful to see what affect the index has.
Ok, I simply deleted the index and repeated the test: I did
not notice any difference. This is probably so because in
fact I am
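For perspective, the arithmetic on those runs:

    800 MB / 40 s = 20 MB/s   (baseline)
    800 MB / 35 s ~ 23 MB/s   (fsync=off; unchanged with more WAL
                               segments and autovacuum off)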
Ivan Voras:
> [I just skimmed this thread - did you increase the number of WAL logs to
> something very large, like 128?]
Yes, I tried even more.
I will be writing data quite constantly in the real scenario
later. So I wonder if increasing WAL logs will have a
positive effect or not: AFAIK when
Pierre Frédéric Caillaud:
> I don't remember if you used TOAST compression or not.
I use 'bytea' and SET STORAGE EXTERNAL for this column.
AFAIK this switches off the compression.
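For reference, the statement behind that setting looks roughly like
this (table and column names are assumptions):

    -- EXTERNAL = out-of-line TOAST storage, but without compression
    ALTER TABLE blob_test ALTER COLUMN data SET STORAGE EXTERNAL;

The default for bytea is EXTENDED, which allows TOAST compression;
EXTERNAL keeps the data out of line but uncompressed.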
Scott Carey:
> You are CPU bound.
>
> 30% of 4 cores is greater than 25%. 25% is one core fully
> used.
I have measured the cores separately. Some of them reached
30%. I am not CPU bound here.
> The postgres compression of data in TOAST is
> probably the problem. I'm assuming its doing Gzip,
fka...@googlemail.com:
> I'll try to execute these tests on a SSD
> and/or Raid system.
FYI:
On a SATA RAID-0 (mid-range hardware) with two recent 1.5 TB
disks and a write performance of 100 MB/s (worst case, up to
200 MB/s max), I get a performance of 18.2 MB/s. Before, with
other disk 43
fka...@googlemail.com:
> I'll try to execute these tests on a SSD
> and/or Raid system.
FYI:
On a recent but mid-range SSD (128 GB, serial write speed
80-140 MB/s, about 100 MB/s on average) the situation was worse
for some reason. There was no difference with fsync=on/off.
Felix
Matthew Wakeling:
> The data needs to be written first to the WAL, in order to provide
> crash-safety. So you're actually writing 1600MB, not 800.
I come back again to saving the WAL to another disk. Now, after
all, I wonder: doesn't the server wait anyway until the WAL has
been written to disk?
So, if true
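Roughly, taking the 800 MB / 40 s figure from the tests above and
assuming the full payload is logged in the WAL as Matthew describes:

    800 MB table/TOAST data + 800 MB WAL = 1600 MB written
    1600 MB / 40 s = 40 MB/s actual disk write rate
    (versus the apparent 800 MB / 40 s = 20 MB/s insert rate)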
Hannu Krosing:
> did you also test this with fsync=off ?
Yes. No significant difference.
> I suspect that what you are seeing is the effect of randomly writing to
> the index files. While sequential write performance can be up to
> 80MB/sec on modern drives, sequential writes are an order of ma
Dave Crooke:
> If you don't need that level of consistency for your 8MB blobs, write them
> to plain files named with some kind of id, and put the id in the database
> instead of the blob.
The problem here is that I later need to give access to the
database via TCP to an external client. This cli
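For what it is worth, Dave's suggestion would look roughly like this
(a sketch with assumed names, storing only an id and a path instead
of the blob itself):

    CREATE TABLE blob_meta (
        id        serial PRIMARY KEY,
        created   timestamp DEFAULT now(),
        file_path text NOT NULL   -- the 8 MB payload lives in this file
    );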
Hannu Krosing:
> On Thu, 2010-01-14 at 21:14 +0100, fka...@googlemail.com wrote:
> > Thanks a lot for your reply.
> >
> > Hannu Krosing:
> >
> > > > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.
> > >
> > > try inserting the same data u
Hello Pierre,
thank you for these useful test commands.
Here is what I did:
Pierre Frédéric Caillaud:
> Try this :
>
> CREATE TABLE test AS SELECT * FROM yourtable;
>
> This will test write speed, and TOAST compression speed.
> Then try this:
(1)
Setting:
* pg_xlog sym'l
Pierre Frédéric Caillaud:
> At install, or use a symlink (they exist on windows too !...)
>
> http://stackoverflow.com/questions/1901405/postgresql-wal-on-windows
Very interesting! It did not help much, though (see my other
posting).
Thank you
Felix
Matthew Wakeling:
> On Thu, 14 Jan 2010, fka...@googlemail.com wrote:
> > Nevertheless: If your explanation covers all what can be
> > said about it then replacing the hard disk by a faster one
> > should increase the performance here (I'll try to check that
> >
Pierre Frédéric Caillaud:
> > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.
>
> Big CPU and slow disk...
>
> You should add another disk just for the WAL -- disks are pretty cheap
> these days.
> Writing the WAL on a second disk is the first thing to do on a
> configuration like
Florian Weimer:
> > Do you have any further idea why 16MB/s seems to be the limit here?
>
> BYTEA deserialization is very slow, and this could be a factor here.
> Have you checked that you are in fact I/O bound?
Could you elaborate on that a bit? It sounds interesting, but I
do not get what you mean
Aidan Van Dyk:
> So, your SATA disk can do 43MB/s of sequential writes, but your example
> is doing:
> 1) Sequential writes to WAL
> 2) Random writes to your index
> 3) Sequential writes to table heap
> 4) Sequential writes to table's TOAST heap
> 5) Any other OS-based FS overhead
Ok, I see. Tha
Thanks a lot for your reply.
Hannu Krosing:
> > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.
>
> try inserting the same data using 4 parallel connections or even 8
> parallel ones.
Interesting idea -- I forgot to mention, though, that 2-3 cores
will soon be occupied with other tasks.
Felix
Thanks a lot for the detailed reply.
Matthew Wakeling:
> On Thu, 14 Jan 2010, fka...@googlemail.com wrote:
> > This takes about 50s, so, 800MB/50s = 16MB/s.
> >
> > However the harddisk (sata) could write 43 MB/s in the worst
> > case! Why is write performance limit
Thank you for your reply.
Ivan Voras:
> Are you doing it locally or over a network? If you are accessing the
> server over a network then it could be the location of the bottleneck.
All is done locally (for now).
Felix
Hello everyone,
I need to increase the write performance when inserting bytea
of 8 MB. I am using PostgreSQL 8.2.4 on Windows with libpq.
The test setting is simple:
I write a byte array (bytea) of 8 MB of random data 100 times
into a table having a binary column (and oids and 3 other
int columns, oids are in
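The per-row statement behind such a test would look roughly like this
(a sketch; the real code goes through libpq with the bytea bound as a
binary parameter, and the names are assumptions):

    -- executed ~100 times, $1 bound to 8 MB of random binary data
    INSERT INTO blob_test (i1, i2, i3, data)
    VALUES (0, 0, 0, $1);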