On 8/5/09 7:12 AM, "Merlin Moncure" wrote:
> On Tue, Aug 4, 2009 at 4:40 PM, Tom Lane wrote:
>> Scott Carey writes:
>>> There are a handful of other compression algorithms very similar to LZO in
>>> performance / compression level under various licenses.
>>> LZO is just the best known and most widely used.
On Tue, Aug 4, 2009 at 4:40 PM, Tom Lane wrote:
> Scott Carey writes:
>> There are a handful of other compression algorithms very similar to LZO in
>> performance / compression level under various licenses.
>> LZO is just the best known and most widely used.
>
> And after we get done with the license question, we need to ask about patents.
Scott Carey writes:
> There are a handful of other compression algorithms very similar to LZO in
> performance / compression level under various licenses.
> LZO is just the best known and most widely used.
And after we get done with the license question, we need to ask about
patents. The compres
On 8/4/09 8:30 AM, "Alvaro Herrera" wrote:
> Merlin Moncure wrote:
>> On Mon, Aug 3, 2009 at 5:30 PM, PFC wrote:
>>>
>>>> lzo is much, much, (much) faster than zlib. Note, I've tried several
>>>
>>> decompression speed is even more awesome...
>>>
>>>> times to contact the author to get clarification on licensing terms
Merlin Moncure wrote:
> On Mon, Aug 3, 2009 at 5:30 PM, PFC wrote:
> >
> >> lzo is much, much, (much) faster than zlib. Note, I've tried several
> >
> > decompression speed is even more awesome...
> >
> >> times to contact the author to get clarification on licensing terms
> >> and have been unable to get a response.
On Mon, Aug 3, 2009 at 5:30 PM, PFC wrote:
>
>> lzo is much, much, (much) faster than zlib. Note, I've tried several
>
> decompression speed is even more awesome...
>
>> times to contact the author to get clarification on licensing terms
>> and have been unable to get a response.
>
> lzop and the LZO library are distributed under the terms of the GNU General Public License.
> lzo is much, much, (much) faster than zlib. Note, I've tried several
decompression speed is even more awesome...
> times to contact the author to get clarification on licensing terms
> and have been unable to get a response.
lzop and the LZO library are distributed under the terms of the GNU General Public License.
On Mon, Aug 3, 2009 at 2:56 PM, Tom Lane wrote:
> I don't see anything very contradictory here. What you're demonstrating
> is that it's nice to be able to throw a third CPU at the compression
> part of the problem. That's likely to remain true if we shift to a
> different compression algorithm.
> I get very different (contradictory) behavior. Server with fast RAID, 32GB
> RAM, 2 x 4 core 3.16GHz Xeon 54xx CPUs. CentOS 5.2
> 8.3.6
That's a very different setup from my (much less powerful) box, so that
would explain it...
> No disk wait time during any test. One test beforehand was used to prime
> the disk cache.
On 8/3/09 11:56 AM, "Tom Lane" wrote:
> Scott Carey writes:
>> I get very different (contradictory) behavior. Server with fast RAID, 32GB
>> RAM, 2 x 4 core 3.16GHz Xeon 54xx CPUs. CentOS 5.2
>> 8.3.6
>> No disk wait time during any test. One test beforehand was used to prime
>> the disk cache.
Scott Carey writes:
> I get very different (contradictory) behavior. Server with fast RAID, 32GB
> RAM, 2 x 4 core 3.16GHz Xeon 54xx CPUs. CentOS 5.2
> 8.3.6
> No disk wait time during any test. One test beforehand was used to prime
> the disk cache.
> 100% CPU in the below means one core fully
On 7/31/09 4:01 PM, "PFC" wrote:
> On Fri, 31 Jul 2009 19:04:52 +0200, Tom Lane wrote:
>
>> Greg Stark writes:
>>> On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane wrote:
>>>> I did some tracing and verified that pg_dump passes data to deflate()
>>>> one table row at a time. I'm not sure about the performance
>>>> implications of that, but it does seem like it might be something to
>>>> look into.
On Fri, 31 Jul 2009 19:04:52 +0200, Tom Lane wrote:
> Greg Stark writes:
>> On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane wrote:
>>> I did some tracing and verified that pg_dump passes data to deflate()
>>> one table row at a time. I'm not sure about the performance
>>> implications of that, but it does seem like it might be something to
>>> look into.
Greg Stark writes:
> On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane wrote:
>> I did some tracing and verified that pg_dump passes data to deflate()
>> one table row at a time. I'm not sure about the performance
>> implications of that, but it does seem like it might be something to
>> look into.
> I suspect if this was a problem the zlib people would have added
> internal buffering ages ago.
On Thu, Jul 30, 2009 at 10:07 AM, Rauan Maemirov wrote:
> Unfortunately had to downgrade back to 8.3. Now having troubles with
> that and still solving them.
>
> For future upgrade, what are the basic steps?
>
>> Was the database analyzed recently?
> Hm... there was something like auto analyzer in serverlog
That's true. I tried to lower work_mem from 192 to 64, and it caused a
total slowdown.
By the way, are there any performance tips for tuning joins? I noticed
that my joins on 8.4 slowed down; on 8.3 they were a bit faster.
2009/7/31 Scott Marlowe :
> On Thu, Jul 30, 2009 at 10:10 PM, Greg Smith wrote:
Scott Carey wrote:
> On 7/30/09 11:24 AM, "Stefan Kaltenbrunner" wrote:
>> Kevin Grittner wrote:
>>> Tom Lane wrote:
>>>> "Kevin Grittner" writes:
>>>>> Since the dump to custom format ran longer than the full pg_dump
>>>>> piped directly to psql would have taken, the overall time to use
>>>>> this technique is clearly longer for our databases on our hardware.
On Thu, Jul 30, 2009 at 10:10 PM, Greg Smith wrote:
> On Thu, 30 Jul 2009, Rauan Maemirov wrote:
>
>> maintenance_work_mem = 1GB
>> work_mem = 192MB
>> shared_buffers = 7680MB
>> max_connections = 80
>> My box is Nehalem 2xQuad 2.8 with RAM 32Gb
>
> While it looks like you sorted out your issue downthread, I wanted to
On Thu, 30 Jul 2009, Rauan Maemirov wrote:
> maintenance_work_mem = 1GB
> work_mem = 192MB
> shared_buffers = 7680MB
> max_connections = 80
> My box is Nehalem 2xQuad 2.8 with RAM 32Gb
While it looks like you sorted out your issue downthread, I wanted to
point out that your setting for work_mem could b
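For context on that warning: work_mem is allocated per sort or hash
operation, per backend, not once per server. With the values quoted above, a
rough worst case is 80 connections x 192MB, roughly 15GB for sorts alone, on
top of 7680MB of shared_buffers, out of 32GB of RAM, which is why a setting
that looks harmless for a single query can become a problem under load.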
Greg Stark wrote:
> I suspect if this was a problem the zlib people would have added
> internal buffering ages ago. I find it hard to believe we're not the
> first application to use it this way.
I think that most uses of this library are on entire files or streams.
They may have felt that ad
On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane wrote:
> I did some tracing and verified that pg_dump passes data to deflate()
> one table row at a time. I'm not sure about the performance
> implications of that, but it does seem like it might be something to
> look into.
I suspect if this was a problem the zlib people would have added
internal buffering ages ago. I find it hard to believe we're not the
first application to use it this way.
Scott Carey writes:
> On 7/30/09 2:53 PM, "Tom Lane" wrote:
>> Scott Carey writes:
>>> Gzip does have some quirky performance behavior depending on the chunk size
>>> of data you stream into it.
>>
>> Can you enlarge on that comment? I'm not sure that pg_dump is aware
>> that there's anything to worry about there.
On 7/30/09 2:53 PM, "Tom Lane" wrote:
> Scott Carey writes:
>> Gzip does have some quirky performance behavior depending on the chunk size
>> of data you stream into it.
>
> Can you enlarge on that comment? I'm not sure that pg_dump is aware
> that there's anything to worry about there.
>
>
Hey guys, thanks for the help. I solved the problems. The reason was a
bad query that I'd accidentally committed right after upgrading.
PostgreSQL 8.4 is perfect! Analyze works like a charm, and MUCH better
than in 8.3.
2009/7/31 Kevin Grittner :
> Tom Lane wrote:
>> Scott Carey writes:
>>> Gzip
Tom Lane wrote:
> Scott Carey writes:
>> Gzip does have some quirky performance behavior depending on the
>> chunk size of data you stream into it.
>
> Can you enlarge on that comment? I'm not sure that pg_dump is aware
> that there's anything to worry about there.
If the library used here i
Scott Carey writes:
> Gzip does have some quirky performance behavior depending on the chunk size
> of data you stream into it.
Can you enlarge on that comment? I'm not sure that pg_dump is aware
that there's anything to worry about there.
regards, tom lane
"Kevin Grittner" writes:
> One thing I've been wondering about is what, exactly, is compressed in
> custom format. Is it like a .tar.gz file, where the compression is a
> layer over the top, or are individual entries compressed?
Individual entries. Eyeball examination of a dump file shows that
Scott Carey wrote:
> Gzip does have some quirky performance behavior depending on the
> chunk size of data you stream into it.
Yeah, I've run into that before. If we're sending each individual
datum to a gzip function rather than waiting until we've got a
decent-size buffer, that could explain it.
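To make the buffering idea concrete, here is a rough, hypothetical C sketch
(not pg_dump's actual code; the buffer sizes and function names are made up
for illustration): rows are staged into a fixed-size buffer and zlib's
deflate() is called once per large chunk instead of once per row.

/* Buffer many rows and feed zlib's deflate() one large chunk at a time.
 * Hypothetical sketch only.  Compile with: cc buffered_deflate.c -lz */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define STAGE_SIZE (64 * 1024)   /* staging buffer: ~64 kB of row data   */
#define OUT_SIZE   (64 * 1024)   /* output buffer for compressed bytes   */

static char   stage[STAGE_SIZE];
static size_t stage_len = 0;
static Bytef  out[OUT_SIZE];

/* Compress everything currently staged with a single deflate() loop. */
static void flush_stage(z_stream *zs, int flush_mode, FILE *archive)
{
    zs->next_in  = (Bytef *) stage;
    zs->avail_in = (uInt) stage_len;
    do {
        zs->next_out  = out;
        zs->avail_out = OUT_SIZE;
        deflate(zs, flush_mode);                 /* one call per big chunk */
        fwrite(out, 1, OUT_SIZE - zs->avail_out, archive);
    } while (zs->avail_out == 0);                /* repeat if output filled */
    stage_len = 0;
}

/* Called once per row: deflate() only runs when the staging buffer fills.
 * (Assumes rowlen <= STAGE_SIZE, for brevity.) */
static void write_row(z_stream *zs, const char *row, size_t rowlen, FILE *archive)
{
    if (stage_len + rowlen > STAGE_SIZE)
        flush_stage(zs, Z_NO_FLUSH, archive);
    memcpy(stage + stage_len, row, rowlen);
    stage_len += rowlen;
}

int main(void)
{
    z_stream zs = {0};
    deflateInit(&zs, Z_DEFAULT_COMPRESSION);
    FILE *archive = fopen("rows.z", "wb");

    for (int i = 0; i < 100000; i++)             /* pretend table rows     */
        write_row(&zs, "example row data\n", 17, archive);

    flush_stage(&zs, Z_FINISH, archive);         /* drain zlib at the end  */
    deflateEnd(&zs);
    fclose(archive);
    return 0;
}

The behavior being reported would correspond to calling deflate() from the
per-row path with avail_in set to a single row's length, which gives zlib far
less data to work with on each call.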
On 7/30/09 1:58 PM, "Kevin Grittner" wrote:
> Scott Carey wrote:
>
>> Now, what needs to be known with the pg_dump is not just how fast
>> compression can go (assuming its gzip) but also what the duty cycle
>> time of the compression is. If it is single threaded, there is all
>> the network and disk time to cut out of this, as well as all the CPU
Scott Carey wrote:
> Now, what needs to be known with the pg_dump is not just how fast
> compression can go (assuming its gzip) but also what the duty cycle
> time of the compression is. If it is single threaded, there is all
> the network and disk time to cut out of this, as well as all the CPU
On 7/30/09 1:15 PM, "Tom Lane" wrote:
> Scott Carey writes:
>> Dump needs to be parallelized or at least pipelined to use more cores. COPY
>> on one thread, compression on another?
>
> We already do that (since compression happens on the pg_dump side).
>
> regards, tom lane
On 7/30/09 11:58 AM, "Kevin Grittner" wrote:
> Scott Carey wrote:
>
>> Max GZIP compression speed with the newest Intel CPUs is something
>> like 50MB/sec (it is data dependent, obviously -- it is usually
>> closer to 30MB/sec).
>
> Applying 30MB/sec to the 70GB accounts for 40 minutes. If those
> numbers are good, there's something else at play here.
Scott Carey writes:
> Dump needs to be parallelized or at least pipelined to use more cores. COPY
> on one thread, compression on another?
We already do that (since compression happens on the pg_dump side).
regards, tom lane
On 30-7-2009 20:46 Scott Carey wrote:
> Of course compression has a HUGE effect if your I/O system is half-decent.
> Max GZIP compression speed with the newest Intel CPUs is something like
> 50MB/sec (it is data dependent, obviously -- it is usually closer to
> 30MB/sec). Max gzip decompression ranges
On 7/30/09 11:24 AM, "Stefan Kaltenbrunner" wrote:
> Kevin Grittner wrote:
>> Tom Lane wrote:
>>> "Kevin Grittner" writes:
>>>> Since the dump to custom format ran longer than the full pg_dump
>>>> piped directly to psql would have taken, the overall time to use
>>>> this technique is clearly longer for our databases on our hardware.
On 7/30/09 11:14 AM, "Stefan Kaltenbrunner" wrote:
> Tom Lane wrote:
>> "Kevin Grittner" writes:
>>> Since the dump to custom format ran longer than the full pg_dump
>>> piped directly to psql would have taken, the overall time to use this
>>> technique is clearly longer for our databases on our hardware.
Scott Carey wrote:
> Max GZIP compression speed with the newest Intel CPUs is something
> like 50MB/sec (it is data dependent, obviously -- it is usually
> closer to 30MB/sec).
Applying 30MB/sec to the 70GB accounts for 40 minutes. If those
numbers are good, there's something else at play here.
On 7/30/09 11:14 AM, "Kevin Grittner" wrote:
> Tom Lane wrote:
>> "Kevin Grittner" writes:
>>> Since the dump to custom format ran longer than the full pg_dump
>>> piped directly to psql would have taken, the overall time to use
>>> this technique is clearly longer for our databases on our hardware.
Kevin Grittner wrote:
> Tom Lane wrote:
>> "Kevin Grittner" writes:
>>> Since the dump to custom format ran longer than the full pg_dump
>>> piped directly to psql would have taken, the overall time to use
>>> this technique is clearly longer for our databases on our hardware.
>> Hmmm ... AFAIR there isn't a good reason for dump to custom format to
>> take longer than plain text dump, except for applying compression.
Tom Lane wrote:
"Kevin Grittner" writes:
Since the dump to custom format ran longer than the full pg_dump
piped directly to psql would have taken, the overall time to use this
technique is clearly longer for our databases on our hardware.
Hmmm ... AFAIR there isn't a good reason for dump to c
Tom Lane wrote:
> "Kevin Grittner" writes:
>> Since the dump to custom format ran longer than the full pg_dump
>> piped directly to psql would have taken, the overall time to use
>> this technique is clearly longer for our databases on our hardware.
>
> Hmmm ... AFAIR there isn't a good reason for dump to custom format to
> take longer than plain text dump, except for applying compression.
Tom Lane wrote:
> Hmmm ... AFAIR there isn't a good reason for dump to custom format
> to take longer than plain text dump, except for applying
> compression. Maybe -Z0 would be worth testing? Or is the problem
> that you have to write the data to a disk file rather than just
> piping it?
I
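For anyone wanting to reproduce Tom's suggested comparison, the test could
look roughly like this (the database and file names are placeholders):

  time pg_dump -Fc -Z0 mydb > mydb_z0.dump    # custom format, compression disabled
  time pg_dump -Fc mydb > mydb_default.dump   # custom format, default compression
  time pg_dump mydb | psql -d target_db       # plain dump piped straight to psql

with the third case matching the "piped directly to psql" timing discussed above.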
"Kevin Grittner" writes:
> Since the dump to custom format ran longer than the full pg_dump
> piped directly to psql would have taken, the overall time to use this
> technique is clearly longer for our databases on our hardware.
Hmmm ... AFAIR there isn't a good reason for dump to custom format to
take longer than plain text dump, except for applying compression. Maybe
-Z0 would be worth testing? Or is the problem that you have to write the
data to a disk file rather than just piping it?
Matthew Wakeling wrote:
> tests on servers over here have indicated that running four "CREATE
> INDEX" statements at the time runs four times as fast, assuming the
> table fits in maintenance_work_mem.
I'm benchmarking a patch to the parallel restore, and just out of
curiosity I've been comparing
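(For reference, parallel restore in 8.4 is driven by pg_restore's -j/--jobs
switch, so the multi-core behaviour discussed in this subthread can be
exercised with something like: pg_restore -j 8 -d mydb mydb.dump -- where the
job count and the database/file names are only placeholders.)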
On Thu, 30 Jul 2009, Kevin Grittner wrote:
> Steve Crawford wrote:
>> benchmarks I've seen suggest that with 8 cores you may even see an
>> almost 8x restore speedup
> I'm curious what sort of data in what environment showed that ratio.
It depends on a lot of things. However, certainly for index creation
Kevin Grittner wrote:
> Steve Crawford wrote:
>> benchmarks I've seen suggest that with 8 cores you may even see an
>> almost 8x restore speedup
> I'm curious what sort of data in what environment showed that ratio.
Was going on memory from a presentation I watched. Reports on the web
h
Steve Crawford wrote:
> benchmarks I've seen suggest that with 8 cores you may even see an
> almost 8x restore speedup
I'm curious what sort of data in what environment showed that ratio.
-Kevin
My additional comments:
t...@fuzzy.cz wrote:
> ...
>> For future upgrade, what are the basic steps?
0. Create test database - work out bugs and performance issues before
going live.
> 1. create database
...cluster. You only need to create the individual database if the
options you select
> OK, this seems to be the cause. What were the original values of the
> config variables? If you've lowered the work_mem and you need to sort a
> lot of data, this may be a problem. What amounts of data are you working
> with? If the data were not analyzed recently, the execution plans will be
> i
Unfortunately had to downgrade back to 8.3. Now having troubles with
that and still solving them.
For future upgrade, what are the basic steps?
> Was the database analyzed recently?
Hm... there was something like auto analyzer in serverlog when I started it
the first time, but I didn't mention that.
Should
> Unfortunately had to downgrade back to 8.3. Now having troubles with
> that and still solving them.
>
> For future upgrade, what are the basic steps?
1. create database
2. dump the data from the old database
3. load the data into the new database
4. analyze etc. (I prefer to do this manually at t
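Spelled out with commands (the names and port numbers are placeholders, and
it is generally best to run the newer version's pg_dump against the old
server), the steps could look roughly like:

  createdb -p 5433 mydb                      # database in the new 8.4 cluster
  pg_dump -p 5432 mydb | psql -p 5433 mydb   # dump from old cluster, load into new
  vacuumdb -p 5433 --analyze mydb            # analyze the freshly loaded data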
> Hi, list. I've just upgraded pgsql from 8.3 to 8.4. I've used pgtune
> before and everything worked fine for me.
>
> And now I have ~93% CPU load. Here are the changed config values:
>
> default_statistics_target = 50
> maintenance_work_mem = 1GB
> constraint_exclusion = on
> checkpoint_completion_target = 0.9
Hi, list. I've just upgraded pgsql from 8.3 to 8.4. I've used pgtune
before and everything worked fine for me.
And now I have ~93% CPU load. Here are the changed config values:
default_statistics_target = 50
maintenance_work_mem = 1GB
constraint_exclusion = on
checkpoint_completion_target = 0.9
effe