Marking this patch as returned with feedback for this CF, moving it to
the next one. I doubt that there will be much progress here for the
next couple of days, so let's try at least to get something for this
release cycle.
--
Michael
--
Sent via pgsql-hackers mailing list
On Fri, Jan 2, 2015 at 11:52 AM, Bruce Momjian br...@momjian.us wrote:
OK, so given your stats, the feature gives a 12.5% reduction in I/O. If
that is significant, shouldn't we see a performance improvement? If we
don't see a performance improvement, is I/O reduction worthwhile? Is it
On Fri, Jan 9, 2015 at 9:49 PM, Rahila Syed rahilasyed...@gmail.com wrote:
So this test can be used to evaluate how shorter records influence
performance since the master waits for flush confirmation from the
standby, right?
Yes. This test can help measure performance improvement due to reduced I/O
on standby as master waits for WAL records flush on standby.
Isn't
On Thu, Jan 8, 2015 at 11:59 PM, Rahila Syed rahilasyed...@gmail.com wrote:
Hello,
Below are performance numbers in case of synchronous replication with and
without fpw compression using latest version of patch(version 14). The patch
helps improve performance considerably.
Both master and standby are on the same machine in order to get numbers
independent of network
On Sat, Jan 3, 2015 at 1:52 AM, Bruce Momjian br...@momjian.us wrote:
On Fri, Jan 2, 2015 at 10:15:57AM -0600, k...@rice.edu wrote:
On Fri, Jan 02, 2015 at 01:01:06PM +0100, Andres Freund wrote:
On 2014-12-31 16:09:31 -0500, Bruce Momjian wrote:
I still don't understand the value of adding
On Sat, Jan 3, 2015 at 2:24 AM, Bruce Momjian br...@momjian.us wrote:
On Fri, Jan 2, 2015 at 02:18:12PM -0300, Claudio Freire wrote:
On Fri, Jan 2, 2015 at 2:11 PM, Andres Freund and...@2ndquadrant.com wrote:
, I now see the compression patch as something that has negatives, so
has to be
On Sat, Jan 3, 2015 at 1:52 AM, Bruce Momjian br...@momjian.us wrote:
I suggest we at least document that this feature as mostly useful for
I/O reduction, and maybe say CPU usage and performance might be
negatively impacted.
FWIW, that's mentioned in the documentation included in the patch..
--
On 2014-12-31 16:09:31 -0500, Bruce Momjian wrote:
I still don't understand the value of adding WAL compression, given the
high CPU usage and minimal performance improvement. The only big
advantage is WAL storage, but again, why not just compress the WAL file
when archiving.
before: pg_xlog
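Bruce's alternative, compressing whole WAL segments at archive time rather than inside each record, can be sketched as follows. This is a minimal Python sketch with gzip standing in for whatever archive_command would invoke; the segment name and the 1 MiB size are invented for illustration, real segments are 16 MB:

```python
import gzip
import os
import tempfile

# Stand-in for a WAL segment: 1 MiB of zero-heavy data, loosely mimicking
# pages with lots of blank space.
segment = b"\x00" * (1 << 20)

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "000000010000000000000001")  # invented segment name
    dst = src + ".gz"
    with open(src, "wb") as f:
        f.write(segment)
    # Compress at "archive time", leaving the live WAL stream untouched --
    # no per-record CPU cost on the hot write path.
    with open(src, "rb") as fin, gzip.open(dst, "wb") as fout:
        fout.write(fin.read())
    ratio = os.path.getsize(dst) / os.path.getsize(src)

print(f"archived size is {ratio:.2%} of the original")
```

In a real setup this would live behind archive_command; the point is only that archive-time compression costs no foreground CPU, but also reduces no I/O before the segment reaches the archive.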
On Fri, Jan 2, 2015 at 10:15:57AM -0600, k...@rice.edu wrote:
On Fri, Jan 02, 2015 at 01:01:06PM +0100, Andres Freund wrote:
On 2014-12-31 16:09:31 -0500, Bruce Momjian wrote:
I still don't understand the value of adding WAL compression, given the
high CPU usage and minimal performance
On 2015-01-02 12:06:33 -0500, Bruce Momjian wrote:
On Fri, Jan 2, 2015 at 05:55:52PM +0100, Andres Freund wrote:
On 2015-01-02 11:52:42 -0500, Bruce Momjian wrote:
Why are we not seeing the 33% compression and 15% performance
improvement he saw? What am I missing here?
To see
On Fri, Jan 2, 2015 at 2:11 PM, Andres Freund and...@2ndquadrant.com wrote:
, I now see the compression patch as something that has negatives, so
has to be set by the user, and only wins in certain cases. I am
disappointed, and am trying to figure out how this became such a
marginal win for
On Fri, Jan 2, 2015 at 05:55:52PM +0100, Andres Freund wrote:
On 2015-01-02 11:52:42 -0500, Bruce Momjian wrote:
Why are we not seeing the 33% compression and 15% performance
improvement he saw? What am I missing here?
To see performance improvements something needs to be the bottleneck.
On Fri, Jan 02, 2015 at 01:01:06PM +0100, Andres Freund wrote:
On 2014-12-31 16:09:31 -0500, Bruce Momjian wrote:
I still don't understand the value of adding WAL compression, given the
high CPU usage and minimal performance improvement. The only big
advantage is WAL storage, but again,
On Fri, Jan 2, 2015 at 06:11:29PM +0100, Andres Freund wrote:
My negativity is not that I don't want it, but I want to understand why
it isn't better than I remembered. You are basically telling me it was
always a marginal win. :-( Boohoo!
No, I didn't. I told you that *IN ONE
On 2015-01-02 11:52:42 -0500, Bruce Momjian wrote:
Why are we not seeing the 33% compression and 15% performance
improvement he saw? What am I missing here?
To see performance improvements something needs to be the bottleneck. If
WAL writes/flushes aren't that in the tested scenario, you won't
On Fri, Jan 2, 2015 at 02:18:12PM -0300, Claudio Freire wrote:
On Fri, Jan 2, 2015 at 2:11 PM, Andres Freund and...@2ndquadrant.com wrote:
, I now see the compression patch as something that has negatives, so
has to be set by the user, and only wins in certain cases. I am
disappointed,
* Bruce Momjian (br...@momjian.us) wrote:
To be specific, desirable in streaming replication scenarios that don't
use SSL compression. (What percentage is that?) Is it something we
should mention in the docs for this feature?
Considering how painful the SSL renegotiation problems were and
On Tue, Dec 30, 2014 at 01:27:44PM +0100, Andres Freund wrote:
On 2014-12-30 21:23:38 +0900, Michael Paquier wrote:
On Tue, Dec 30, 2014 at 6:21 PM, Jeff Davis pg...@j-davis.com wrote:
On Fri, 2013-08-30 at 09:57 +0300, Heikki Linnakangas wrote:
Speeding up the CRC calculation obviously
On Thu, Jan 1, 2015 at 2:39 AM, Bruce Momjian br...@momjian.us wrote:
On Tue, Dec 30, 2014 at 01:27:44PM +0100, Andres Freund wrote:
On 2014-12-30 21:23:38 +0900, Michael Paquier wrote:
On Tue, Dec 30, 2014 at 6:21 PM, Jeff Davis pg...@j-davis.com wrote:
On Fri, 2013-08-30 at 09:57
On Thu, Jan 1, 2015 at 2:10 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Thu, Jan 1, 2015 at 2:39 AM, Bruce Momjian br...@momjian.us wrote:
So why are you bringing it up? That's not an argument for anything,
except not doing it in such a simplistic way.
I still don't understand the
On Fri, 2013-08-30 at 09:57 +0300, Heikki Linnakangas wrote:
Speeding up the CRC calculation obviously won't help with the WAL volume
per se, ie. you still generate the same amount of WAL that needs to be
shipped in replication. But then again, if all you want to do is to
reduce the volume,
On Tue, Dec 30, 2014 at 6:21 PM, Jeff Davis pg...@j-davis.com wrote:
On Fri, 2013-08-30 at 09:57 +0300, Heikki Linnakangas wrote:
Speeding up the CRC calculation obviously won't help with the WAL volume
per se, ie. you still generate the same amount of WAL that needs to be
shipped in
On 2014-12-30 21:23:38 +0900, Michael Paquier wrote:
On Tue, Dec 30, 2014 at 6:21 PM, Jeff Davis pg...@j-davis.com wrote:
On Fri, 2013-08-30 at 09:57 +0300, Heikki Linnakangas wrote:
Speeding up the CRC calculation obviously won't help with the WAL volume
per se, ie. you still generate the
On Tue, Dec 9, 2014 at 10:45 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Mon, Dec 8, 2014 at 3:17 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 8 December 2014 at 11:46, Michael Paquier michael.paqu...@gmail.com
wrote:
I don't really like those new names, but I'd prefer
On Thu, Dec 11, 2014 at 10:33 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Tue, Dec 9, 2014 at 4:09 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Dec 7, 2014 at 9:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
* parameter should be SUSET - it doesn't *need* to be set only at
On Fri, Dec 12, 2014 at 10:23 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Dec 11, 2014 at 10:33 PM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Tue, Dec 9, 2014 at 4:09 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Dec 7, 2014 at 9:30 PM, Simon Riggs
On Fri, Dec 12, 2014 at 11:32 PM, Robert Haas robertmh...@gmail.com wrote:
On Fri, Dec 12, 2014 at 9:15 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
I just meant that the scratch buffers used to temporarily store the
compressed and uncompressed data should be palloc'd all the time,
On Fri, Dec 12, 2014 at 9:15 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
I just meant that the scratch buffers used to temporarily store the
compressed and uncompressed data should be palloc'd all the time, even
if the switch is off.
If they're fixed size, you can just put them on the
On Fri, Dec 12, 2014 at 9:34 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
I don't think that's a cost worth caring about.
OK, I thought it was.
Space on the heap that never gets used is basically free. The OS
won't actually allocate physical memory unless the pages are actually
On Tue, Dec 9, 2014 at 4:09 AM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Dec 7, 2014 at 9:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
* parameter should be SUSET - it doesn't *need* to be set only at
server start since all records are independent of each other
Why not USERSET?
On Tue, Dec 9, 2014 at 2:15 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Mon, Dec 8, 2014 at 3:17 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 8 December 2014 at 11:46, Michael Paquier michael.paqu...@gmail.com
wrote:
I don't really like those new names, but I'd prefer
On 8 December 2014 at 11:46, Michael Paquier michael.paqu...@gmail.com wrote:
* ideally we'd like to be able to differentiate the types of usage.
which then allows the user to control the level of compression
depending upon the type of action. My first cut at what those settings
should be are
On Sun, Dec 7, 2014 at 9:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
* parameter should be SUSET - it doesn't *need* to be set only at
server start since all records are independent of each other
Why not USERSET? There's no point in trying to prohibit users from
doing things that will
On 2014-12-08 14:09:19 -0500, Robert Haas wrote:
records, just fpis. There is no evidence that we even want to compress
other record types, nor that our compression mechanism is effective at
doing so. Simple = keep name as compress_full_page_writes
Quite right.
I don't really agree with
On Mon, Dec 8, 2014 at 2:21 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2014-12-08 14:09:19 -0500, Robert Haas wrote:
records, just fpis. There is no evidence that we even want to compress
other record types, nor that our compression mechanism is effective at
doing so. Simple = keep
On 12/08/2014 09:21 PM, Andres Freund wrote:
I still think that just compressing the whole record if it's above a
certain size is going to be better than compressing individual
parts. Michael argued that that'd be complicated because of the varying
size of the required 'scratch space'. I don't
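The whole-record approach Andres describes can be sketched as follows. This is an illustrative Python sketch with zlib standing in for pglz; the threshold value and the function name are invented, not the patch's actual logic:

```python
import zlib

COMPRESS_THRESHOLD = 128  # hypothetical cutoff; not a real server setting

def maybe_compress_record(record: bytes) -> tuple[bool, bytes]:
    """Compress the whole WAL record only when it exceeds a size cutoff,
    instead of compressing each block image separately."""
    if len(record) <= COMPRESS_THRESHOLD:
        return False, record
    compressed = zlib.compress(record)
    if len(compressed) >= len(record):
        # Incompressible payload: keep the raw record.
        return False, record
    return True, compressed
```

One appeal of this shape is that a single scratch buffer sized for the largest possible record suffices, which is exactly the "varying scratch space" question raised in the thread.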
On Tue, Dec 9, 2014 at 5:33 AM, Heikki Linnakangas
hlinnakan...@vmware.com wrote:
On 12/08/2014 09:21 PM, Andres Freund wrote:
I still think that just compressing the whole record if it's above a
certain size is going to be better than compressing individual
parts. Michael argued that that'd
On 9 December 2014 at 04:09, Robert Haas robertmh...@gmail.com wrote:
On Sun, Dec 7, 2014 at 9:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
* parameter should be SUSET - it doesn't *need* to be set only at
server start since all records are independent of each other
Why not USERSET?
On 9 December 2014 at 04:21, Andres Freund and...@2ndquadrant.com wrote:
On 2014-12-08 14:09:19 -0500, Robert Haas wrote:
records, just fpis. There is no evidence that we even want to compress
other record types, nor that our compression mechanism is effective at
doing so. Simple = keep
On Mon, Dec 8, 2014 at 3:17 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 8 December 2014 at 11:46, Michael Paquier michael.paqu...@gmail.com
wrote:
I don't really like those new names, but I'd prefer
wal_compression_level if we go down that road with 'none' as default
value. We may
On Thu, Dec 4, 2014 at 8:37 PM, Michael Paquier wrote:
I pondered something that Andres mentioned upthread: we may not do the
compression in WAL record only for blocks, but also at record level. Hence
joining the two ideas together I think that we should definitely have
a different GUC to control
On Mon, Dec 8, 2014 at 11:30 AM, Simon Riggs si...@2ndquadrant.com wrote:
* parameter should be SUSET - it doesn't *need* to be set only at
server start since all records are independent of each other
Check.
* ideally we'd like to be able to differentiate the types of usage.
which then allows
On Wed, Jun 11, 2014 at 10:05 AM, Michael Paquier
michael.paqu...@gmail.com wrote:
On Tue, Jun 10, 2014 at 11:49 PM, Rahila Syed rahilasye...@gmail.com wrote:
Hello,
In order to facilitate changing of compression algorithms and to be able to
recover using WAL records compressed with
On Wed, Jun 11, 2014 at 4:19 PM, Fujii Masao masao.fu...@gmail.com wrote:
IIUC even when we adopt only one algorithm, at least one additional bit is
necessary to see whether this backup block is compressed or not.
This flag is necessary only for backup block, so there is no need to use
the
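Fujii's point, that one spare flag bit per backup block is enough to mark whether its image is compressed, can be illustrated like this. The flag names and values here are invented for illustration; only the idea comes from the thread:

```python
# Invented flag values: one bit says a block carries a full-page image,
# a second spare bit says that image is compressed.
BKPBLOCK_HAS_IMAGE = 0x04
BKPIMAGE_IS_COMPRESSED = 0x02

def set_compressed(flags: int) -> int:
    # Mark the backup-block image as compressed without disturbing
    # the other flag bits.
    return flags | BKPIMAGE_IS_COMPRESSED

def is_compressed(flags: int) -> bool:
    return bool(flags & BKPIMAGE_IS_COMPRESSED)
```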
Hello,
In order to facilitate changing of compression algorithms and to be able
to recover using WAL records compressed with different compression
algorithms, information about compression algorithm can be stored in WAL
record.
XLOG record header has 2 to 4 padding bytes in order to align the
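Rahila's suggestion, recording the compression algorithm in the header's padding bytes, could look roughly like this. This is a Python struct sketch; the field layout and the algorithm identifiers are invented for illustration and do not match the actual XLogRecord layout:

```python
import struct

# Invented algorithm identifiers for illustration.
COMPRESSION_NONE, COMPRESSION_PGLZ, COMPRESSION_LZ4, COMPRESSION_SNAPPY = range(4)

def pack_header(xl_tot_len: int, xl_info: int, algo: int) -> bytes:
    # "<IBBxx": record length, info byte, then one former padding byte
    # carrying the algorithm ID, with two padding bytes still left for
    # alignment -- so the header size does not grow.
    return struct.pack("<IBBxx", xl_tot_len, xl_info, algo)

def unpack_algo(header: bytes) -> int:
    _, _, algo = struct.unpack("<IBBxx", header)
    return algo
```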
On Tue, Jun 10, 2014 at 11:49 PM, Rahila Syed rahilasye...@gmail.com wrote:
Hello,
In order to facilitate changing of compression algorithms and to be able to
recover using WAL records compressed with different compression algorithms,
information about compression algorithm can be stored
On Thu, May 29, 2014 at 7:21 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 29 May 2014 01:07, Bruce Momjian br...@momjian.us wrote:
On Wed, May 28, 2014 at 04:04:13PM +0100, Simon Riggs wrote:
On 28 May 2014 15:34, Fujii Masao masao.fu...@gmail.com wrote:
Also, compress_backup_block GUC
On 29 May 2014 01:07, Bruce Momjian br...@momjian.us wrote:
On Wed, May 28, 2014 at 04:04:13PM +0100, Simon Riggs wrote:
On 28 May 2014 15:34, Fujii Masao masao.fu...@gmail.com wrote:
Also, compress_backup_block GUC needs to be merged with full_page_writes.
Basically I agree with you
Thanks for extending and revising the FPW-compress patch! Could you add
your patch into next CF?
Sure. I will make improvements and add it to next CF.
Isn't it worth measuring the recovery performance for each compression
algorithm?
Yes I will post this soon.
On Wed, May 28, 2014 at 8:04 PM,
On Thu, May 29, 2014 at 11:21:44AM +0100, Simon Riggs wrote:
Uh, how would that work if you want to compress the background_FPWs?
Use compressed_background_FPWs?
We've currently got 1 technique for torn page protection, soon to have
2 and with a 3rd on the horizon and likely to receive
On Tue, May 27, 2014 at 12:57 PM, Rahila Syed rahilasyed...@gmail.com wrote:
Hello All,
0001-CompressBackupBlock_snappy_lz4_pglz extends patch on compression of
full page writes to include LZ4 and Snappy. Changes include making
compress_backup_block GUC from boolean to enum. Value of the GUC
On 28 May 2014 15:34, Fujii Masao masao.fu...@gmail.com wrote:
Also, compress_backup_block GUC needs to be merged with full_page_writes.
Basically I agree with you because I don't want to add a new GUC very similar to
the existing one.
But could you imagine the case where full_page_writes =
On Wed, May 28, 2014 at 04:04:13PM +0100, Simon Riggs wrote:
On 28 May 2014 15:34, Fujii Masao masao.fu...@gmail.com wrote:
Also, compress_backup_block GUC needs to be merged with full_page_writes.
Basically I agree with you because I don't want to add a new GUC very similar
to
the
Hello All,
0001-CompressBackupBlock_snappy_lz4_pglz extends patch on compression of
full page writes to include LZ4 and Snappy. Changes include making
compress_backup_block GUC from boolean to enum. Value of the GUC can be
OFF, pglz, snappy or lz4 which can be used to turn off compression or
On Sun, May 11, 2014 at 7:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 30 August 2013 04:55, Fujii Masao masao.fu...@gmail.com wrote:
My idea is very simple, just compress FPW because FPW is
a big part of WAL. I used pglz_compress() as a compression method,
but you might think that
On Tue, May 13, 2014 at 3:33 AM, Fujii Masao masao.fu...@gmail.com wrote:
On Sun, May 11, 2014 at 7:30 PM, Simon Riggs si...@2ndquadrant.com wrote:
On 30 August 2013 04:55, Fujii Masao masao.fu...@gmail.com wrote:
My idea is very simple, just compress FPW because FPW is
a big part of WAL. I
Hello,
What kind of error did you get at the server crash? Assertion error? If yes,
it might be because of the conflict with
4a170ee9e0ebd7021cb1190fabd5b0cbe2effb8e.
This commit forbids palloc from being called within a critical section, but
the patch does that and then the assertion error
On 30 August 2013 04:55, Fujii Masao masao.fu...@gmail.com wrote:
My idea is very simple, just compress FPW because FPW is
a big part of WAL. I used pglz_compress() as a compression method,
but you might think that other method is better. We can add
something like FPW-compression-hook for
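The "FPW-compression-hook" idea Fujii floats, a replaceable function slot so users can plug in their preferred algorithm, can be sketched in Python. zlib stands in for pglz_compress() and all names here are invented:

```python
import zlib

def default_fpw_compress(page: bytes) -> bytes:
    # Stand-in for pglz_compress() applied to an 8 kB full-page image.
    return zlib.compress(page)

# The "hook": an extension would overwrite this slot with its own function.
fpw_compress_hook = default_fpw_compress

def compress_full_page_write(page: bytes) -> bytes:
    return fpw_compress_hook(page)

# A zeroed 8 kB page compresses extremely well.
page = b"\x00" * 8192
compressed = compress_full_page_write(page)
```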
Hello,
Done. Attached is the updated version of the patch.
I was trying to check WAL reduction using this patch on latest available git
version of Postgres using JDBC runner with tpcc benchmark.
patching_problems.txt
http://postgresql.1045698.n5.nabble.com/file/n5803482/patching_problems.txt
Sameer Thakur samthaku...@gmail.com writes:
I was trying to check WAL reduction using this patch on latest available git
version of Postgres using JDBC runner with tpcc benchmark.
patching_problems.txt
http://postgresql.1045698.n5.nabble.com/file/n5803482/patching_problems.txt
I did
On Sat, May 10, 2014 at 8:33 PM, Sameer Thakur samthaku...@gmail.com wrote:
Hello,
Done. Attached is the updated version of the patch.
I was trying to check WAL reduction using this patch on latest available git
version of Postgres using JDBC runner with tpcc benchmark.
patching_problems.txt
On Fri, Oct 11, 2013 at 12:30:41PM +0900, Fujii Masao wrote:
Sure. To be honest, when I received the same request from Andres,
I did that benchmark. But unfortunately because of machine trouble,
I could not report it, yet. Will do that again.
Here is the benchmark result:
* Result
On Sat, Feb 1, 2014 at 10:22 AM, Bruce Momjian br...@momjian.us wrote:
On Fri, Oct 11, 2013 at 12:30:41PM +0900, Fujii Masao wrote:
Sure. To be honest, when I received the same request from Andres,
I did that benchmark. But unfortunately because of machine trouble,
I could not report it,
On Mon, Oct 21, 2013 at 11:52 PM, Fujii Masao masao.fu...@gmail.com wrote:
So, our consensus is to introduce the hooks for FPW compression so that
users can freely select their own best compression algorithm?
Also, probably we need to implement at least one compression contrib module
using
Robert Haas robertmh...@gmail.com writes:
On Mon, Oct 21, 2013 at 11:52 PM, Fujii Masao masao.fu...@gmail.com wrote:
So, our consensus is to introduce the hooks for FPW compression so that
users can freely select their own best compression algorithm?
Also, probably we need to implement at
On Thu, Oct 24, 2013 at 11:07:38AM -0400, Robert Haas wrote:
On Mon, Oct 21, 2013 at 11:52 PM, Fujii Masao masao.fu...@gmail.com wrote:
So, our consensus is to introduce the hooks for FPW compression so that
users can freely select their own best compression algorithm?
Also, probably we
On Thu, Oct 24, 2013 at 11:40 AM, k...@rice.edu k...@rice.edu wrote:
On Thu, Oct 24, 2013 at 11:07:38AM -0400, Robert Haas wrote:
On Mon, Oct 21, 2013 at 11:52 PM, Fujii Masao masao.fu...@gmail.com wrote:
So, our consensus is to introduce the hooks for FPW compression so that
users can
On Thu, Oct 24, 2013 at 12:22:59PM -0400, Robert Haas wrote:
On Thu, Oct 24, 2013 at 11:40 AM, k...@rice.edu k...@rice.edu wrote:
On Thu, Oct 24, 2013 at 11:07:38AM -0400, Robert Haas wrote:
On Mon, Oct 21, 2013 at 11:52 PM, Fujii Masao masao.fu...@gmail.com
wrote:
So, our consensus is
On Thu, Oct 24, 2013 at 8:37 PM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Oct 21, 2013 at 11:52 PM, Fujii Masao masao.fu...@gmail.com wrote:
So, our consensus is to introduce the hooks for FPW compression so that
users can freely select their own best compression algorithm?
Also,
On 2013-10-22 12:52:09 +0900, Fujii Masao wrote:
So, our consensus is to introduce the hooks for FPW compression so that
users can freely select their own best compression algorithm?
No, I don't think that's consensus yet. If you want to make it
configurable on that level you need to have:
1)
(2013/10/22 12:52), Fujii Masao wrote:
On Tue, Oct 22, 2013 at 12:47 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Mon, Oct 21, 2013 at 4:40 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
(2013/10/19 14:58), Amit Kapila wrote:
On Tue, Oct 15, 2013 at 11:41 AM, KONDO Mitsumasa
On Wed, Oct 23, 2013 at 7:05 AM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
(2013/10/22 12:52), Fujii Masao wrote:
On Tue, Oct 22, 2013 at 12:47 PM, Amit Kapila amit.kapil...@gmail.com
wrote:
On Mon, Oct 21, 2013 at 4:40 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
(2013/10/19 14:58), Amit Kapila wrote:
On Tue, Oct 15, 2013 at 11:41 AM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
I think in general snappy is mostly preferred for its low CPU
usage, not for compression, but overall my vote is also for snappy.
I think low CPU usage is the
On Mon, Oct 21, 2013 at 4:40 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
(2013/10/19 14:58), Amit Kapila wrote:
On Tue, Oct 15, 2013 at 11:41 AM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
In general, my thinking is that we should prefer compression to reduce
IO (WAL
On Tue, Oct 22, 2013 at 12:47 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Mon, Oct 21, 2013 at 4:40 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
(2013/10/19 14:58), Amit Kapila wrote:
On Tue, Oct 15, 2013 at 11:41 AM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
In
On Tue, Oct 22, 2013 at 9:22 AM, Fujii Masao masao.fu...@gmail.com wrote:
On Tue, Oct 22, 2013 at 12:47 PM, Amit Kapila amit.kapil...@gmail.com wrote:
On Mon, Oct 21, 2013 at 4:40 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
(2013/10/19 14:58), Amit Kapila wrote:
On Tue, Oct 15,
On Tue, Oct 15, 2013 at 11:41 AM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
(2013/10/15 13:33), Amit Kapila wrote:
Snappy is good mainly for un-compressible data, see the link below:
On Wed, Oct 16, 2013 at 01:42:34PM +0900, KONDO Mitsumasa wrote:
(2013/10/15 22:01), k...@rice.edu wrote:
lz4 is also a very nice algorithm with 33% better compression
performance than snappy and 2X the decompression performance in some
benchmarks also with a bsd license:
(2013/10/15 13:33), Amit Kapila wrote:
Snappy is good mainly for un-compressible data, see the link below:
http://www.postgresql.org/message-id/CAAZKuFZCOCHsswQM60ioDO_hk12tA7OG3YcJA8v=4yebmoa...@mail.gmail.com
This result was obtained on an ARM architecture, not a general-purpose CPU.
Please see detail
On Tue, Oct 15, 2013 at 03:11:22PM +0900, KONDO Mitsumasa wrote:
(2013/10/15 13:33), Amit Kapila wrote:
Snappy is good mainly for un-compressible data, see the link below:
http://www.postgresql.org/message-id/CAAZKuFZCOCHsswQM60ioDO_hk12tA7OG3YcJA8v=4yebmoa...@mail.gmail.com
This result was
(2013/10/15 22:01), k...@rice.edu wrote:
lz4 is also a very nice algorithm with 33% better compression
performance than snappy and 2X the decompression performance in some
benchmarks also with a bsd license:
https://code.google.com/p/lz4/
If we judge only performance, we will select
(2013/10/13 0:14), Amit Kapila wrote:
On Fri, Oct 11, 2013 at 10:36 PM, Andres Freund and...@2ndquadrant.com wrote:
But maybe pglz is just not a good fit for this, it really
isn't a very good algorithm in this day and age.
+1. We need a compression algorithm faster than pglz, which is
On Tue, Oct 15, 2013 at 6:30 AM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
(2013/10/13 0:14), Amit Kapila wrote:
On Fri, Oct 11, 2013 at 10:36 PM, Andres Freund and...@2ndquadrant.com
wrote:
But maybe pglz is just not a good fit for this, it really
isn't a very good algorithm in
On 11/10/13 19:06, Andres Freund wrote:
On 2013-10-11 09:22:50 +0530, Amit Kapila wrote:
I think it will be difficult to prove, using any compression
algorithm, that it compresses in most scenarios.
In many cases it can so happen that the WAL will also not be reduced
and tps can also
On Fri, Oct 11, 2013 at 10:36 PM, Andres Freund and...@2ndquadrant.com wrote:
On 2013-10-11 09:22:50 +0530, Amit Kapila wrote:
I think it will be difficult to prove, using any compression
algorithm, that it compresses in most scenarios.
In many cases it can so happen that the WAL
On 10 October 2013 23:06 Fujii Masao wrote:
On Wed, Oct 9, 2013 at 1:35 PM, Haribabu kommi haribabu.ko...@huawei.com
wrote:
Thread-1
Threads-2
Head code FPW
On 2013-10-11 09:22:50 +0530, Amit Kapila wrote:
I think it will be difficult to prove, using any compression
algorithm, that it compresses in most scenarios.
In many cases it can so happen that the WAL will also not be reduced
and tps can also come down if the data is
Hi,
I did a partial review of this patch, wherein I focused on the patch and
the code itself, as I saw other contributors already did some testing on
it, so that we know it applies cleanly and works to some good extent.
Fujii Masao masao.fu...@gmail.com writes:
In this patch, full_page_writes
On Tue, Oct 8, 2013 at 10:07 PM, KONDO Mitsumasa
kondo.mitsum...@lab.ntt.co.jp wrote:
Hi,
I tested dbt-2 benchmark in single instance and synchronous replication.
Thanks!
Unfortunately, my benchmark results did not show many differences...
* Test server
Server: HP Proliant DL360 G7
On Wed, Oct 9, 2013 at 1:35 PM, Haribabu kommi
haribabu.ko...@huawei.com wrote:
On 08 October 2013 18:42 KONDO Mitsumasa wrote:
(2013/10/08 20:13), Haribabu kommi wrote:
I will test with sync_commit=on mode and provide the test results.
OK. Thanks!
Pgbench test results with synchronous_commit
On Fri, Oct 11, 2013 at 1:20 AM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
Hi,
I did a partial review of this patch, wherein I focused on the patch and
the code itself, as I saw other contributors already did some testing on
it, so that we know it applies cleanly and works to some good
Hi,
On 2013-10-11 03:44:01 +0900, Fujii Masao wrote:
I'm afraid that the patch has only limited effects in WAL reduction and
performance improvement unless the database contains highly-compressible
data such as columns of large blank characters. It really depends on the contents
of the database. So,
On Fri, Oct 11, 2013 at 3:44 AM, Fujii Masao masao.fu...@gmail.com wrote:
On Fri, Oct 11, 2013 at 1:20 AM, Dimitri Fontaine
dimi...@2ndquadrant.fr wrote:
Hi,
I did a partial review of this patch, wherein I focused on the patch and
the code itself, as I saw other contributors already did some
On Fri, Oct 11, 2013 at 8:35 AM, Andres Freund and...@2ndquadrant.com wrote:
Hi,
On 2013-10-11 03:44:01 +0900, Fujii Masao wrote:
I'm afraid that the patch has only limited effects in WAL reduction and
performance improvement unless the database contains highly-compressible
data like large
On Fri, Oct 11, 2013 at 5:05 AM, Andres Freund and...@2ndquadrant.com wrote:
Hi,
On 2013-10-11 03:44:01 +0900, Fujii Masao wrote:
I'm afraid that the patch has only limited effects in WAL reduction and
performance improvement unless the database contains highly-compressible
data like large
(2013/10/08 17:33), Haribabu kommi wrote:
The checkpoint_timeout and checkpoint_segments are increased to make sure no
checkpoint happens during the test run.
Your setting easily triggers checkpoints even with checkpoint_segments = 256. I don't
know the number of disks in your test server; in my test
On 2013-09-11 12:43:21 +0200, Andres Freund wrote:
On 2013-09-11 19:39:14 +0900, Fujii Masao wrote:
* Benchmark
pgbench -c 32 -j 4 -T 900 -M prepared
scaling factor: 100
checkpoint_segments = 1024
checkpoint_timeout = 5min
(every checkpoint during benchmark were triggered
On 08 October 2013 15:22 KONDO Mitsumasa wrote:
(2013/10/08 17:33), Haribabu kommi wrote:
The checkpoint_timeout and checkpoint_segments are increased to make sure no
checkpoint happens during the test run.
Your setting easily triggers checkpoints even with checkpoint_segments = 256. I don't
know